Might People on the Internet Sometimes Tell the Truth?

Lies and Scott Alexander

Scott Alexander wrote a very good post called Might People on the Internet Sometimes Lie, which I have linked to several times in the past. In the first linked post (Lies, Religion, and Miscalibrated Priors), I answered Scott’s question (why it is hard to believe that people are lying even when they probably are), but also pointed out that “either they are lying or the thing actually happened in such and such a specific way” is a false dichotomy in any case.

In the example in my post, I spoke about Arman Razaali and his claim that he shuffled a deck of cards for 30 minutes and ended up with the deck in its original order. As I stated in the post,

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence.
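The larger of those two numbers is easy to check. Assuming a genuinely random shuffle, the chance of ending up with one particular ordering of a 52-card deck is 1 in 52!, which is indeed on the order of 10^67. A minimal sketch:

```python
import math

# Probability that a genuinely random shuffle of a standard 52-card
# deck lands in one particular order (e.g. the deck's original order).
orderings = math.factorial(52)

print(f"52! = {orderings:.2e}")              # about 8.07e+67
print(f"probability = {1 / orderings:.2e}")  # about 1.24e-68
```

However the more lenient 1-in-10^40 figure was derived, both numbers are dwarfed by ordinary base rates of lying, which is the point of the quotation.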

But as I also stated there, those are not the only options. As it turns out, although my readers may have missed this, Razaali himself stumbled upon my post somewhat later and posted something in the comments there:

At first, I must say that I was a bit flustered when I saw this post come up when I was checking what would happen when I googled myself. But it’s an excellent read, exceptionally done with excellent analysis. Although I feel the natural urge to be offended by this, I’m really not. Your message is very clear, and it articulates the inner workings of the human mind very well, and in fact, I found that I would completely agree. Having lost access to that Quora account a month or two ago, I can’t look back at what I wrote. I can easily see how the answer [I] gave on Quora could very easily be seen as a lie, and if I read it with no context, I would probably think it was fake too. But having been there at the moment as I counted the cards, I am biased towards believing what I saw, even though I could have miscounted horrendously.

Does this sound like something written by one of Scott Alexander’s “annoying trolls”?

Not to me, anyway. I am aware that I am also disinclined for moral reasons to believe that Razaali was lying, for the reasons I stated in that post. Nonetheless, it seems fair to say that this comment fits better with some intermediate hypothesis (e.g. “it was mostly in order and he was mistaken”) rather than with the idea that “he was lying.”

Religion vs. UFOs

I participated in this exchange on Twitter:

Ross Douthat:

Of what use are our professionally-eccentric, no-heresy-too-wild reasoners like @robinhanson if they assume a priori that “spirits or creatures from other dimensions” are an inherently crazy idea?: https://overcomingbias.com/2021/05/ufos-say-govt-competence-is-either-surprisingly-high-or-surprisingly-low.html

Robin Hanson:

But we don’t want to present ourselves as finding any strange story as equally likely. Yes, we are willing to consider most anything, at least from a good source, & we disagree with others on which stories seem more plausible. But we present ourselves as having standards! 🙂

Me:

I think @DouthatNYT intended to hint that many religious experiences offer arguments for religions that are at least as strong as arguments from UFOs for aliens, and probably stronger.

I agree with him and find both unconvincing.

But find it very impressive you were willing to express those opinions.

Robin Hanson:

You can find videos on best recent evidence for ghosts, which to me looks much less persuasive than versions for UFOs. But evidence for non-ghost spirits, they don’t even bother to make videos for that, as there’s almost nothing.

Me:

It is just not true that there is “almost nothing.” E.g. see the discussion in my post here:

Miracles and Multiple Witnesses

Robin did not respond. Possibly he just did not want to spend more time on the matter. But I think there is also something else going on; engaging with this would suggest to people that he does not “have standards.” It is bad enough for his reputation if he talks about UFOs; it would be much worse if he engaged in a discussion about rosaries turning to gold, which sounds silly to most Catholics, let alone to non-Catholic Christians, people of other religions, and non-religious people.

But I meant what I said in that post, when I said, “these reports should be taken seriously.” Contrary to the debunkers, there is nothing silly about something being reported by thousands of people. It is possible that every one of those reports is a lie or a mistake. Likely, even. But I will not assume that this is the case when no one has even bothered to check.

Scott Alexander is probably one of the best bloggers writing today, and one of the most honest, and his approach to religious experiences is accordingly somewhat better. For example, although I was unfortunately unable to find the text just now, possibly because it was in a comment (and some of those threads have thousands of comments) and not in a post, he once spoke about the Miracle of the Sun at Fatima, and jokingly called it something like “a glitch in the matrix.” The implication was that (1) he does not believe in the religious explanation, but nonetheless (2) the typical “debunkings” are just not very plausible. I agree with this. There are some hints that there might be a natural explanation, but the suggestions are fairly stretched compared to the facts.

December 24th, 2010 – January 4th, 2011

What follows is a description of events that happened to me personally in the period named. They are facts. They are not lies. There is no distortion, not due to human memory failures or anything else. The account here is based on detailed records that I made at the time, which I still possess, and which I just reviewed today to ensure that there would be no mistake.

At that time I was a practicing Catholic. On December 24th, 2010, I started a novena to Mary. I was concerned about a friend’s vocation; I believed that they were called to religious life; they had thought the same for a long time but were beginning to change their mind. The intention of the novena was to respond to this situation.

I did not mention this novena to anyone at the time, or to anyone at all before the events described here.

The last day of the novena was January 1st, 2011, a Marian feast day. (It is a typical practice to end a novena on a feast day of the saint to whom the novena is directed.)

On January 4th, 2011, I had a conversation with the same friend. I made no mention at any point during this conversation of the above novena, and there is no way that they could have known about it, or at any rate no way that our debunking friends would consider “ordinary.”

They told me about events that happened to them on January 2nd, 2011.

Note that these events were second hand for me (narrated by my friend) and third hand for any readers this blog might have. This does not matter, however; since my friend had no idea about the novena, even if they were completely making it up (which I do not believe at all), it would be nearly as surprising.

When praying a novena, it is typical to expect the “answer to the prayer” on the last day or on the day after, as in an example online:

The Benedictine nuns of St Cecilia’s Abbey on the Isle of Wight (http://www.stceciliasabbey.org.uk) recently started a novena to Fr Doyle with the specific intention of finding some Irish vocations. Anybody with even a passing awareness of the Catholic Church in Ireland is aware that there is a deep vocations crisis. Well, the day after the novena ended, a young Irish lady in her 20’s arrived for a visit at the convent. Today, the Feast of the Immaculate Conception, she will start her time as a postulant at St Cecilia’s Abbey.

Some might dismiss this as coincidence. Those with faith will see it in a different light. Readers can make up their own minds. 

January 2nd, 2011, was the day after my novena ended, and the day to which my friend (unaware of the novena) attributed the following event:

They happened to meet with another person, one who was basically a stranger to them, but met through a mutual acquaintance (mutual to my friend and the stranger; unknown to me.) This person (the stranger) asked my friend to pray with her. She then told my friend that “Our Lady knows that you suffer a lot… She wants you to become a religious and she is afraid that you are going astray…”

Apart from a grammatical change for context, the above sentences are a direct quotation from my friend’s account. Note the relationship with the example quoted earlier: the answer came the day after the novena ended.

To be Continued

I may have more to say about these events, but for now I want to say two things:

(1) These events actually happened. The attitude of the debunkers is that if anything “extraordinary” ever happens, it is at best a psychological experience, not a question of the facts. This is just false, and this is what I referred to when I mentioned their second error in the previous post.

(2) I do not accept a religious explanation of these events (at any rate not in any sense that would imply that a body of religious doctrine is true as a whole.)

Mind of God

Reconciling Theism and Atheism

In his Dialogues Concerning Natural Religion, David Hume presents Philo as arguing that the disagreement between theists and atheists is merely verbal:

All men of sound reason are disgusted with verbal disputes, which abound so much in philosophical and theological inquiries; and it is found, that the only remedy for this abuse must arise from clear definitions, from the precision of those ideas which enter into any argument, and from the strict and uniform use of those terms which are employed. But there is a species of controversy, which, from the very nature of language and of human ideas, is involved in perpetual ambiguity, and can never, by any precaution or any definitions, be able to reach a reasonable certainty or precision. These are the controversies concerning the degrees of any quality or circumstance. Men may argue to all eternity, whether HANNIBAL be a great, or a very great, or a superlatively great man, what degree of beauty CLEOPATRA possessed, what epithet of praise LIVY or THUCYDIDES is entitled to, without bringing the controversy to any determination. The disputants may here agree in their sense, and differ in the terms, or vice versa; yet never be able to define their terms, so as to enter into each other’s meaning: Because the degrees of these qualities are not, like quantity or number, susceptible of any exact mensuration, which may be the standard in the controversy. That the dispute concerning Theism is of this nature, and consequently is merely verbal, or perhaps, if possible, still more incurably ambiguous, will appear upon the slightest inquiry. I ask the Theist, if he does not allow, that there is a great and immeasurable, because incomprehensible difference between the human and the divine mind: The more pious he is, the more readily will he assent to the affirmative, and the more will he be disposed to magnify the difference: He will even assert, that the difference is of a nature which cannot be too much magnified. 
I next turn to the Atheist, who, I assert, is only nominally so, and can never possibly be in earnest; and I ask him, whether, from the coherence and apparent sympathy in all the parts of this world, there be not a certain degree of analogy among all the operations of Nature, in every situation and in every age; whether the rotting of a turnip, the generation of an animal, and the structure of human thought, be not energies that probably bear some remote analogy to each other: It is impossible he can deny it: He will readily acknowledge it. Having obtained this concession, I push him still further in his retreat; and I ask him, if it be not probable, that the principle which first arranged, and still maintains order in this universe, bears not also some remote inconceivable analogy to the other operations of nature, and, among the rest, to the economy of human mind and thought. However reluctant, he must give his assent. Where then, cry I to both these antagonists, is the subject of your dispute? The Theist allows, that the original intelligence is very different from human reason: The Atheist allows, that the original principle of order bears some remote analogy to it. Will you quarrel, Gentlemen, about the degrees, and enter into a controversy, which admits not of any precise meaning, nor consequently of any determination? If you should be so obstinate, I should not be surprised to find you insensibly change sides; while the Theist, on the one hand, exaggerates the dissimilarity between the Supreme Being, and frail, imperfect, variable, fleeting, and mortal creatures; and the Atheist, on the other, magnifies the analogy among all the operations of Nature, in every period, every situation, and every position. Consider then, where the real point of controversy lies; and if you cannot lay aside your disputes, endeavour, at least, to cure yourselves of your animosity.

To what extent Hume actually agrees with this argument is not clear, and whether a dispute is verbal or real is itself a question like Hume’s questions about greatness or beauty; that is, it is a matter of degree. Few disagreements are entirely verbal. In any case, I largely agree with the claim that there is little real disagreement here. In response to a question on the about page of this blog, I referred to some remarks about God by Roderick Long:

Since my blog has wandered into theological territory lately, I thought it might be worth saying something about the existence of God.

When I’m asked whether I believe in God, I usually don’t know what to say – not because I’m unsure of my view, but because I’m unsure how to describe my view. But here’s a try.

I think the disagreement between theism and atheism is in a certain sense illusory – that when one tries to sort out precisely what theists are committed to and precisely what atheists are committed to, the two positions come to essentially the same thing, and their respective proponents have been fighting over two sides of the same shield.

Let’s start with the atheist. Is there any sense in which even the atheist is committed to recognising the existence of some sort of supreme, eternal, non-material reality that transcends and underlies everything else? Yes, there is: namely, the logical structure of reality itself.

Thus so long as the theist means no more than this by “God,” the theist and the atheist don’t really disagree.

Now the theist may think that by God she means something more than this. But likewise, before people knew that whales were mammals they thought that by “whale” they meant a kind of fish. What is the theist actually committed to meaning?

Well, suppose that God is not the logical structure of the universe. Then we may ask: in what relation does God stand to that structure, if not identity? There would seem to be two possibilities.

One is that God stands outside that structure, as its creator. But this “possibility” is unintelligible. Logic is a necessary condition of significant discourse; thus one cannot meaningfully speak of a being unconstrained by logic, or a time when logic’s constraints were not yet in place.

The other is that God stands within that structure, along with everything else. But this option, as Wittgenstein observed, would downgrade God to the status of being merely one object among others, one more fragment of contingency – and he would no longer be the greatest of all beings, since there would be something greater: the logical structure itself. (This may be part of what Plato meant in describing the Form of the Good as “beyond being.”)

The only viable option for the theist, then, is to identify God with the logical structure of reality. (Call this “theological logicism.”) But in that case the disagreement between the theist and the atheist dissolves.

It may be objected that the “reconciliation” I offer really favours the atheist over the theist. After all, what theist could be satisfied with a deity who is merely the logical structure of the universe? Yet in fact there is a venerable tradition of theists who proclaim precisely this. Thomas Aquinas, for example, proposed to solve the age-old questions “could God violate the laws of logic?” and “could God command something immoral?” by identifying God with Being and Goodness personified. Thus God is constrained by the laws of logic and morality, not because he is subject to them as to a higher power, but because they express his own nature, and he could not violate or alter them without ceasing to be God. Aquinas’ solution is, essentially, theological logicism; yet few would accuse Aquinas of having a watered-down or crypto-atheistic conception of deity. Why, then, shouldn’t theological logicism be acceptable to the theist?

A further objection may be raised: Aquinas of course did not stop at the identification of God with Being and Goodness, but went on to attribute to God various attributes not obviously compatible with this identification, such as personality and will. But if the logical structure of reality has personality and will, it will not be acceptable to the atheist; and if it does not have personality and will, then it will not be acceptable to the theist. So doesn’t my reconciliation collapse?

I don’t think so. After all, Aquinas always took care to insist that in attributing these qualities to God we are speaking analogically. God does not literally possess personality and will, at least if by those attributes we mean the same attributes that we humans possess; rather he possesses attributes analogous to ours. The atheist too can grant that the logical structure of reality possesses properties analogous to personality and will. It is only at the literal ascription of those attributes that the atheist must balk. No conflict here.

Yet doesn’t God, as understood by theists, have to create and sustain the universe? Perhaps so. But atheists too can grant that the existence of the universe depends on its logical structure and couldn’t exist for so much as an instant without it. So where’s the disagreement?

But doesn’t God have to be worthy of worship? Sure. But atheists, while they cannot conceive of worshipping a person, are generally much more open to the idea of worshipping a principle. Again theological logicism allows us to transcend the opposition between theists and atheists.

But what about prayer? Is the logical structure of reality something one could sensibly pray to? If so, it might seem, victory goes to the theist; and if not, to the atheist. Yet it depends what counts as prayer. Obviously it makes no sense to petition the logical structure of reality for favours; but this is not the only conception of prayer extant. In Science and Health, for example, theologian M. B. Eddy describes the activity of praying not as petitioning a principle but as applying a principle:

“Who would stand before a blackboard, and pray the principle of mathematics to solve the problem? The rule is already established, and it is our task to work out the solution. Shall we ask the divine Principle of all goodness to do His own work? His work is done, and we have only to avail ourselves of God’s rule in order to receive His blessing, which enables us to work out our own salvation.”

Is this a watered-down or “naturalistic” conception of prayer? It need hardly be so; as the founder of Christian Science, Eddy could scarcely be accused of underestimating the power of prayer! And similar conceptions of prayer are found in many eastern religions. Once again, theological logicism’s theistic credentials are as impeccable as its atheistic credentials.

Another possible objection is that whether identifying God with the logical structure of reality favours the atheist or the theist depends on how metaphysically robust a conception of “logical structure” one appeals to. If one thinks of reality’s logical structure in realist terms, as an independent reality in its own right, then the identification favours the theist; but if one instead thinks, in nominalist terms, that there’s nothing to logical structure over and above what it structures, then the identification favours the atheist.

This argument assumes, however, that the distinction between realism and nominalism is a coherent one. I’ve argued elsewhere (see here and here) that it isn’t; conceptual realism pictures logical structure as something imposed by the world on an inherently structureless mind (and so involves the incoherent notion of a structureless mind), while nominalism pictures logical structure as something imposed by the mind on an inherently structureless world (and so involves the equally incoherent notion of a structureless world). If the realism/antirealism dichotomy represents a false opposition, then the theist/atheist dichotomy does so as well. The difference between the two positions will then be only, as Wittgenstein says in another context, “one of battle cry.”

Long is trying too hard, perhaps. As I stated above, few disagreements are entirely verbal, so it would be strange to find no disagreement at all, and we could question some points here. Are atheists really open to worshiping a principle? Respecting, perhaps, but worshiping? A defender of Long, however, might say that “respect” and “worship” do not necessarily have any relevant difference here, and this is itself a merely verbal difference signifying a cultural difference. The theist uses “worship” to indicate that they belong to a religious culture, while the atheist uses “respect” to indicate that they do not. But it would not be easy to find a distinct difference in the actual meaning of the terms.

In any case, there is no need to prove that there is no difference at all, since without a doubt individual theists will disagree on various matters with individual atheists. The point made by both David Hume and Roderick Long stands at least in a general way: there is far less difference between the positions than people typically assume.

In an earlier post I discussed, among other things, whether the first cause should be called a “mind” or not, discussing St. Thomas’s position that it should be, and Plotinus’s position that it should not be. Along the lines of the argument in this post, perhaps this is really an argument about whether or not you should use a certain analogy, and the correct answer may be that it depends on your purposes.

But what if your purpose is simply to understand reality? Even if it is, it is often the case that you can understand various aspects of reality with various analogies, so this will not necessarily provide you with a definite answer. Still, someone might argue that you should not use a mental analogy with regard to the first cause because it will lead people astray. Thus, in a similar way, Richard Dawkins argued that one should not call the first cause “God” because it would mislead people:

Yes, I said, but it must have been simple and therefore, whatever else we call it, God is not an appropriate name (unless we very explicitly divest it of all the baggage that the word ‘God’ carries in the minds of most religious believers). The first cause that we seek must have been the simple basis for a self-bootstrapping crane which eventually raised the world as we know it into its present complex existence.

I will argue shortly that Dawkins was roughly speaking right about the way that the first cause works, although as I said in that earlier post, he did not have a strong argument for it other than his aesthetic sense and the kinds of explanation that he prefers. In any case, his concern with the name “God” is the “baggage” that it “carries in the minds of most religious believers.” That is, if we say, “There is a first cause, therefore God exists,” believers will assume that their concrete beliefs about God are correct.

In a similar way, someone could reasonably argue that speaking of God as a “mind” would tend to lead people into error by leading them to suppose that God would do the kinds of things that other minds, namely human ones, do. And this definitely happens. Thus for example, in his book Who Designed the Designer?, Michael Augros argues for the existence of God as a mind, and near the end of the book speculates about divine revelation:

I once heard of a certain philosopher who, on his deathbed, when asked whether he would become a Christian, admitted his belief in Aristotle’s “prime mover”, but not in Jesus Christ as the Son of God. This sort of acknowledgment of the prime mover, of some sort of god, still leaves most of our chief concerns unaddressed. Will X ever see her son again, now that the poor boy has died of cancer at age six? Will miserable and contrite Y ever be forgiven, somehow reconciled to the universe and made whole, after having killed a family while driving drunk? Will Z ever be brought to justice, having lived out his whole life laughing at the law while another person rotted in jail for the atrocities he committed? That there is a prime mover does not tell us with sufficient clarity. Even the existence of an all-powerful, all-knowing, all-good god does not enable us to fill in much detail. And so it seems reasonable to suppose that god has something more to say to us, in explicit words, and not only in the mute signs of creation. Perhaps he is waiting to talk to us, biding his time for the right moment. Perhaps he has already spoken, but we have not recognized his voice.

When we cast our eye about by the light of reason in this way, it seems there is room for faith in general, even if no particular faith can be “proved” true in precisely the same way that it can be “proved” that there is a god.

The idea is that given that God is a mind, it follows that it is fairly plausible that he would wish to speak to people. And perhaps that he would wish to establish justice through extraordinary methods, and that he might wish to raise people from the dead.

I think this is “baggage” carried over from Augros’s personal religious views. It is an anthropomorphic mistake, not merely in the sense that he does not have a good reason for such speculation, but in the sense that such a thing is demonstrably implausible. It is not that the divine motives are necessarily unknown to us, but that we can actually discover them, at least to some extent, and we will discover that they are not what he supposes.

Divine Motives

How might one know the divine motives? How does one read the mind of God?

Anything that acts at all does what it does ultimately because of what it is. This is an obvious point, like the point that the existence of something rather than nothing could not have some reason outside of being. In a similar way, “what is” is the only possible explanation for what is done, since there is nothing else there to be an explanation. And in every action, whether or not we are speaking of the subject in explicitly mental terms, we can always use the analogy of desires and goals. In the linked post, I quote St. Thomas as speaking of the human will as the “rational appetite,” and the natural tendency of other things as a “natural appetite.” If we break down the term “rational appetite,” the meaning is “the tendency to do something, because of having a reason to do it.” And this fits with my discussion of human will in various places, such as in this earlier post.

But where do those reasons come from? I gave an account of this here, arguing that rational goals are a secondary effect of the mind’s attempt to understand itself. Of course human goals are complex and have many factors, but this happens because what the mind is trying to understand is complicated and multifaceted. In particular, there is a large amount of pre-existing human behavior that it needs to understand before it can attribute goals: behavior that results from life as a particular kind of animal, behavior that results from being a particular living thing, and behavior that results from having a body of such and such a sort.

In particular, human social behavior results from these things. There was some discussion of this here, when we looked at Alexander Pruss’s discussion of hypothetical rational sharks.

You might already see where this is going. God as the first cause does not have any of the properties that generate human social behavior, so we cannot expect his behavior to resemble human social behavior in any way, as for example by having any desire to speak with people. Indeed, this is the argument I am making, but let us look at the issue more carefully.

I responded to the “dark room” objection to predictive processing here and here. My response depends both on the biological history of humans and animals in general and, to some extent, on the history of each individual. But the response does not merely explain why people do not typically enter dark rooms and simply stay there until they die. It also explains why occasionally people do do such things, to a greater or lesser approximation, as with suicidal or extremely depressed people.

If we consider the first cause as a mind, as we are doing here, it is an abstract immaterial mind without any history, without any pre-existing behaviors, without any of the sorts of things that allow people to avoid the dark room. So while people will no doubt be offended by the analogy, and while I will try to give a more pleasant interpretation later, one could argue that God is necessarily subject to his own dark room problem: there is no reason for him to have any motives at all, except the one which is intrinsic to minds, namely the motive of understanding. And so he should not be expected to do anything with the world, except to make sure that it is intelligible, since it must be intelligible for him to understand it.

The thoughtful reader will object: on this account, why does God create the world at all? Surely doing and making nothing at all would be even better, by that standard. So God does seem to have a “dark room” problem that he does manage to avoid, namely the temptation to do nothing at all. This is a reasonable objection, but I think it would lead us on a tangent, so I will not address it at this time. I will simply take it for granted that God makes something rather than nothing, and discuss what he does with the world given that fact.

In the previous post, I pointed out that David Hume takes for granted that the world has stable natural laws, and uses that to argue that an orderly world can result from applying those laws to “random” configurations over a long enough time. I said that one might accuse him of “cheating” here, but that would only be the case if he intended to maintain a strictly atheistic position which would say that there is no first cause at all, or that if there is, it does not even have a remote analogy with a mind. Thus his attempted reconciliation of theism and atheism is relevant, since it seems from this that he is aware that such a strict atheism cannot be maintained.

St. Thomas makes a similar connection between God as a mind and a stable order of things in his fifth way:

The fifth way is taken from the governance of the world. We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

What are we to make of the claim that things act “always, or nearly always, in the same way, so as to obtain the best result?” Certainly acting in the same way would be likely to lead to similar results. But why would you think it was the best result?

If we consider where we get the idea of desire and good, the answer will be clear. We don’t have an idea of good which is completely independent from “what actually tends to happen”, even though this is not quite a definition of the term either. So ultimately St. Thomas’s argument here is based on the fact that things act in similar ways and achieve similar results. The idea that it is “best” is not an additional contribution.

But now consider the alternative. Suppose that things did not act in similar ways, or that doing so did not lead to similar results. We would live in David Hume’s non-inductive world. The result is likely to be mathematically and logically impossible. If someone says, “look, the world works in a coherent way,” and then attempts to describe how it would look if it worked in an incoherent way, they will discover that the latter “possibility” cannot be described. Any description must be coherent in order to be a description, so the incoherent “option” was never a real option in the first place.

This argument might suggest that the position of Plotinus, that mind should not be attributed to God at all, is the more reasonable one. But since we are exploring the situation where we do make that attribution, let us consider the consequences.

We argued above that the sole divine motive for the world is intelligibility. This requires coherence and consistency. It also requires a tendency towards the good, for the above mentioned reasons. Having a coherent tendency at all is ultimately not something different from tending towards good.

The world described is arguably a deist world, one in which the laws of nature are consistently followed, but God does nothing else in the world. The Enlightenment deists presumably had various reasons for their position: criticism of specific religious doctrines, doubts about miracles, and an aesthetic attraction to a perfectly consistent world. But like Dawkins with his argument about God’s simplicity, they do not seem (to me at least) to have had very strong arguments. That does not prove that their position was wrong, and even their weaker arguments may have had some relationship with the truth; even an aesthetic attraction to a perfectly consistent world has some connection with intelligibility, which is the actual reason for the world to be that way.

Once again, as with the objection about creating a world at all, a careful reader might object that this argument is not conclusive. If you have a first cause at all, then it seems that you must have one or more first effects, and even if those effects are simple, they cannot be infinitely simple. And given that they are not infinitely simple, who is to set the threshold? What is to prevent one or more of those effects from being “miraculous” relative to anything else, or even from being something like a voice giving someone a divine revelation?

There is something to this argument, but as with the previous objection, I will not be giving my response here. I will simply note for the moment that it is a little bit strained to suggest that such a thing could happen without God having an explicit motive of “talking to people,” and as argued above, such a motive cannot exist in God. That said, I will go on to some other issues.

As the Heavens are Higher

Apart from my arguments, it has long been noticed in the actual world that God seems much more interested in acting consistently than in bringing about any specific results in human affairs.

Someone like Richard Dawkins, or perhaps Job, if he had taken the counsel of his wife, might respond to the situation in the following way. “God” is not an appropriate name for a first cause that acts like this. If anything is more important to God than being personal, it would be being good. But the God described here is not good at all, since he doesn’t seem to care a bit about human affairs. And he inflicts horrible suffering on people just for the sake of consistency with physical laws. Instead of calling such a cause “God,” why don’t we call it “the Evil Demon” or something like that?

There is a lot that could be said about this. Some of it I have already said elsewhere. Some of it I will perhaps say at other times. For now I will make three brief points.

First, ensuring that the world is intelligible and that it behaves consistently is no small thing. In fact it is a prerequisite for any good thing that might happen anywhere and at any time. We would not even arrive at the idea of “good” things if we did not strive consistently for similar results, nor would we get the idea of “striving” if we did not often obtain them. Thus it is not really true that God has no interest in human affairs: rather, he is concerned with the affairs of all things, including humans.

Second, along similar lines, consider what the supposed alternative would be. If God were “good” in the way you wish, his behavior would be ultimately unintelligible. This is not merely because some physical law might not be followed if there were a miracle. It would be unintelligible behavior in the strict sense, that is, in the sense that no explanation could be given for why God is doing this. The ordinary proposal would be that it is because “this is good,” but when this statement is a human judgement made according to human motives, there would need to be an explanation for why a human judgement is guiding divine behavior. “God is a mind” does not adequately explain this. And it is not clear that an ultimately unintelligible world is a good one.

Third, to extend the point about God’s concern with all things, I suggest that the answer is roughly speaking the one that Scott Alexander gives non-seriously here, except taken seriously. This answer depends on an assumption of some sort of modal realism, a topic which I was slowly approaching for some time, but which merits a far more detailed discussion, and I am not sure when I will get around to it, if ever. The reader might note however that this answer probably resolves the question about “why didn’t God do nothing at all” by claiming that this was never an option anyway.

Replies to Objections on Form

This post replies to the objections raised in the last post.

Reply 1. I do not define form as “many relations”, in part for this very reason. Rather, I say that it is a network, and thus is one thing tied together, so to speak.

Nonetheless, the objection seems to wish to find something absolutely one which is in no way many and which causes unity in other things which are in some way lacking in unity. This does not fit with the idea of giving an account, which necessarily involves many words and thus reference to many aspects of a thing. And thus it also does not fit with the idea of form as that which makes a thing what it is, because it is evident that when we ask what a thing is, we are typically asking about things that have many aspects, as a human being has many senses and many body parts and so on.

In other words, form makes a thing one, but it also makes it what it is, which means that it also makes a thing many in various ways. And so form is one in some way, and thus called a “network,” but it also contains various relations that account for the many aspects of the thing.

Someone might extend this objection by saying that if a form contains many relations, there will need to be a form of form, uniting these relations. But there is a difference between many material parts, which might need a form in order to be one, and relations, which bind things together of themselves. To be related to something, in this sense, is somewhat like being attached to it in some way, while a number of physical bodies are not attached to each other simply in virtue of being a number of bodies. It is true that this implies a certain amount of complexity in form, but this is simply the result of the fact that there is a certain amount of complexity in what things actually are.

Reply 2. “Apt to make something one” is included in the definition in order to point to the relationships and networks of relationships that we are concerned with. For example, one could discuss the idea of a mereological sum, such as the tree outside my window together with my cell phone, and talk about a certain network of relationships intrinsic to that “sum.” This network would have little share in the idea of form, precisely because it is not apt to make anything one thing in any ordinary sense. However, I say “little share” here rather than “no share”, because this is probably a question of degree and kind. As I said here, “one thing” is said in many ways and with many degrees, and thus also form exists in many ways and with many degrees. In particular, there is no reason to suppose that “one” has one true sense compared to which the other senses would be more false than true.

Reply 3. A network of relationships could be an accidental form. Thus the form that makes a blue thing blue would normally be an accidental form. But there will be a similar network of relationships that make a thing a substance. If something is related to other things as “that in which other things are present,” and is not related to other things as “that which is present in something else,” then it will exist as substance, and precisely because it is related to things in these ways. So the definition is in fact general in comparison to both substance and accident.

Reply 4. This objection could be understood as asserting that everything relative depends on something prior which is absolute. Taken in this sense, the objection is simply mistaken. The existence of more than one thing proves conclusively that relationship as such does not need to depend on anything absolute.

Another way to understand the objection would be as asserting that whatever we may say about the thing in relation to other things, all of this must result from what the thing is in itself, apart from all of this. Therefore the essence of the thing is prior to anything at all that we say about it. And in this way, there is both a truth and an error here, namely the Kantian truth and the Kantian error. Certainly the thing is the cause of our knowledge, and not simply identical with our knowledge. Nonetheless, we possess knowledge, not ignorance, of the thing, and we have this knowledge by participation in the network of relationships that defines the thing.

Reply 5. The objection gratuitously asserts that our definition is reductionist, and this can equally well be gratuitously denied. In fact, this account includes the rejection of both reductionist and anti-reductionist positions. Insofar as people suppose that these positions are the only possible positions, if they see that my account implies the rejection of their particular side of the argument, they will naturally suppose that my account implies the acceptance of the other side. This is why the 10th objection claims the opposite: namely that my account is mistaken because it seems to be anti-reductionist.

Reply 6. I agree, in fact, that we are mostly ignorant of the nature of “blue,” and likewise of the natures of most other things. But we are equally ignorant of the network of relationships that these things share in. Thus in an earlier post about Mary’s Room, I noted that we do not even come close to knowing everything that can be known about color. Something similar would be true about pretty much everything that we can commonly name. We have some knowledge of what blue is, but it is a very imperfect knowledge, and similarly we have some knowledge of what a human being is, but it is a very imperfect knowledge. This is one reason why I qualified the claim that the essences of things are not hidden: in another way, virtually all essences are hidden from us, because they are typically too complex for us to understand exhaustively.

An additional problem, also mentioned in the case of “blue,” is that the experience of blue is not the understanding of blue, and these would remain distinct even if the understanding of blue were perfect. But again, it would be an instance of the Kantian error to suppose that it follows that one would not understand the nature of blue even if one understood it (thus making the absurdity evident).

Reply 7. God is not an exception to the claim about hidden essences, nor to this account of form, and these claims are not necessarily inconsistent with Christian theology.

The simplicity of God should not be understood as necessarily being opposed to being a network of relationships. In particular, the Trinity is thought to be the same as the essence of God, and what is the Trinity except a network of relations?

Nor does the impossibility of knowing the essence of God imply that God’s essence is hidden in the relevant sense. Rather, it is enough to say that it is inaccessible for “practical” reasons, so to speak. For example, consider St. Thomas’s argument that no one knows all that God can do:

The created intellect, in seeing the divine essence, does not see in it all that God does or can do. For it is manifest that things are seen in God as they are in Him. But all other things are in God as effects are in the power of their cause. Therefore all things are seen in God as an effect is seen in its cause. Now it is clear that the more perfectly a cause is seen, the more of its effects can be seen in it. For whoever has a lofty understanding, as soon as one demonstrative principle is put before him can gather the knowledge of many conclusions; but this is beyond one of a weaker intellect, for he needs things to be explained to him separately. And so an intellect can know all the effects of a cause and the reasons for those effects in the cause itself, if it comprehends the cause wholly. Now no created intellect can comprehend God wholly, as shown above (Article 7). Therefore no created intellect in seeing God can know all that God does or can do, for this would be to comprehend His power; but of what God does or can do any intellect can know the more, the more perfectly it sees God.

St. Thomas argues that if anyone knew all that God can do, i.e. everything that can be God’s effect, he would not only know the essence of God, but know it perfectly. This actually supports our position precisely: if you have an exhaustive account of the network of relationships between God and the world, actual and potential, according to St. Thomas, this is to know the essence of God exhaustively.

Reply 8. I concede the objection, but simply note that the error is on the part of Christian theology, not on the part of this account.

In this case, someone might ask why I included this objection, along with the previous, where even if I consider the theology defensible, I do not consider it authoritative. The reason is that I included objections that I expected various readers to hold in one form or another, and these are two of them. But what is the use of addressing them if I simply reject the premise of the objection?

There is at least one benefit to this. There is an important lesson here. Religious doctrines are typically defined in such a way that they have few or no undue sensible implications, as I said for example about the Real Presence. But philosophy is more difficult, and shares in much of the same distance from the senses that such religious claims have. Consequently, even if you manage to avoid adopting religious doctrines that have false scientific implications (and many don’t manage to avoid even this), if you accept any religious doctrines at all, it will be much harder to avoid false philosophical implications.

In fact, the idea of an immortal soul probably has false scientific consequences as well as false philosophical consequences, at least taken as it is usually understood. Thus for example Sean Carroll argues that the mortality of the soul is a settled issue:

Adam claims that “simply is no controlled, experimental[ly] verifiable information” regarding life after death. By these standards, there is no controlled, experimentally verifiable information regarding whether the Moon is made of green cheese. Sure, we can take spectra of light reflecting from the Moon, and even send astronauts up there and bring samples back for analysis. But that’s only scratching the surface, as it were. What if the Moon is almost all green cheese, but is covered with a layer of dust a few meters thick? Can you really say that you know this isn’t true? Until you have actually examined every single cubic centimeter of the Moon’s interior, you don’t really have experimentally verifiable information, do you? So maybe agnosticism on the green-cheese issue is warranted. (Come up with all the information we actually do have about the Moon; I promise you I can fit it into the green-cheese hypothesis.)

Obviously this is completely crazy. Our conviction that green cheese makes up a negligible fraction of the Moon’s interior comes not from direct observation, but from the gross incompatibility of that idea with other things we think we know. Given what we do understand about rocks and planets and dairy products and the Solar System, it’s absurd to imagine that the Moon is made of green cheese. We know better.

We also know better for life after death, although people are much more reluctant to admit it. Admittedly, “direct” evidence one way or the other is hard to come by — all we have are a few legends and sketchy claims from unreliable witnesses with near-death experiences, plus a bucketload of wishful thinking. But surely it’s okay to take account of indirect evidence — namely, compatibility of the idea that some form of our individual soul survives death with other things we know about how the world works.

Claims that some form of consciousness persists after our bodies die and decay into their constituent atoms face one huge, insuperable obstacle: the laws of physics underlying everyday life are completely understood, and there’s no way within those laws to allow for the information stored in our brains to persist after we die. If you claim that some form of soul persists beyond death, what particles is that soul made of? What forces are holding it together? How does it interact with ordinary matter?

Everything we know about quantum field theory (QFT) says that there aren’t any sensible answers to these questions. Of course, everything we know about quantum field theory could be wrong. Also, the Moon could be made of green cheese.

Among advocates for life after death, nobody even tries to sit down and do the hard work of explaining how the basic physics of atoms and electrons would have to be altered in order for this to be true. If we tried, the fundamental absurdity of the task would quickly become evident.

Even if you don’t believe that human beings are “simply” collections of atoms evolving and interacting according to rules laid down in the Standard Model of particle physics, most people would grudgingly admit that atoms are part of who we are. If it’s really nothing but atoms and the known forces, there is clearly no way for the soul to survive death. Believing in life after death, to put it mildly, requires physics beyond the Standard Model. Most importantly, we need some way for that “new physics” to interact with the atoms that we do have.

Very roughly speaking, when most people think about an immaterial soul that persists after death, they have in mind some sort of blob of spirit energy that takes up residence near our brain, and drives around our body like a soccer mom driving an SUV. The questions are these: what form does that spirit energy take, and how does it interact with our ordinary atoms? Not only is new physics required, but dramatically new physics. Within QFT, there can’t be a new collection of “spirit particles” and “spirit forces” that interact with our regular atoms, because we would have detected them in existing experiments. Ockham’s razor is not on your side here, since you have to posit a completely new realm of reality obeying very different rules than the ones we know.

There are certainly different ways to think about this, but this is in fact a common way of thinking about the soul in relation to the body. For example, consider this discussion by James Chastek:

Objection: Conservation laws require that outcomes be already determined. By your own admission, life has to be able to “alter what would happen by physical causes alone” and therefore violates conservation laws.

Response: Again, laws and initial conditions do not suffice to explain the actual world. Life only “alters” physical causes under the counterfactual supposition that physical causes could act alone, i.e. in a way that could suffice to explain outcomes in the actual world.

Objection: It is meaningless to describe life acting on physical laws and conditions when we can’t detect this. Life-actions are vacuous entities about which we can say nothing at all. What’s their Hamiltonian?

Response: Physical laws and conditions as physical are instrumental or partial accounts of the actual world. The interactive mechanisms and measurement devices appropriate to establishing the existence of physical causes are not appropriate tools for describing all causes of the actual world.

Chastek is deliberately ignoring the question that he poses himself. But we know his opinion of the matter from previous discussions. What physics would calculate would be one thing; what the human being will do, according to Chastek, is something different.

This almost certainly does imply a violation of the laws of physics in the sense of the discussion in Chastek’s post, as well as in the sense that concerns Sean Carroll. In fact, it probably would imply a violation of conservation of energy, very possibly to such a degree that it would be possible in principle to exploit the violation to create a perpetual motion machine, somewhat along the lines of this short story by Scott Alexander. And these violations would be detectable in principle, and very likely in practice as well, at least at some point.

Nonetheless, one might think about it differently, without suggesting these things, but still suppose that people have immortal souls. And one might be forgiven for being skeptical of Sean Carroll’s arguments, given that his metaphysics is wrong. Perhaps there is some implicit dependence of his argument on this mistaken metaphysics. The problem with this response is that even the correct metaphysics has the same implications, even without considering Carroll’s arguments from physics.

It is easy to see that there are still loopholes for someone who wishes to maintain the immortality of the soul. But such loopholes also indicate an additional problem with the idea. In particular, the idea that the soul is subsistent implies that it is a substantial part of a human being: that a human is a whole made of soul and body much as the body is a whole made of various parts such as legs and arms. If this were the case, the soul might not be material in a quantitative sense, but it would be “matter” in the sense that we have argued that form is not matter. In this case, it would be reasonable to suppose that an additional substantial form would be necessary to unify soul and body, themselves two substantial parts.

Reply 9. There in fact is an implicit reference to matter in the definition. “Apt to make something one” refers to what is made, but it also refers to what it is made out of, if there is anything out of which it is made. The form of a chair makes the chair one chair, but it also makes the stuff of the chair into one chair.

There is more to say about matter, but my intention for now was to clarify the concept of form.

Reply 10. The network of relationships is most certainly not a construct of the mind, if one places this in opposition to “real thing.” You cannot trace back relationships to causes that do not include any relationships, if only because “cause” is in itself relative.

I have argued against reductionism in many places, and do not need to repeat those arguments here, but in particular I would note that the objection implies that “mind” is a construct of the mind, and this implies circular causality, which is impossible.

Reply 11. The objection is not really argued, and this is mainly because there cannot be a real argument for it. There is however a rough intuition supporting it, which is that applying this idea of form to immaterial things seems unfair to reality, as though we were trying to say that the limits of reality are set by the limits of the human mind. Once again, however, this is simply a case of the usual Kantian error, mixed together with choosing something that would be especially unknown to us. An immaterial thing could not exist without having some relationship with everything else. As we have suggested elsewhere, “there is an immaterial thing,” cannot even be assigned a meaning without the implied claim that I stand in some relation with it, and that it stands in some relation to me. But evidently I know very little about it. This does not mean that we need some new definition of what it is to be something; it simply means I do not know much of what that thing is, just as I do not know much of anything about it at all.

 

How Sex Minimizes Uncertainty

This is in response to an issue raised by Scott Alexander on his Tumblr.

I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.

Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.

Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.

Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time, and in the remaining 2.5% of the time it will be blown higher in the air.

Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases, and in 22.5% of cases it will fall downwards.

97.5% accuracy is less uncertain than 77.5% accuracy, so the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
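The arithmetic above can be checked with a short sketch of the model (the probabilities, 90/10 baseline and a 75% chance that a prediction overrides the “dead” outcome, are the ones assumed in the post):

```python
# Toy model of the "live" dust particle with predictive processing.
# Baseline ("dead") dynamics: the particle falls 90% of the time, rises 10%.
# A prediction that matches the dead outcome always comes true; a prediction
# opposing it comes true only 75% of the time.

P_FALL_DEAD = 0.90   # probability the dead particle would fall
P_RISE_DEAD = 0.10   # probability the dead particle would rise
P_OVERRIDE = 0.75    # chance a contrary prediction overrides the dead outcome

def prediction_accuracy(predict_fall: bool) -> float:
    """Probability that the particle's prediction turns out to be correct."""
    if predict_fall:
        # Correct when it would have fallen anyway, or when the
        # prediction successfully overrides a rise.
        return P_FALL_DEAD + P_RISE_DEAD * P_OVERRIDE
    # Correct when it would have risen anyway, or when the
    # prediction successfully overrides a fall.
    return P_RISE_DEAD + P_FALL_DEAD * P_OVERRIDE

print(prediction_accuracy(True))   # predicting "fall": 0.975
print(prediction_accuracy(False))  # predicting "rise": 0.775
```

So a policy of always predicting the more probable baseline outcome is the one that minimizes prediction error, which is the point of the example.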

The application to sex and hunger and so on should be evident.

Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
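As a quick check of the arithmetic in the quoted story, the number of orderings of a standard deck can be computed directly:

```python
import math

# Number of distinct orderings of a 52-card deck.
deck_orderings = math.factorial(52)

print(deck_orderings)           # a 68-digit number, roughly 8.07 x 10^67
print(f"{deck_orderings:.4e}")  # scientific notation, ~8.0658e+67
```

This matches the 8.0658175e+67 figure quoted above, so the only real question is whether 1 in 52! is the right model for the claimed event, not whether the number itself is right.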

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say: it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.


The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small differences to prediction do not seem to make much difference to accuracy. Or in other words, if it takes its best guess, then arbitrarily modifies it, as long as this is by a small amount, it will be just as accurate as its original guess would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
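The difference between the two kinds of stream can be illustrated with a toy simulation. This is entirely my own sketch, not Clark’s model or any standard implementation; the stream names and numbers are invented for illustration. One stream ignores the mind’s predictions, while the other tends to realize them, so repeatedly nudging the prediction steers the controlled stream toward any target:

```python
import random

random.seed(0)

def vision(prediction):
    """External stream: evolves regardless of what is predicted.
    Defined only for contrast with the controlled stream below."""
    return random.gauss(0.0, 1.0)

def proprioception(prediction):
    """Controlled stream: the body tends to realize the prediction."""
    return prediction + random.gauss(0.0, 0.05)

# By making small modifications to its prediction at each step, the
# "mind" can drive the controlled stream to any outcome it "wants",
# while remaining accurate at every step.
target = 5.0
prediction = 0.0
state = 0.0
for _ in range(200):
    prediction += 0.1 if prediction < target else -0.1
    state = proprioception(prediction)

print(abs(state - target) < 0.5)  # True: the stream tracked the prediction
```

No step of this loop involves a large prediction error, yet the end state is whatever the target was: prediction has turned into control, which is the discovery attributed to the mind in the bunker.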

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream.”)


Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages of the book as devoted to this particular topic. This inclines me to suspect that Scott was irritated by the discussion of the topic that does come up, because it did not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
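The conjunction point can be put arithmetically. Whatever probabilities one assigns (the numbers below are purely illustrative, not an estimate of anyone’s actual credences), the product rule guarantees that the conjunction “my life has a positive meaning, and it does so because of such-and-such particular explanation” can never be more probable than the bare claim alone:

```python
# Illustrative numbers only: by the product rule, P(A and B) <= P(A),
# so the bare claim is always at least as probable as the bare claim
# conjoined with any particular explanation of it.
p_meaning = 0.9                    # P(A): "my life has a positive meaning"
p_explanation_given_meaning = 0.5  # P(B|A): one particular explanation, given A

p_conjunction = p_meaning * p_explanation_given_meaning  # P(A and B)

assert p_conjunction <= p_meaning
print(f"P(bare claim)        = {p_meaning}")
print(f"P(claim + explanation) = {p_conjunction}")
```

However the two probabilities are assigned, picking a detailed explanation and calling it more likely than the bare claim is exactly the conjunction fallacy.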

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
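The selection dynamic Alexander describes can be sketched as a toy simulation (all parameters below are invented for illustration): true stories are constrained by reality and so cluster around moderate “interestingness,” while fabricated or sloppily produced ones are free to be as interesting as their authors like. Even when the false stories are a small minority of the pool, ranking by interestingness and skimming the top yields mostly false ones:

```python
import random

random.seed(0)  # fixed seed so the toy result is reproducible

# Invented parameters: 1000 true stories with reality-constrained
# interestingness, 50 false stories whose interestingness is unconstrained.
true_stories = [("true", random.gauss(5, 1)) for _ in range(1000)]
false_stories = [("false", random.gauss(8, 1)) for _ in range(50)]

# Rank everything by interestingness and take the top 10,
# as a reader skimming "the best" stories effectively does.
ranked = sorted(true_stories + false_stories, key=lambda s: s[1], reverse=True)
top_10 = ranked[:10]

false_share = sum(1 for label, _ in top_10 if label == "false") / len(top_10)
print(f"False stories: {len(false_stories) / 1050:.0%} of the pool, "
      f"{false_share:.0%} of the top 10")
```

The false stories are under 5% of the pool but dominate the top of the ranking, which is the Reddit-liar equilibrium in miniature.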

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, which is itself both false and motivated, you are like someone who never looks at his account: you will not notice how much you are losing.

Technology and Culture

The last two posts have effectively answered the question raised about Scott Alexander’s account of cultural decline. What could be meant by calling some aspects of culture “less compatible with modern society?” Society tends to change over time, and some of those changes are humanly irreversible. It is entirely possible, and in fact common, for some of those irreversible changes to stand in tension with various elements of culture. This will necessarily tend to cause cultural decay at least with respect to those elements, and often with respect to other elements of culture as well, since the various aspects of culture are related.

This happens in a particular way with changes in technology, although technology is not the only driver of such irreversible change.

It would be extremely difficult for individuals to opt out of the use of various technologies. For example, it would be quite difficult for Americans to give up the use of plumbing and heating, and a serious attempt to do so might lead to illness or death in many cases. And it would be still more difficult to give up the use of clothes, money, and language. Attempting to do so, assuming that one managed to preserve one’s physical life, would likely lead to imprisonment or other forms of institutionalization (which would make it that much more difficult to abandon the use of clothes.)

Someone might well respond here, “Wait, why are you bringing up clothes, money, and language as examples of technology? Clothes and money seem more like cultural institutions than technology in the first place, and language seems to be natural to humans.”

I have already spoken of language as a kind of technology. And with regard to clothes and money, it is even more evident that in the concrete forms in which they exist in our world today they are tightly intertwined with various technologies. The cash used in the United States depends on mints and printing presses, actual mechanical technologies. And if one wishes to buy something without cash, this usually depends on still more complex technology. Similar things are true of the clothes that we wear.

I concede, of course, that the use of these things is different from the use of the machines that make them, or as in the case of credit cards, support their use, although there is less distinction in the latter case. But I deliberately brought up things which look like purely cultural institutions in order to note their relationship with technology, because we are discussing the manner in which technological change can result in cultural change. Technology and culture are tightly intertwined, and can never be wholly separated.

Sarah Perry discusses this (the whole post is worth reading):

Almost every technological advance is a de-condensation: it abstracts a particular function away from an object, a person, or an institution, and allows it to grow separately from all the things it used to be connected to. Writing de-condenses communication: communication can now take place abstracted from face-to-face speech. Automobiles abstract transportation from exercise, and allow further de-condensation of useful locations (sometimes called sprawl). Markets de-condense production and consumption.

Why is technology so often at odds with the sacred? In other words, why does everyone get so mad about technological change? We humans are irrational and fearful creatures, but I don’t think it’s just that. Technological advances, by their nature, tear the world apart. They carve a piece away from the existing order – de-condensing, abstracting, unbundling – and all the previous dependencies collapse. The world must then heal itself around this rupture, to form a new order and wholeness. To fear disruption is completely reasonable.

The more powerful the technology, the more unpredictable its effects will be. A technological advance in the sense of a de-condensation is by its nature something that does not fit in the existing order. The world will need to reshape itself to fit. Technology is a bad carver, not in the sense that it is bad, but in the sense of Socrates:

First, the taking in of scattered particulars under one Idea, so that everyone understands what is being talked about … Second, the separation of the Idea into parts, by dividing it at the joints, as nature directs, not breaking any limb in half as a bad carver might.

Plato, Phaedrus, 265D, quoted in Notes on the Synthesis of Form, Christopher Alexander.

The most powerful technological advances break limbs in half. They cut up the world in an entirely new way, inconceivable in the previous order.

Now someone, arguing much in Chesterton’s vein, might say that this does not have to happen. If a technology is damaging in this way, then just don’t use it. The problem is that often one does not have a realistic choice not to use it, as in my examples above. Still less does one have a choice about interacting with people who use the new technology, and interacting with those people will itself change the way that life works. And as Robin Hanson noted, there is no global human power that decides whether or not a technology gets introduced into human society. This happens rather by the uncoordinated and unplanned decisions of individuals.

And this is sufficient to explain the tendency towards cultural decline. The constant progress of technology results, and results of necessity, in constant cultural decline. And thus we fools understand why the former days were better than these.

Scott Alexander on the Decline of Culture

From Scott Alexander’s Tumblr:

voximperatoris:

[This post is copied over from Stephen Hicks.]

An instructive series of quotations, collected over the years, on the theme of pessimism about the present in relation to the past:

Plato, 360 BCE: “In that country [Egypt] arithmetical games have been invented for the use of mere children, which they learn as pleasure and amusement. I have late in life heard with amazement of our ignorance in these matters [science in general]; to me we appear to be more like pigs than men, and I am quite ashamed, not only of myself, but of all Greeks.” (Laws, Book VII)

Catullus, c. 60 BCE: “Oh, this age! How tasteless and ill-bred it is!”

Sallust, 86– c. 35 BCE: “to speak of the morals of our country, the nature of my theme seems to suggest that I go farther back and give a brief account of the institutions of our forefathers in peace and in war, how they governed the commonwealth, how great it was when they bequeathed it to us, and how by gradual changes it has ceased to be the noblest and best, and has become the worst and most vicious.” About Rome’s forefathers: “good morals were cultivated at home and in the field; there was the greatest harmony and little or no avarice; justice and probity prevailed among them.” They “adorned the shrines of the gods with piety, their own homes with glory, while from the vanquished they took naught save the power of doing harm.” But Rome now is a moral mess: “The men of to‑day, on the contrary, basest of creatures, with supreme wickedness are robbing our allies of all that those heroes in the hour of victory had left them; they act as though the one and only way to rule were to wrong.” (The Catiline War)

Horace, c. 23-13 BCE: “Our fathers, viler than our grandfathers, begot us who are viler still, and we shall bring forth a progeny more degenerate still.” (Odes 3:6)

Alberti, 1436: Nature is no longer producing great intellects — “or giants which in her youthful and more glorious days she had produced so marvelously and abundantly.” (On Painting)

Peter Paul Rubens, c. 1620: “For what else can our degenerate race do in this age of error. Our lowly disposition keeps us close to the ground, and we have declined from that heroic genius and judgment of the ancients.”

Mary Wollstonecraft, c. 1790: “As from the respect paid to property flow, as from a poisoned fountain, most of the evils and vices which render this world such a dreary scene to the contemplative mind.”

William Wordsworth, 1802:
“Milton! thou should’st be living at this hour:
England hath need of thee: she is a fen
Of stagnant waters: altar, sword, and pen,
Fireside, the heroic wealth of hall and bower,
Have forfeited their ancient English dower
Of inward happiness. We are selfish men;
Oh! raise us up, return to us again;
And give us manners, virtue, freedom, power.”
(“London”)

John Stuart Mill, in 1859, speaking of his generation: “the present low state of the human mind.” (On Liberty, Chapter 3)

Friedrich Nietzsche, in 1871: “What else, in the desolate waste of present-day culture, holds any promise of a sound, healthy future? In vain we look for a single powerfully branching root, a spot of earth that is fruitful: we see only dust, sand, dullness, and languor” (Birth of Tragedy, Section 20).

Frederick Taylor, 1911: “We can see our forests vanishing, our water-powers going to waste, our soil being carried by floods into the sea; and the end of our coal and our iron is in sight.” (Scientific Management)

T. S. Eliot, c. 1925: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.”

So has the world really been in constant decline? Or perhaps, as Gibbon put it in The Decline and Fall of the Roman Empire (1776): “There exists in human nature a strong propensity to depreciate the advantages, and to magnify the evils, of the present times.”

Words to keep in mind as we try to assess objectively our own generation’s serious problems.

I hate this argument. It’s the only time I ever see “Every single person from history has always believed that X is true” used as an argument *against* X.

I mean, imagine that I listed Thomas Aquinas as saying “Technology sure has gotten better the past few decades,” and then Leonardo da Vinci, “Technology sure has gotten better the past few decades”. Benjamin Franklin, “Technology sure has gotten better the past few decades”. Abraham Lincoln, “Technology sure has gotten better the past few decades”. Henry Ford, “Technology sure has gotten better the past few decades.”

My conclusion – people who think technology is advancing now are silly, there’s just some human bias toward always believing technology is advancing.

In the same way technology can always be advancing, culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society. Like technology, this isn’t a monotonic process – there will be disruptions every time one civilization collapses and a new one begins, and occasional conscious attempts by whole societies to reverse the trend, but in general, given movement from time t to time t+1, people can correctly notice cultural decline.

I mean, really. If, like Nietzsche, your thing is the BRUTE STRENGTH of the valiant warrior, do you think that the modern office worker has exactly as much valiant warrior spirit as the 19th century frontiersman? Do you think the 19th century frontiersman had as much as the medieval crusader? Do you think the medieval crusader had as much as the Spartans? Pinker says the world is going from a state of violence to a state of security, and the flip side of that is people getting, on average, more domesticated and having less of the wild free spirit that Nietzsche idealized.

Likewise, when people talk about “virtue”, a lot of the time they’re talking about chastity and willingness to remain faithful in a monogamous marriage for the purpose of procreation. And a lot of the time they don’t even mean actual chastity, they mean vocal public support for chastity and social norms demanding it. Do you really believe our culture has as much of that as previous cultures do? Remember, the sort of sharia law stuff that we find so abhorrent and misogynist was considered progressive during Mohammed’s time, and with good reason.

I would even argue that Alberti is right about genius. There are certain forms of genius that modern society selects for and certain ones it selects against. Remember, before writing became common, the Greek bards would have mostly memorized Homer. I think about the doctors of past ages, who had amazing ability to detect symptoms with the naked eye in a way that almost nobody now can match because we use CT scan instead and there’s no reason to learn this art. (Also, I think modern doctors have much fewer total hours of training than older doctors, because as bad as today’s workplace-protection/no-overtime rules are, theirs were worse)

And really? Using the fact that some guy complained of soil erosion as proof that nobody’s complaints are ever valid? Soil erosion is a real thing, it’s bad, and AFAIK it does indeed keep getting worse.

More controversially, if T.S. Eliot wants to look at a world that over four hundred years, went from the Renaissance masters to modern art, I am totally okay with him calling that a terrible cultural decline.

Scott’s argument is plausible, although he seems somewhat confused insofar as he appears to associate Mohammed with monogamy. And since we are discussing the matter with an interlocutor who maintains that the decline of culture is obvious, we will concede the point immediately. Scott seems a bit ambivalent in regard to whether a declining culture is a bad thing, but we will concede that as well, other things being equal.

However, we do not clearly see an answer here to one of the questions raised in the last post: if culture tends to decline, why does this happen? Scott seems to suggest an answer when he says, “Culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society.” According to this, culture tends to decline because it becomes incompatible with modern society. The problem with this is that it seems to be a “moronic pseudo-reason”: 2017 is just one year among others. So no parts of culture should be less compatible with life in 2017 than with life in 1017, or in any other year. Chesterton makes a similar argument:

We often read nowadays of the valor or audacity with which some rebel attacks a hoary tyranny or an antiquated superstition. There is not really any courage at all in attacking hoary or antiquated things, any more than in offering to fight one’s grandmother. The really courageous man is he who defies tyrannies young as the morning and superstitions fresh as the first flowers. The only true free-thinker is he whose intellect is as much free from the future as from the past. He cares as little for what will be as for what has been; he cares only for what ought to be. And for my present purpose I specially insist on this abstract independence. If I am to discuss what is wrong, one of the first things that are wrong is this: the deep and silent modern assumption that past things have become impossible. There is one metaphor of which the moderns are very fond; they are always saying, “You can’t put the clock back.” The simple and obvious answer is “You can.” A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.

There is another proverb, “As you have made your bed, so you must lie on it”; which again is simply a lie. If I have made my bed uncomfortable, please God I will make it again. We could restore the Heptarchy or the stage coaches if we chose. It might take some time to do, and it might be very inadvisable to do it; but certainly it is not impossible as bringing back last Friday is impossible. This is, as I say, the first freedom that I claim: the freedom to restore. I claim a right to propose as a solution the old patriarchal system of a Highland clan, if that should seem to eliminate the largest number of evils. It certainly would eliminate some evils; for instance, the unnatural sense of obeying cold and harsh strangers, mere bureaucrats and policemen. I claim the right to propose the complete independence of the small Greek or Italian towns, a sovereign city of Brixton or Brompton, if that seems the best way out of our troubles. It would be a way out of some of our troubles; we could not have in a small state, for instance, those enormous illusions about men or measures which are nourished by the great national or international newspapers. You could not persuade a city state that Mr. Beit was an Englishman, or Mr. Dillon a desperado, any more than you could persuade a Hampshire Village that the village drunkard was a teetotaller or the village idiot a statesman. Nevertheless, I do not as a fact propose that the Browns and the Smiths should be collected under separate tartans. Nor do I even propose that Clapham should declare its independence. I merely declare my independence. I merely claim my choice of all the tools in the universe; and I shall not admit that any of them are blunted merely because they have been used.