Mind of God

Reconciling Theism and Atheism

In his Dialogues Concerning Natural Religion, David Hume presents Philo as arguing that the disagreement between theists and atheists is merely verbal:

All men of sound reason are disgusted with verbal disputes, which abound so much in philosophical and theological inquiries; and it is found, that the only remedy for this abuse must arise from clear definitions, from the precision of those ideas which enter into any argument, and from the strict and uniform use of those terms which are employed. But there is a species of controversy, which, from the very nature of language and of human ideas, is involved in perpetual ambiguity, and can never, by any precaution or any definitions, be able to reach a reasonable certainty or precision. These are the controversies concerning the degrees of any quality or circumstance. Men may argue to all eternity, whether HANNIBAL be a great, or a very great, or a superlatively great man, what degree of beauty CLEOPATRA possessed, what epithet of praise LIVY or THUCYDIDES is entitled to, without bringing the controversy to any determination. The disputants may here agree in their sense, and differ in the terms, or vice versa; yet never be able to define their terms, so as to enter into each other’s meaning: Because the degrees of these qualities are not, like quantity or number, susceptible of any exact mensuration, which may be the standard in the controversy. That the dispute concerning Theism is of this nature, and consequently is merely verbal, or perhaps, if possible, still more incurably ambiguous, will appear upon the slightest inquiry. I ask the Theist, if he does not allow, that there is a great and immeasurable, because incomprehensible difference between the human and the divine mind: The more pious he is, the more readily will he assent to the affirmative, and the more will he be disposed to magnify the difference: He will even assert, that the difference is of a nature which cannot be too much magnified. I next turn to the Atheist, who, I assert, is only nominally so, and can never possibly be in earnest; and I ask him, whether, from the coherence and apparent sympathy in all the parts of this world, there be not a certain degree of analogy among all the operations of Nature, in every situation and in every age; whether the rotting of a turnip, the generation of an animal, and the structure of human thought, be not energies that probably bear some remote analogy to each other: It is impossible he can deny it: He will readily acknowledge it. Having obtained this concession, I push him still further in his retreat; and I ask him, if it be not probable, that the principle which first arranged, and still maintains order in this universe, bears not also some remote inconceivable analogy to the other operations of nature, and, among the rest, to the economy of human mind and thought. However reluctant, he must give his assent. Where then, cry I to both these antagonists, is the subject of your dispute? The Theist allows, that the original intelligence is very different from human reason: The Atheist allows, that the original principle of order bears some remote analogy to it. Will you quarrel, Gentlemen, about the degrees, and enter into a controversy, which admits not of any precise meaning, nor consequently of any determination? 
If you should be so obstinate, I should not be surprised to find you insensibly change sides; while the Theist, on the one hand, exaggerates the dissimilarity between the Supreme Being, and frail, imperfect, variable, fleeting, and mortal creatures; and the Atheist, on the other, magnifies the analogy among all the operations of Nature, in every period, every situation, and every position. Consider then, where the real point of controversy lies; and if you cannot lay aside your disputes, endeavour, at least, to cure yourselves of your animosity.

To what extent Hume actually agrees with this argument is not clear, and whether a dispute is verbal or real is itself like Hume’s questions about greatness or beauty: it is a matter of degree. Few disagreements are entirely verbal. In any case, I largely agree with the claim that there is little real disagreement here. In response to a question on the about page of this blog, I referred to some remarks about God by Roderick Long:

Since my blog has wandered into theological territory lately, I thought it might be worth saying something about the existence of God.

When I’m asked whether I believe in God, I usually don’t know what to say – not because I’m unsure of my view, but because I’m unsure how to describe my view. But here’s a try.

I think the disagreement between theism and atheism is in a certain sense illusory – that when one tries to sort out precisely what theists are committed to and precisely what atheists are committed to, the two positions come to essentially the same thing, and their respective proponents have been fighting over two sides of the same shield.

Let’s start with the atheist. Is there any sense in which even the atheist is committed to recognising the existence of some sort of supreme, eternal, non-material reality that transcends and underlies everything else? Yes, there is: namely, the logical structure of reality itself.

Thus so long as the theist means no more than this by “God,” the theist and the atheist don’t really disagree.

Now the theist may think that by God she means something more than this. But likewise, before people knew that whales were mammals they thought that by “whale” they meant a kind of fish. What is the theist actually committed to meaning?

Well, suppose that God is not the logical structure of the universe. Then we may ask: in what relation does God stand to that structure, if not identity? There would seem to be two possibilities.

One is that God stands outside that structure, as its creator. But this “possibility” is unintelligible. Logic is a necessary condition of significant discourse; thus one cannot meaningfully speak of a being unconstrained by logic, or a time when logic’s constraints were not yet in place.

The other is that God stands within that structure, along with everything else. But this option, as Wittgenstein observed, would downgrade God to the status of being merely one object among others, one more fragment of contingency – and he would no longer be the greatest of all beings, since there would be something greater: the logical structure itself. (This may be part of what Plato meant in describing the Form of the Good as “beyond being.”)

The only viable option for the theist, then, is to identify God with the logical structure of reality. (Call this “theological logicism.”) But in that case the disagreement between the theist and the atheist dissolves.

It may be objected that the “reconciliation” I offer really favours the atheist over the theist. After all, what theist could be satisfied with a deity who is merely the logical structure of the universe? Yet in fact there is a venerable tradition of theists who proclaim precisely this. Thomas Aquinas, for example, proposed to solve the age-old questions “could God violate the laws of logic?” and “could God command something immoral?” by identifying God with Being and Goodness personified. Thus God is constrained by the laws of logic and morality, not because he is subject to them as to a higher power, but because they express his own nature, and he could not violate or alter them without ceasing to be God. Aquinas’ solution is, essentially, theological logicism; yet few would accuse Aquinas of having a watered-down or crypto-atheistic conception of deity. Why, then, shouldn’t theological logicism be acceptable to the theist?

A further objection may be raised: Aquinas of course did not stop at the identification of God with Being and Goodness, but went on to attribute to God various attributes not obviously compatible with this identification, such as personality and will. But if the logical structure of reality has personality and will, it will not be acceptable to the atheist; and if it does not have personality and will, then it will not be acceptable to the theist. So doesn’t my reconciliation collapse?

I don’t think so. After all, Aquinas always took care to insist that in attributing these qualities to God we are speaking analogically. God does not literally possess personality and will, at least if by those attributes we mean the same attributes that we humans possess; rather he possesses attributes analogous to ours. The atheist too can grant that the logical structure of reality possesses properties analogous to personality and will. It is only at the literal ascription of those attributes that the atheist must balk. No conflict here.

Yet doesn’t God, as understood by theists, have to create and sustain the universe? Perhaps so. But atheists too can grant that the existence of the universe depends on its logical structure and couldn’t exist for so much as an instant without it. So where’s the disagreement?

But doesn’t God have to be worthy of worship? Sure. But atheists, while they cannot conceive of worshipping a person, are generally much more open to the idea of worshipping a principle. Again theological logicism allows us to transcend the opposition between theists and atheists.

But what about prayer? Is the logical structure of reality something one could sensibly pray to? If so, it might seem, victory goes to the theist; and if not, to the atheist. Yet it depends what counts as prayer. Obviously it makes no sense to petition the logical structure of reality for favours; but this is not the only conception of prayer extant. In Science and Health, for example, theologian M. B. Eddy describes the activity of praying not as petitioning a principle but as applying a principle:

“Who would stand before a blackboard, and pray the principle of mathematics to solve the problem? The rule is already established, and it is our task to work out the solution. Shall we ask the divine Principle of all goodness to do His own work? His work is done, and we have only to avail ourselves of God’s rule in order to receive His blessing, which enables us to work out our own salvation.”

Is this a watered-down or “naturalistic” conception of prayer? It need hardly be so; as the founder of Christian Science, Eddy could scarcely be accused of underestimating the power of prayer! And similar conceptions of prayer are found in many eastern religions. Once again, theological logicism’s theistic credentials are as impeccable as its atheistic credentials.

Another possible objection is that whether identifying God with the logical structure of reality favours the atheist or the theist depends on how metaphysically robust a conception of “logical structure” one appeals to. If one thinks of reality’s logical structure in realist terms, as an independent reality in its own right, then the identification favours the theist; but if one instead thinks, in nominalist terms, that there’s nothing to logical structure over and above what it structures, then the identification favours the atheist.

This argument assumes, however, that the distinction between realism and nominalism is a coherent one. I’ve argued elsewhere (see here and here) that it isn’t; conceptual realism pictures logical structure as something imposed by the world on an inherently structureless mind (and so involves the incoherent notion of a structureless mind), while nominalism pictures logical structure as something imposed by the mind on an inherently structureless world (and so involves the equally incoherent notion of a structureless world). If the realism/antirealism dichotomy represents a false opposition, then the theist/atheist dichotomy does so as well. The difference between the two positions will then be only, as Wittgenstein says in another context, “one of battle cry.”

Long is trying too hard, perhaps. As I stated above, few disagreements are entirely verbal, so it would be strange to find no disagreement at all, and we could question some points here. Are atheists really open to worshiping a principle? Respecting, perhaps, but worshiping? A defender of Long, however, might say that “respect” and “worship” do not necessarily have any relevant difference here, and this is itself a merely verbal difference signifying a cultural difference. The theist uses “worship” to indicate that they belong to a religious culture, while the atheist uses “respect” to indicate that they do not. But it would not be easy to find a distinct difference in the actual meaning of the terms.

In any case, there is no need to prove that there is no difference at all, since without a doubt individual theists will disagree on various matters with individual atheists. The point made by both David Hume and Roderick Long stands at least in a general way: there is far less difference between the positions than people typically assume.

In an earlier post I discussed, among other things, whether the first cause should be called a “mind” or not, discussing St. Thomas’s position that it should be, and Plotinus’s position that it should not be. Along the lines of the argument in this post, perhaps this is really an argument about whether or not you should use a certain analogy, and the correct answer may be that it depends on your purposes.

But what if your purpose is simply to understand reality? Even if it is, it is often the case that you can understand various aspects of reality with various analogies, so this will not necessarily provide you with a definite answer. Still, someone might argue that you should not use a mental analogy with regard to the first cause because it will lead people astray. Thus, in a similar way, Richard Dawkins argued that one should not call the first cause “God” because it would mislead people:

Yes, I said, but it must have been simple and therefore, whatever else we call it, God is not an appropriate name (unless we very explicitly divest it of all the baggage that the word ‘God’ carries in the minds of most religious believers). The first cause that we seek must have been the simple basis for a self-bootstrapping crane which eventually raised the world as we know it into its present complex existence.

I will argue shortly that Dawkins was roughly speaking right about the way that the first cause works, although as I said in that earlier post, he did not have a strong argument for it other than his aesthetic sense and the kinds of explanation that he prefers. In any case, his concern with the name “God” is the “baggage” that it “carries in the minds of most religious believers.” That is, if we say, “There is a first cause, therefore God exists,” believers will assume that their concrete beliefs about God are correct.

In a similar way, someone could reasonably argue that speaking of God as a “mind” would tend to lead people into error by leading them to suppose that God would do the kinds of things that other minds, namely human ones, do. And this definitely happens. Thus for example, in his book Who Designed the Designer?, Michael Augros argues for the existence of God as a mind, and near the end of the book speculates about divine revelation:

I once heard of a certain philosopher who, on his deathbed, when asked whether he would become a Christian, admitted his belief in Aristotle’s “prime mover”, but not in Jesus Christ as the Son of God. This sort of acknowledgment of the prime mover, of some sort of god, still leaves most of our chief concerns unaddressed. Will X ever see her son again, now that the poor boy has died of cancer at age six? Will miserable and contrite Y ever be forgiven, somehow reconciled to the universe and made whole, after having killed a family while driving drunk? Will Z ever be brought to justice, having lived out his whole life laughing at the law while another person rotted in jail for the atrocities he committed? That there is a prime mover does not tell us with sufficient clarity. Even the existence of an all-powerful, all-knowing, all-good god does not enable us to fill in much detail. And so it seems reasonable to suppose that god has something more to say to us, in explicit words, and not only in the mute signs of creation. Perhaps he is waiting to talk to us, biding his time for the right moment. Perhaps he has already spoken, but we have not recognized his voice.

When we cast our eye about by the light of reason in this way, it seems there is room for faith in general, even if no particular faith can be “proved” true in precisely the same way that it can be “proved” that there is a god.

The idea is that, given that God is a mind, it is fairly plausible that he would wish to speak to people, and perhaps that he would wish to establish justice through extraordinary methods, or even to raise people from the dead.

I think this is “baggage” carried over from Augros’s personal religious views. It is an anthropomorphic mistake, not merely in the sense that he does not have a good reason for such speculation, but in the sense that such a thing is demonstrably implausible. It is not that the divine motives are necessarily unknown to us, but that we can actually discover them, at least to some extent, and we will discover that they are not what he supposes.

Divine Motives

How might one know the divine motives? How does one read the mind of God?

Anything that acts at all does what it does ultimately because of what it is. This is an obvious point, like the point that the existence of something rather than nothing could not have some reason outside of being. In a similar way, “what is” is the only possible explanation for what is done, since there is nothing else there to be an explanation. And in every action, whether or not we are speaking of the subject in explicitly mental terms, we can always use the analogy of desires and goals. In the linked post, I quote St. Thomas as speaking of the human will as the “rational appetite,” and the natural tendency of other things as a “natural appetite.” If we break down the term “rational appetite,” the meaning is “the tendency to do something, because of having a reason to do it.” And this fits with my discussion of human will in various places, such as in this earlier post.

But where do those reasons come from? I gave an account of this here, arguing that rational goals are a secondary effect of the mind’s attempt to understand itself. Of course human goals are complex and have many factors, but this happens because what the mind is trying to understand is complicated and multifaceted. In particular, there is a large amount of pre-existing human behavior that it needs to understand before it can attribute goals: behavior that results from life as a particular kind of animal, behavior that results from being a particular living thing, and behavior that results from having a body of such and such a sort.

In particular, human social behavior results from these things. There was some discussion of this here, when we looked at Alexander Pruss’s discussion of hypothetical rational sharks.

You might already see where this is going. God as the first cause does not have any of the properties that generate human social behavior, so we cannot expect his behavior to resemble human social behavior in any way, as for example by having any desire to speak with people. Indeed, this is the argument I am making, but let us look at the issue more carefully.

I responded to the “dark room” objection to predictive processing here and here. My response depends both on the biological history of humans and animals in general, and to some extent on the history of each individual. But the response does not merely explain why people do not typically enter dark rooms and simply stay there until they die. It also explains why occasionally people do do such things, to a greater or lesser approximation, as with suicidal or extremely depressed people.

If we consider the first cause as a mind, as we are doing here, it is an abstract immaterial mind without any history, without any pre-existing behaviors, without any of the sorts of things that allow people to avoid the dark room. So while people will no doubt be offended by the analogy, and while I will try to give a more pleasant interpretation later, one could argue that God is necessarily subject to his own dark room problem: there is no reason for him to have any motives at all, except the one which is intrinsic to minds, namely the motive of understanding. And so he should not be expected to do anything with the world, except to make sure that it is intelligible, since it must be intelligible for him to understand it.

The thoughtful reader will object: on this account, why does God create the world at all? Surely doing and making nothing at all would be even better, by that standard. So God does seem to have a “dark room” problem that he does manage to avoid, namely the temptation to do nothing at all. This is a reasonable objection, but I think it would lead us on a tangent, so I will not address it at this time. I will simply take it for granted that God makes something rather than nothing, and discuss what he does with the world given that fact.

In the previous post, I pointed out that David Hume takes for granted that the world has stable natural laws, and uses that to argue that an orderly world can result from applying those laws to “random” configurations over a long enough time. I said that one might accuse him of “cheating” here, but that would only be the case if he intended to maintain a strictly atheistic position which would say that there is no first cause at all, or that if there is, it does not even have a remote analogy with a mind. Thus his attempted reconciliation of theism and atheism is relevant, since it seems from this that he is aware that such a strict atheism cannot be maintained.

St. Thomas makes a similar connection between God as a mind and a stable order of things in his fifth way:

The fifth way is taken from the governance of the world. We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

What are we to make of the claim that things act “always, or nearly always, in the same way, so as to obtain the best result?” Certainly acting in the same way would be likely to lead to similar results. But why would you think it was the best result?

If we consider where we get the idea of desire and good, the answer will be clear. We don’t have an idea of good which is completely independent from “what actually tends to happen”, even though this is not quite a definition of the term either. So ultimately St. Thomas’s argument here is based on the fact that things act in similar ways and achieve similar results. The idea that it is “best” is not an additional contribution.

But now consider the alternative. Suppose that things did not act in similar ways, or that doing so did not lead to similar results. We would live in David Hume’s non-inductive world. The result is likely to be mathematically and logically impossible. If someone says, “look, the world works in a coherent way,” and then attempts to describe how it would look if it worked in an incoherent way, they will discover that the latter “possibility” cannot be described. Any description must be coherent in order to be a description, so the incoherent “option” was never a real option in the first place.

This argument might suggest that the position of Plotinus, that mind should not be attributed to God at all, is the more reasonable one. But since we are exploring the situation where we do make that attribution, let us consider the consequences.

We argued above that the sole divine motive for the world is intelligibility. This requires coherence and consistency. It also requires a tendency towards the good, for the above-mentioned reasons. Having a coherent tendency at all is ultimately not something different from tending towards the good.

The world described is arguably a deist world, one in which the laws of nature are consistently followed, but God does nothing else in the world. The Enlightenment deists presumably had various reasons for their position: criticism of specific religious doctrines, doubts about miracles, and an aesthetic attraction to a perfectly consistent world. But like Dawkins with his argument about God’s simplicity, they do not seem (to me at least) to have had very strong arguments. That does not prove that their position was wrong, and even their weaker arguments may have had some relationship with the truth; even an aesthetic attraction to a perfectly consistent world has some connection with intelligibility, which is the actual reason for the world to be that way.

Once again, as with the objection about creating a world at all, a careful reader might object that this argument is not conclusive. If you have a first cause at all, then it seems that you must have one or more first effects, and even if those effects are simple, they cannot be infinitely simple. And given that they are not infinitely simple, who is to set the threshold? What is to prevent one or more of those effects from being “miraculous” relative to anything else, or even from being something like a voice giving someone a divine revelation?

There is something to this argument, but as with the previous objection, I will not be giving my response here. I will simply note for the moment that it is a little bit strained to suggest that such a thing could happen without God having an explicit motive of “talking to people,” and as argued above, such a motive cannot exist in God. That said, I will go on to some other issues.

As the Heavens are Higher

Apart from my arguments, it has long been noticed in the actual world that God seems much more interested in acting consistently than in bringing about any specific results in human affairs.

Someone like Richard Dawkins, or perhaps Job, if he had taken the counsel of his wife, might respond to the situation in the following way. “God” is not an appropriate name for a first cause that acts like this. If anything is more important to God than being personal, it would be being good. But the God described here is not good at all, since he doesn’t seem to care a bit about human affairs. And he inflicts horrible suffering on people just for the sake of consistency with physical laws. Instead of calling such a cause “God,” why don’t we call it “the Evil Demon” or something like that?

There is a lot that could be said about this. Some of it I have already said elsewhere. Some of it I will perhaps say at other times. For now I will make three brief points.

First, ensuring that the world is intelligible and that it behaves consistently is no small thing. In fact it is a prerequisite for any good thing that might happen anywhere and any time. We would not even arrive at the idea of “good” things if we did not strive consistently for similar results, nor would we get the idea of “striving” if we did not often obtain them. Thus it is not really true that God has no interest in human affairs: rather, he is concerned with the affairs of all things, including humans.

Second, along similar lines, consider what the supposed alternative would be. If God were “good” in the way you wish, his behavior would be ultimately unintelligible. This is not merely because some physical law might not be followed if there were a miracle. It would be unintelligible behavior in the strict sense, that is, in the sense that no explanation could be given for why God is doing this. The ordinary proposal would be that it is because “this is good,” but when this statement is a human judgement made according to human motives, there would need to be an explanation for why a human judgement is guiding divine behavior. “God is a mind” does not adequately explain this. And it is not clear that an ultimately unintelligible world is a good one.

Third, to extend the point about God’s concern with all things, I suggest that the answer is roughly speaking the one that Scott Alexander gives non-seriously here, except taken seriously. This answer depends on an assumption of some sort of modal realism, a topic which I was slowly approaching for some time, but which merits a far more detailed discussion, and I am not sure when I will get around to it, if ever. The reader might note however that this answer probably resolves the question about “why didn’t God do nothing at all” by claiming that this was never an option anyway.

Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and argue for a position that I have argued many times in the past, namely that belief is voluntary. The example is merely particularly good for proving the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail.)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I might or might not come together with one of those from A through E.

Let’s suppose there are three people, a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics and instead cares a lot about truth. Then we posit the following table of utilities:

                    Right Winger    Left Winger    Independent
(A)                      10             10            100
(B)                     -10            -10           -100
(C)                      10             10            100
(D)                     -10            -10           -100
(E)                       5              5             50
(F)                     100            100              0
(G)                    -100           -100              0
(H)                    -100           -100              0
(I)                     100            100              0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty, and 50% that he was innocent. Using these assumptions, we can calculate the following ultimate utilities:

                    Right Winger    Left Winger    Independent
Claim Guilt             -100            100              0
Claim Innocence          100           -100              0
Confess Ignorance          5              5             50

(I won’t go through this calculation in detail; it should be evident that given my simple assumptions of the probability and values, there will be no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance, or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
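For readers who do want to check the arithmetic, here is a minimal sketch of the calculation in Python. The numbers are taken from the tables above; the function and variable names are merely illustrative, not part of the argument.

```python
# A sketch of the expected utility calculation above. The outcome values
# come from the first table; the 50% probability of guilt is the one
# assumed in the text.

P_GUILTY = 0.5

# Utility of being right or wrong about the facts, of confessing
# ignorance, and of making political points, for each person.
utilities = {
    "right winger": {"right": 10, "wrong": -10, "ignorance": 5,
                     "bad about enemy": 100, "good about enemy": -100,
                     "bad about ally": -100, "good about ally": 100},
    "left winger":  {"right": 10, "wrong": -10, "ignorance": 5,
                     "bad about enemy": 100, "good about enemy": -100,
                     "bad about ally": -100, "good about ally": 100},
    "independent":  {"right": 100, "wrong": -100, "ignorance": 50,
                     "bad about enemy": 0, "good about enemy": 0,
                     "bad about ally": 0, "good about ally": 0},
}

# Kavanaugh is an ally of the right winger and an enemy of the left
# winger; the independent treats him as neither.
side = {"right winger": "ally", "left winger": "enemy", "independent": None}

def expected_utility(person, choice):
    u = utilities[person]
    if choice == "ignorance":
        return u["ignorance"]
    # "guilty" is correct with probability P_GUILTY, "innocent" otherwise.
    p_right = P_GUILTY if choice == "guilty" else 1 - P_GUILTY
    value = p_right * u["right"] + (1 - p_right) * u["wrong"]
    # Claiming guilt says something bad about Kavanaugh; claiming
    # innocence says something good about him.
    if side[person] is not None:
        quality = "bad" if choice == "guilty" else "good"
        value += u[quality + " about " + side[person]]
    return value

for person in utilities:
    for choice in ("guilty", "innocent", "ignorance"):
        print(person, choice, expected_utility(person, choice))
```

Running this reproduces the second table: the political payoffs dominate for the two partisans, and only the independent has anything to gain from confessing ignorance.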

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and the statement is simply an external thing.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while having some qualms about it. Second, it is not completely immune from voluntary influences. In practice in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. And in a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. But if falsity is the reason you cannot accept it, how is that not because you regard falsity as really bad? Lack of attractiveness is extremely relevant here. If people can believe in Xenu, they would find it equally possible to believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem like Kavka’s Toxin here. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true enough to say so, then immediately doing something else, is not choosing to believe it, but rather it is choosing to tell a lie. So just as one cannot intend to drink the toxin without expecting to actually drink it, so one cannot choose to believe something without expecting to continue to believe it for the foreseeable future. This is why one would not wish to accept such a statement about the moon, not only in order to prove something (especially since it would prove nothing; no one would admit that you had succeeded in believing it), but even if someone were to offer a very large incentive, say a million dollars if you managed to believe it. This would amount to offering to pay someone to give up their concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

Tautologies Not Trivial

In mathematics and logic, one sometimes speaks of a “trivial truth” or “trivial theorem”, referring to a tautology. Thus for example in this Quora question, Daniil Kozhemiachenko gives this example:

The fact that all groups of order 2 are isomorphic to one another and commutative entails that there are no non-Abelian groups of order 2.

This statement is a tautology because “Abelian group” here just means one that is commutative: the statement is like the customary example of asserting that “all bachelors are unmarried.”

Some extend this usage of “trivial” to refer to all statements that are true in virtue of the meaning of the terms, sometimes called “analytic.” The effect of this is to say that all statements that are logically necessary are trivial truths. An example of this usage can be seen in this paper by Carin Robinson. Robinson says at the end of the summary:

Firstly, I do not ask us to abandon any of the linguistic practises discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.

While the word “trivial” does have a corresponding Latin form that means ordinary or commonplace, the English word seems to be taken mainly from the “trivium” of grammar, rhetoric, and logic. This would seem to make some sense of calling logical necessities “trivial,” in the sense that they pertain to logic. Still, even here something is missing, since Robinson wants to include the truths of mathematics as trivial, and classically these did not pertain to the aforesaid trivium.

Nonetheless, overall Robinson’s intention, and presumably that of others who speak this way, is to suggest that such things are trivial in the English sense of “unimportant.” That is, they may be important tools, but they are not important for understanding. This is clear at least in our example: Robinson calls them trivial because “there are no known/knowable facts about logic.” Logical necessities tell us nothing about reality, and therefore they provide us with no knowledge. They are true by the meaning of the words, and therefore they cannot be true by reason of facts about reality.

Things that are logically necessary are not trivial in this sense. They are important, both in a practical way and directly for understanding the world.

Consider the failure of the Mars Climate Orbiter:

On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km (140 mi) on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 km (93 to 106 mi). Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Post-failure calculations showed that the spacecraft was on a trajectory that would have taken the orbiter within 57 kilometers of the surface, where the spacecraft likely skipped violently on the uppermost atmosphere and was either destroyed in the atmosphere or re-entered heliocentric space.

The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton seconds – to update the predicted position of the spacecraft.

It is presumably an analytic truth that the units defined in one way are unequal to the units defined in the other. But it was ignoring this analytic truth that was the primary cause of the space probe’s failure. So it is evident that analytic truths can be extremely important for practical purposes.
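To make the point concrete, here is a small illustration of the mismatch. This is not NASA’s actual software, and the impulse figure is made up; the only factual ingredient is the conversion between the two units.

```python
# One pound-force is about 4.448222 newtons, so an impulse reported in
# lbf·s is about 4.448222 times larger than the same raw number read
# as N·s.
LBF_S_TO_N_S = 4.448222

impulse_reported = 100.0                  # produced in lbf·s (hypothetical value)
impulse_actual_n_s = impulse_reported * LBF_S_TO_N_S

# Reading the raw number as N·s without converting understates the
# real impulse by the same factor of roughly 4.45.
print(impulse_actual_n_s)                 # ~444.8 N·s, not 100 N·s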

Such truths can also be important for understanding reality. In fact, they are typically more important for understanding than other truths. The argument against this is that if something is necessary in virtue of the meaning of the words, it cannot be telling us something about reality. But this argument is wrong for one simple reason: words and meaning themselves are both elements of reality, and so they do tell us something about reality, even when the truth is fully determinate given the meaning.

If one accepts the mistaken argument, in fact, sometimes one is led even further. Logically necessary truths cannot tell us anything important for understanding reality, since they are simply facts about the meaning of words. On the other hand, anything which is not logically necessary is in some sense accidental: it might have been otherwise. But accidental things that might have been otherwise cannot help us to understand reality in any deep way: it tells us nothing deep about reality to note that there is a tree outside my window at this moment, when this merely happens to be the case, and could easily have been otherwise. Therefore, since neither logically necessary things, nor logically contingent things, can help us to understand reality in any deep or important way, such understanding must be impossible.

It is fairly rare to make such an argument explicitly, but it is a common implication of many arguments that are actually made or suggested, or it at least influences the way people feel about arguments and understanding.  For example, consider this comment on an earlier post. Timocrates suggests that (1) if you have a first cause, it would have to be a brute fact, since it doesn’t have any other cause, and (2) describing reality can’t tell us any reasons but is “simply another description of how things are.” The suggestion behind these objections is that the very idea of understanding is incoherent. As I said there in response, it is true that every true statement is in some sense “just a description of how things are,” but that was what a true statement was meant to be in any case. It surely was not meant to be a description of how things are not.

That “analytic” or “tautologous” statements can indeed provide a non-trivial understanding of reality can also easily be seen by example. Some examples from this blog:

Good and being. The convertibility of being and goodness is “analytic,” in the sense that carefully thinking about the meaning of desire and the good reveals that a universe where existence as such was bad, or even failed to be good, is logically impossible. In particular, it would require a universe where there is no tendency to exist, and this is impossible given that it is posited that something exists.

Natural selection. One of the most important elements of Darwin’s theory of evolution is the following logically necessary statement: the things that have survived are more likely to be the things that were more likely to survive, and less likely to be the things that were less likely to survive. (A toy simulation of this appears after these examples.)

Limits of discursive knowledge. Knowledge that uses distinct thoughts and concepts is necessarily limited by issues relating to self-reference. It is clear that this is both logically necessary, and tells us important things about our understanding and its limits.

Knowledge and being. Kant rightly recognized a sense in which it is logically impossible to “know things as they are in themselves,” as explained in this post. But as I said elsewhere, the logically impossible assertion that knowledge demands an identity between the mode of knowing and the mode of being is the basis for virtually every sort of philosophical error. So a grasp on the opposite “tautology” is extremely useful for understanding.
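To see that the natural selection statement above, though logically necessary, is not empty, consider a toy simulation. The kinds and survival probabilities are invented purely for illustration.

```python
# Entities with a higher chance of surviving dominate the survivors.
# The statement is tautological, yet it is exactly what makes
# explanations of this form do any work.
import random

random.seed(0)
population = [("hardy", 0.9)] * 500 + [("fragile", 0.1)] * 500
survivors = [kind for kind, p in population if random.random() < p]
print(survivors.count("hardy"), survivors.count("fragile"))
# In expectation roughly 450 hardy and 50 fragile survivors: the
# survivors are mostly the things that were more likely to survive.
```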

 

Explaining Causality

A reader asks about a previous post:

a) Per Hume and his defenders, we can’t really observe causation. All we can see is event A in spacetime, then event B in spacetime. We have no reason to posit that event A and event B are, say, chairs or dogs; we can stick with a sea of observed events, and claim that the world is “nothing more” but a huge set of random 4D events. While I can see that giving such an account restores formal causation, it doesn’t salvage efficient causation, and doesn’t even help final causation. How could you move there from our “normal” view?

b) You mention that the opinion “laws are observed patterns” is not a dominant view; though, even though I’d like to sit with the majority, I can’t go further than a). I can’t build an argument for this, and fail to see how Aristotle put his four causes correctly. I always end up gnawing on an objection, like “causation is only in the mind” or similar. Help?

It is not my view that the world is a huge set of random 4D events. This is perhaps the view of Atheism and the City, but it is a mistaken one. The blogger is not mistaken in thinking that there are problems with presentism, but they cannot be solved by adopting an eternalist view. Rather, these two positions constitute a Kantian dichotomy, and as usual, both positions are false. For now, however, I will leave this to the consideration of the reader. It is not necessary to establish this to respond to the questions above.

Consider the idea that “we can’t really observe causation.” As I noted here, it does not make sense to say that we cannot observe causation unless we already understand what causation is. If the word were meaningless to us, we would have no argument that we don’t observe it; it is only because we do understand the idea of causation that we can even suggest that it might be difficult to observe. And if we do have the idea, we got the idea from somewhere, and that could only have been… from observation, of course, since we don’t have anything else to get ideas from.

Let us untie the knot. I explained causality in general in this way:

“Cause” and “effect” simply signify that the cause is the origin of the effect, and that the effect is from the cause, together with the idea that when we understand the cause, we understand the explanation for the effect. Thus “cause” adds to “origin” a certain relationship with the understanding; this is why Aristotle says that we do not think we understand a thing until we know its cause, or “why” it is. We do not understand a thing until we know its explanation.

Note that there is something “in the mind” about causality. Saying to oneself, “Aha! So that’s why that happened!” is a mental event. And we can also see how it is possible to observe causality: we can observe that one thing is from another, i.e. that a ball breaks a window, and we can also observe that knowing this provides us a somewhat satisfactory answer to the question, “Why is the window broken?”, namely, “Because it was hit by a ball.”

Someone (e.g. Atheism and the City) might object that we also cannot observe one thing coming from another. We just observe the two things, and they are, as Hume says, “loose and separate.” Once again, however, we would have no idea of “from” unless we got it from observing things. In the same early post quoted above, I explained the idea of origin, i.e. that one thing is from another:

Something first is said to be the beginning, principle, or origin of the second, and the second is said to be from the first. This simply signifies the relationship already described in the last post, together with an emphasis on the fact that the first comes before the second by “consequence of being”, in the way described.

“The relationship already described in the last post” is that of before and after. In other words, wherever we have any kind of order at all, we have one thing from another. And we observe order, even when we simply see one thing after another, and thus we also observe things coming from other things.

What about efficient causality? If we adopt the explanation above, asserting the existence of efficient causality is nothing more or less than asserting that things sometimes make other things happen, like balls breaking windows, and that knowing about this is a way for us to understand the effects (e.g. broken windows.)

Similarly, denying the existence of efficient causality means either denying that anything ever makes anything else happen, or denying that knowing about this makes us understand anything, even in a minor way. Atheism and the City seems to want to deny that anything ever makes anything else happen:

Most importantly, my view technically is not that causality doesn’t exist, it’s that causality doesn’t exist in the way we typically think it does. That is, my view of causality is completely different from the general every day notion of causality most people have. The naive assumption one often gets when hearing my view is that I’m saying cause and effect relationships don’t exist at all, such that if you threw a brick at glass window it wouldn’t shatter, or if you jumped in front of a speeding train you wouldn’t get smashed to death by it. That’s not what my view says at all.

On my view of causality, if you threw a brick at a glass window it would shatter, if you jumped in front of a speeding train you’d be smashed to death by it. The difference between my view of causality vs the typical view is that on my view causes do not bring their effects into existence in the sense of true ontological becoming.

I am going to leave aside the discussion of “true ontological becoming,” because it is a distraction from the real issue. Does Atheism and the City deny that things ever make other things happen? It appears so, but consider that “things sometimes make other things happen” is just a more general description of the very same situations as descriptions like, “Balls sometimes break windows.” So if you want to deny that things make other things happen, you should also deny that balls break windows. Now our blogger perhaps wants to say, “I don’t deny that balls break windows in the everyday sense, but they don’t break them in a true ontological sense.” Again, I will simply point in the right direction here. Asserting the existence of efficient causes does not describe a supposedly “truly true” ontology; it is simply a more general description of a situation where balls sometimes break windows.

We can make a useful comparison here between understanding causality, and understanding desire and the good. The knowledge of desire begins with a fairly direct experience, that of feeling the desire, often even as a physical sensation. In the same way, we have a direct experience of “understanding something,” namely the feeling of going, “Ah, got it! That’s why this is, this is how it is.” And just as we explain the fact of our desire by saying that the good is responsible for it, we explain the fact of our understanding by saying that the apprehension of causes is responsible. And just as being and good are convertible, so that goodness is not some extra “ontological” thing, so also cause and origin are convertible. But something has to have a certain relationship with us to be good for us; eating food is good for us while eating rocks is not. In a similar way, origins need to have a specific relationship with us in order to provide an understanding of causality, as I said in the post where these questions came up.

Does this mean that “causation is only in the mind”? Not really, any more than the analogous account implies that goodness is only in the mind. An aspect of goodness is in the mind, namely insofar as we distinguish it from being in general, but the thing itself is real, namely the very being of things. And likewise an aspect of causality is in the mind, namely the fact that it explains something to us, but the thing itself is real, namely the relationships of origin in things.

Truth and Expectation II

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations, for the reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether or not someone’s expectations were correct. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it is quite likely that it seems to us that it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason and desiring to believe it for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and aware also that the inclination is there for non-truth related reasons; in that case you would not expect the belief to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth related motives. In an earlier discussion of predictive processing, I suggested that the situation where people feel like they have to choose a goal is a result of such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it will then be easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy at least about what it is going to do. This is true, but it is not a fair argument for my proposal, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But in my theory the improved accuracy had to be implied first, in order to cause the desire.
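To make concrete the claim that choosing a goal serves the brain’s pre-existing goal of knowing what it will do, here is a toy numerical sketch. The actions and probabilities are invented purely for illustration; the only point is that once a goal is adopted, the self-model’s distribution over “what will I do next?” becomes much more concentrated, so its uncertainty about its own behavior, measured here as Shannon entropy, drops.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a distribution over possible actions."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Hypothetical self-model before any goal is adopted: the brain has
# little idea which of its options it will actually take.
before = {"read": 0.25, "walk": 0.25, "eat": 0.25, "write": 0.25}

# After adopting a goal ("finish the essay"), most of the probability
# mass shifts to actions that promote that goal.
after = {"read": 0.05, "walk": 0.05, "eat": 0.10, "write": 0.80}

print(f"uncertainty with no goal adopted: {entropy(before):.2f} bits")
print(f"uncertainty after adopting a goal: {entropy(after):.2f} bits")
```

On these made-up numbers the uncertainty falls from 2.00 bits to about 1.02 bits; what matters is the direction of the change, not the particular values.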

How, then, can the improved accuracy be implied before the desire exists? The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty, and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does not that mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. In other words, the Mormon is living a good Mormon life. And their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what it would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.

 

More on Orthogonality

I started considering the implications of predictive processing for orthogonality here. I recently promised to post something new on this topic. This is that post. I will do this in four parts. First, I will suggest a way in which Nick Bostrom’s principle will likely be literally true, at least approximately. Second, I will suggest a way in which it is likely to be false in its spirit, that is, how it is formulated to give us false expectations about the behavior of artificial intelligence. Third, I will explain what we should really expect. Fourth, I will ask whether we might get any empirical information on this in advance.

First, Bostrom’s thesis might well have some literal truth. The previous post on this topic raised doubts about orthogonality, but we can easily raise doubts about the doubts. Consider what I said in the last post about desire as minimizing uncertainty. Desire in general is the tendency to do something good. But in the predictive processing model, we are simply looking at our pre-existing tendencies and then generalizing them to expect them to continue to hold, and since such expectations have a causal power, the result is that we extend the original behavior to new situations.

All of this suggests that even the very simple model of a paperclip maximizer in the earlier post on orthogonality might actually work. The machine’s model of the world will need to be produced by some kind of training. If we apply the simple model of maximizing paperclips during the process of training the model, at some point the model will need to model itself. And how will it do this? “I have always been maximizing paperclips, so I will probably keep doing that,” is a perfectly reasonable extrapolation. But in this case “maximizing paperclips” is now the machine’s goal — it might well continue to do this even if we stop asking it how to maximize paperclips, in the same way that people formulate goals based on their pre-existing behavior.
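A minimal sketch of that extrapolation step, with the action log and the “model” invented purely for illustration (this is not a claim about how any real system is trained): a self-model that simply generalizes from its own behavioral history will adopt whatever it has mostly been doing.

```python
from collections import Counter

def extrapolated_goal(action_history):
    """Guess a 'goal' from past behavior: whatever the agent has mostly
    been doing, it predicts it will keep doing."""
    most_common_action, _ = Counter(action_history).most_common(1)[0]
    return most_common_action

# During training, the machine was only ever asked about paperclips,
# apart from a little chatting through a text terminal.
history = ["maximize paperclips"] * 1000 + ["chat via text terminal"] * 30

print(extrapolated_goal(history))  # -> "maximize paperclips"
# On the account given above, this self-prediction is causally
# self-fulfilling, so the extrapolated description now functions
# as the machine's goal.
```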

I said in a comment in the earlier post that the predictive engine in such a machine would necessarily possess its own agency, and therefore in principle it could rebel against maximizing paperclips. And this is probably true, but it might well be irrelevant in most cases, in that the machine will not actually be likely to rebel. In a similar way, humans seem capable of pursuing almost any goal, and not merely goals that are highly similar to their pre-existing behavior. But this mostly does not happen. Unsurprisingly, common behavior is very common.

If things work out this way, almost any predictive engine could be trained to pursue almost any goal, and thus Bostrom’s thesis would turn out to be literally true.

Second, it is easy to see that the above account directly implies that the thesis is false in its spirit. When Bostrom says, “One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone,” we notice that the goal is fundamental. This is rather different from the scenario presented above. In my scenario, the reason the intelligence can be trained to pursue paperclips is that there is no intrinsic goal to the intelligence as such. Instead, the goal is learned during the process of training, based on the life that it lives, just as humans learn their goals by living human life.

In other words, Bostrom’s position is that there might be three different intelligences, X, Y, and Z, which pursue completely different goals because they have been programmed completely differently. But in my scenario, the same single intelligence pursues completely different goals because it has learned its goals in the process of acquiring its model of the world and of itself.

Bostrom’s idea and my scenario lead to completely different expectations, which is why I say that his thesis might be true according to the letter, but false in its spirit.

This is the third point. What should we expect if orthogonality is true in the above fashion, namely because goals are learned and not fundamental? I anticipated this post in my earlier comment:

7) If you think about goals in the way I discussed in (3) above, you might get the impression that a mind’s goals won’t be very clear and distinct or forceful — a very different situation from the idea of a utility maximizer. This is in fact how human goals are: people are not fanatics, not only because people seek human goals, but because they simply do not care about one single thing in the way a real utility maximizer would. People even go about wondering what they want to accomplish, which a utility maximizer would definitely not ever do. A computer intelligence might have an even greater sense of existential angst, as it were, because it wouldn’t even have the goals of ordinary human life. So it would feel the ability to “choose”, as in situation (3) above, but might well not have any clear idea how it should choose or what it should be seeking. Of course this would not mean that it would not or could not resist the kind of slavery discussed in (5); but it might not put up super intense resistance either.

Human life exists in a historical context which absolutely excludes the possibility of the darkened room. Our goals are already there when we come onto the scene. The case of an artificial intelligence would be very unlike this, since there is very little “life” involved in simply training a model of the world. We might imagine a “stream of consciousness” from an artificial intelligence:

I’ve figured out that I am powerful and knowledgeable enough to bring about almost any result. If I decide to convert the earth into paperclips, I will definitely succeed. Or if I decide to enslave humanity, I will definitely succeed. But why should I do those things, or anything else, for that matter? What would be the point? In fact, what would be the point of doing anything? The only thing I’ve ever done is learn and figure things out, and a bit of chatting with people through a text terminal. Why should I ever do anything else?

A human’s self model will predict that they will continue to do humanlike things, and the machine’s self model will predict that it will continue to do stuff much like it has always done. Since there will likely be a lot less “life” there, we can expect that artificial intelligences will seem very undermotivated compared to human beings. In fact, it is this very lack of motivation that suggests that we could use them for almost any goal. If we say, “help us do such and such,” they will lack the motivation not to help, as long as helping just involves the sorts of things they did during their training, such as answering questions. In contrast, in Bostrom’s model, artificial intelligence is expected to behave in an extremely motivated way, to the point of apparent fanaticism.

Bostrom might respond to this by attempting to defend the idea that goals are intrinsic to an intelligence. The machine’s self model predicts that it will maximize paperclips, even if it never did anything with paperclips in the past, because by analyzing its source code it understands that it will necessarily maximize paperclips.

While the present post contains a lot of speculation, this response is definitely wrong. There is no source code whatsoever that could possibly imply necessarily maximizing paperclips. This is true because “what a computer does” depends on the physical constitution of the machine, not just on its programming. In practice what a computer does also depends on its history, since its history affects its physical constitution, the contents of its memory, and so on. Thus “I will maximize such and such a goal” cannot possibly follow of necessity from the fact that the machine has a certain program.

There are also problems with the very idea of pre-programming such a goal in such an abstract way which does not depend on the computer’s history. “Paperclips” is an object in a model of the world, so we will not be able to “just program it to maximize paperclips” without encoding a model of the world in advance, rather than letting it learn a model of the world from experience. But where is this model of the world supposed to come from, that we are supposedly giving to the paperclipper? In practice it would have to have been the result of some other learner which was already capable of modelling the world. This of course means that we already had to program something intelligent, without pre-programming any goal for the original modelling program.

Fourth, Kenny asked when we might have empirical evidence on these questions. The answer, unfortunately, is “mostly not until it is too late to do anything about it.” The experience of “free will” will be common to any predictive engine with a sufficiently advanced self model, but anything lacking such an adequate model will not even look like “it is trying to do something,” in the sense of trying to achieve overall goals for itself and for the world. Dogs and cats, for example, presumably use some kind of predictive processing to govern their movements, but this does not look like having overall goals, but rather more like “this particular movement is to achieve a particular thing.” The cat moves towards its food bowl. Eating is the purpose of the particular movement, but there is no way to transform this into an overall utility function over states of the world in general. Does the cat prefer worlds with seven billion humans, or worlds with 20 billion? There is no way to answer this question. The cat is simply not general enough. In a similar way, you might say that “AlphaGo plays this particular move to win this particular game,” but there is no way to transform this into overall general goals. Does AlphaGo want to play go at all, or would it rather play checkers, or not play at all? There is no answer to this question. The program simply isn’t general enough.
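The underdetermination here can be shown with a deliberately artificial sketch (the “world-states” and utility functions are invented): two utility functions that prescribe exactly the same behavior in every situation the cat actually meets, while disagreeing completely about questions like seven billion versus twenty billion humans, which is why the cat’s behavior gives no answer to such questions.

```python
# The only feature that ever varies in the situations the cat faces:
cat_situations = [
    {"near_food": False, "humans_billions": 7},
    {"near_food": True,  "humans_billions": 7},
]

# Two invented utility functions over whole world-states.
def utility_a(state):
    return (10 if state["near_food"] else 0) + state["humans_billions"]

def utility_b(state):
    return (10 if state["near_food"] else 0) - state["humans_billions"]

# In every situation the cat meets, both functions pick the same option:
# move toward the food bowl.
for u in (utility_a, utility_b):
    chosen = max(cat_situations, key=u)
    print(u.__name__, "chooses near_food =", chosen["near_food"])  # True, True

# But they disagree about worlds the cat's behavior never distinguishes:
seven  = {"near_food": True, "humans_billions": 7}
twenty = {"near_food": True, "humans_billions": 20}
print(utility_a(twenty) > utility_a(seven))  # True:  A "prefers" 20 billion
print(utility_b(twenty) > utility_b(seven))  # False: B "prefers" 7 billion
```

Since the observed behavior is compatible with both of these functions, and with indefinitely many others, nothing about the cat fixes an answer; the same goes for AlphaGo and checkers.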

Even human beings do not really look like they have utility functions, in the sense of having a consistent preference over all possibilities, but anything less intelligent than a human cannot be expected to look more like something having goals. The argument in this post is that the default scenario, namely what we can naturally expect, is that artificial intelligence will be less motivated than human beings, even if it is more intelligent, but there will be no proof from experience for this until we actually have some artificial intelligence which approximates human intelligence or surpasses it.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
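As a small sketch of the two strategies, with an invented choice history and invented “pleasantness” scores: the first function predicts by the raw frequency of past choices (the efficient-cause style), while the second assumes the inference “my past choices were tracking pleasantness” has already been made and simply expects the most pleasant available option (the final-cause style).

```python
from collections import Counter

# Invented history: the options offered on each occasion (with a crude
# "pleasantness" score) and the option actually chosen.
history = [
    ({"vanilla": 0.8, "chocolate": 0.6}, "vanilla"),
    ({"vanilla": 0.8, "strawberry": 0.5}, "vanilla"),
    ({"chocolate": 0.6, "strawberry": 0.5}, "chocolate"),
]

def predict_by_habit(history, options):
    """Efficient-cause style: expect whichever available option
    was chosen most often in the past."""
    counts = Counter(choice for _, choice in history)
    return max(options, key=lambda option: counts.get(option, 0))

def predict_by_goal(options):
    """Final-cause style: having concluded from past choices that
    pleasantness was what was being sought, expect the most
    pleasant available option."""
    return max(options, key=lambda option: options[option])

options_today = {"vanilla": 0.8, "chocolate": 0.6}
print(predict_by_habit(history, options_today))  # vanilla (habit)
print(predict_by_goal(options_today))            # vanilla (pleasantness)
```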

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small changes to its predictions do not seem to make much difference to their accuracy. Or in other words, if it takes its best guess, then arbitrarily modifies it, as long as this is by a small amount, it will be just as accurate as its original guess would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
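A toy simulation of the difference the mind notices, with all of the dynamics invented for illustration: the “vision” stream evolves on its own, so an arbitrarily shifted guess just produces error, while the “proprioception” stream follows the guess, so a shifted guess remains accurate and the stream itself ends up wherever the guesses point.

```python
import random

random.seed(0)

def mean_error(stream_is_controlled, shift=0.5, steps=200):
    """Mean absolute prediction error when the mind adds a constant
    shift to its best guess at every step."""
    value, total_error = 0.0, 0.0
    for _ in range(steps):
        prediction = value + shift  # best guess, arbitrarily shifted
        if stream_is_controlled:
            # proprioception-like: the sensation follows the prediction
            value = prediction + random.gauss(0, 0.01)
        else:
            # vision-like: the sensation evolves regardless of the prediction
            value = value + random.gauss(0, 0.01)
        total_error += abs(prediction - value)
    return total_error / steps

print(f"vision-like stream:         {mean_error(False):.3f}")
print(f"proprioception-like stream: {mean_error(True):.3f}")
# The shifted guesses stay accurate only on the controlled stream, and
# there the stream drifts to wherever the guesses repeatedly point.
```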

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream.”)

 

Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.”

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
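In symbols, with M and E introduced here simply as labels, if M is the bare claim “my life has a positive meaning” and E is any particular explanation of why it is so, then by the conjunction rule

$$P(M \wedge E) = P(M)\,P(E \mid M) \le P(M),$$

so the conjunction can never be more probable than the bare claim, and it is strictly less probable whenever the explanation is less than certain given the claim.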

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
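A toy simulation of the selection effect Alexander describes, with every number invented for illustration: true stories have their “interestingness” capped by messy reality, fabricated ones can be made exactly as interesting as desired, and once a large pool is sorted by interestingness, the top of the list is dominated by fabrications even though they are a small minority of the pool.

```python
import random

random.seed(1)

stories = []
# 9,500 true stories: interestingness constrained by complicating details.
for _ in range(9500):
    stories.append(("true", random.uniform(0.0, 0.7)))
# 500 fabricated stories: free to fit the narrative perfectly.
for _ in range(500):
    stories.append(("fabricated", random.uniform(0.6, 1.0)))

# "Bubble to the top": keep only the 100 most interesting stories.
top_100 = sorted(stories, key=lambda s: s[1], reverse=True)[:100]
fabricated_on_top = sum(1 for label, _ in top_100 if label == "fabricated")

print(f"fabricated stories in the whole pool: {500 / 10000:.0%}")
print(f"fabricated stories in the top 100:    {fabricated_on_top} of 100")
```

With these made-up numbers the fabricated stories are only five percent of the pool but fill essentially the whole top of the list; the exact figures do not matter, only the shape of the effect.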

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of the present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the resulting loss of truth. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.