Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and defend a position that I have argued for many times in the past, namely that belief is voluntary. The example is simply a particularly good one for making the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
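To make the tradeoff concrete, here is a minimal sketch of the model (the function and the numbers are my own illustration, not Caplan’s): an agent holds an irrational belief only so long as its psychic comfort exceeds its material price.

```python
# A minimal sketch of rational irrationality. The comfort and cost
# values are invented for illustration; nothing here is from Caplan.

def holds_irrational_belief(comfort: float, material_cost: float) -> bool:
    """The agent 'consumes' the belief only if its psychic payoff
    exceeds what acting on it would cost materially."""
    return comfort > material_cost

# The nationalist's belief about foreign goods is cheap to act on...
print(holds_irrational_belief(comfort=10.0, material_cost=2.0))    # True
# ...while the drunk surgeon's belief would destroy his career.
print(holds_irrational_belief(comfort=10.0, material_cost=500.0))  # False
```

As the price of casting reason aside rises, the condition fails and the agent consumes less irrationality, exactly as the model says.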

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I may accompany whichever of A through E is chosen.

Let’s suppose there are three people: a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics but cares a lot about truth. Then we posit the following table of utilities:

        Right Winger   Left Winger   Independent
(A)          10             10           100
(B)         -10            -10          -100
(C)          10             10           100
(D)         -10            -10          -100
(E)           5              5            50
(F)         100            100             0
(G)        -100           -100             0
(H)        -100           -100             0
(I)         100            100             0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty and 50% that he was innocent. Using these assumptions, we can calculate the following overall expected utilities:

                     Right Winger   Left Winger   Independent
Claim Guilt              -100           100             0
Claim Innocence           100          -100             0
Confess Ignorance           5             5            50

(I won’t go through this calculation in detail; it should be evident that, given my simple assumptions about the probability and the values, there is no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
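For readers who want to verify the arithmetic, here is a minimal sketch of the calculation (the utilities are taken from the first table; the function and variable names are my own):

```python
# Expected utilities for the three options, per agent. Agent order in each
# tuple is (right winger, left winger, independent), as in the tables above.

P_GUILT = 0.5  # assumed probability that Kavanaugh was guilty

utilities = {
    "A": (10, 10, 100),     # believe guilty, turn out right
    "B": (-10, -10, -100),  # believe guilty, turn out wrong
    "C": (10, 10, 100),     # believe innocent, turn out right
    "D": (-10, -10, -100),  # believe innocent, turn out wrong
    "E": (5, 5, 50),        # admit ignorance (counted as true)
    "F": (100, 100, 0),     # say something bad about enemies
    "G": (-100, -100, 0),   # say something good about enemies
    "H": (-100, -100, 0),   # say something bad about allies
    "I": (100, 100, 0),     # say something good about allies
}

def expected_utility(agent, weighted_outcomes, political=None):
    """Probability-weighted truth payoff plus any political payoff."""
    total = sum(p * utilities[o][agent] for p, o in weighted_outcomes)
    if political is not None:
        total += utilities[political][agent]
    return total

RIGHT, LEFT, INDEPENDENT = 0, 1, 2
claim_guilt = [(P_GUILT, "A"), (1 - P_GUILT, "B")]
claim_innocence = [(P_GUILT, "D"), (1 - P_GUILT, "C")]
confess_ignorance = [(1.0, "E")]

# Claiming guilt attacks Kavanaugh: an ally (H) for the right winger, an
# enemy (F) for the left winger. Claiming innocence is the reverse (I/G).
print(expected_utility(RIGHT, claim_guilt, "H"))         # -100.0
print(expected_utility(LEFT, claim_guilt, "F"))          #  100.0
print(expected_utility(INDEPENDENT, claim_guilt))        #    0.0
print(expected_utility(RIGHT, claim_innocence, "I"))     #  100.0
print(expected_utility(LEFT, claim_innocence, "G"))      # -100.0
print(expected_utility(INDEPENDENT, claim_innocence))    #    0.0
print(expected_utility(RIGHT, confess_ignorance))        #    5.0
print(expected_utility(LEFT, confess_ignorance))         #    5.0
print(expected_utility(INDEPENDENT, confess_ignorance))  #   50.0
```

Given these values, the second table falls out directly.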

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and that the public statement is merely something external, not a belief at all.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while still having some qualms about it. Second, it is not completely immune from voluntary influences. In practice, in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. In a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. But if its falsity is the reason you cannot accept it, how is that not because you regard falsity as something really bad? In fact, lack of attractiveness is extremely relevant here. If people can believe in Xenu, they could equally well believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem here like Kavka’s toxin puzzle. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true just long enough to say so, and then immediately doing something else, is not choosing to believe it; it is choosing to tell a lie. Just as one cannot intend to drink the toxin without expecting to actually drink it, so one cannot choose to believe something without expecting to continue believing it for the foreseeable future. This is why one would not accept such a statement about the moon merely to prove a point (especially since it would prove nothing; no one would admit that you had succeeded in believing it), nor even for a very large incentive, say a million dollars for managing to believe it. Accepting such an offer would amount to giving up one’s concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that a sense of free will could arise in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs enough access to the world to learn about itself and its own effects on the world. That sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
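As a toy illustration (entirely my own; the taste scores are made up), the two ways of predicting one’s own behavior might look like this:

```python
# A toy contrast between the two self-prediction strategies described
# above: induction over past choices vs. inference of a final cause.

from collections import Counter

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla"]

def predict_by_habit(history):
    """Efficient causes: predict whatever I have usually done before."""
    return Counter(history).most_common(1)[0][0]

# Inferred goal: pleasant taste. The scores are invented for illustration.
taste_scores = {"vanilla": 0.9, "chocolate": 0.7}

def predict_by_goal(scores):
    """Final causes: predict whatever best serves the inferred goal."""
    return max(scores, key=scores.get)

print(predict_by_habit(past_choices))  # vanilla -- acting from habit
print(predict_by_goal(taste_scores))   # vanilla -- acting for a goal
```

Both strategies yield the same prediction here, but only the second makes the behavior intelligible as goal-seeking.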

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. They make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least at a concrete level, even if we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.