Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and argue for a position that I have argued many times in the past, namely that belief is voluntary. The example is simply a particularly good one for making the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
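Caplan’s model is simple enough to put in a few lines of code. Here is a toy sketch with made-up numbers (the function and values are mine; only the logic is his): the agent keeps a comforting false belief exactly as long as its psychic benefit exceeds the material price of acting on it.

```python
# Toy sketch of rational irrationality: preferences over beliefs
# plus a material price of holding them. All numbers are invented.

def consumes_irrationality(psychic_benefit, material_price):
    """True iff the agent holds on to the comforting belief."""
    return psychic_benefit > material_price

# The nationalist's belief costs a little extra at the store;
# the drunk surgeon's belief costs him his career.
for price in (5, 50, 5000):
    print(price, consumes_irrationality(psychic_benefit=100, material_price=price))
# -> 5 True, 50 True, 5000 False: as the price rises, irrationality is dropped
```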

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail.)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I might or might not come together with one of those from A through E.

Let’s suppose there are three people: a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics and instead cares a lot about truth. Then we posit the following table of utilities:

          Right Winger   Left Winger   Independent
(A)            10             10           100
(B)           -10            -10          -100
(C)            10             10           100
(D)           -10            -10          -100
(E)             5              5            50
(F)           100            100             0
(G)          -100           -100             0
(H)          -100           -100             0
(I)           100            100             0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty, and 50% that he was innocent. Using these assumptions, we can calculate the following expected utilities:

                    Right Winger   Left Winger   Independent
Claim Guilt             -100           100             0
Claim Innocence          100          -100             0
Confess Ignorance          5             5            50

(I won’t go through this calculation in detail; it should be evident that given my simple assumptions of the probability and values, there will be no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance, or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
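For those who want the arithmetic spelled out anyway, here is a minimal sketch that reproduces the table above. The utilities and the 50% probability are the post’s own assumptions; the dictionary layout and function names are mine.

```python
# Expected utilities for the three agents, using the outcome
# utilities (A)-(I) and the assumed 50% probability of guilt.

P_GUILTY = 0.5

U = {
    "right":       dict(A=10,  B=-10,  C=10,  D=-10,  E=5,  F=100, G=-100, H=-100, I=100),
    "left":        dict(A=10,  B=-10,  C=10,  D=-10,  E=5,  F=100, G=-100, H=-100, I=100),
    "independent": dict(A=100, B=-100, C=100, D=-100, E=50, F=0,   G=0,    H=0,    I=0),
}

# Claiming guilt says something bad about Kavanaugh: bad about an
# ally (H) for the right winger, bad about an enemy (F) for the left
# winger. Claiming innocence is the mirror image (I and G).
POLITICAL = {
    "right":       {"guilt": "H", "innocence": "I"},
    "left":        {"guilt": "F", "innocence": "G"},
    "independent": {"guilt": None, "innocence": None},
}

def expected_utility(agent, claim):
    u = U[agent]
    if claim == "ignorance":    # presumed true, no political content (E)
        return u["E"]
    if claim == "guilt":        # right with prob P_GUILTY (A), else wrong (B)
        eu = P_GUILTY * u["A"] + (1 - P_GUILTY) * u["B"]
    else:                       # innocence: right (C) or wrong (D)
        eu = (1 - P_GUILTY) * u["C"] + P_GUILTY * u["D"]
    side = POLITICAL[agent][claim]
    return eu + (u[side] if side is not None else 0)

for agent in U:
    for claim in ("guilt", "innocence", "ignorance"):
        print(f"{agent:12s} {claim:10s} {expected_utility(agent, claim):6.0f}")
```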

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and the statement is simply an external thing.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while having some qualms about it. Second, it is not completely immune from voluntary influences. In practice in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. And in a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. But if falsity is the reason you cannot accept it, how is that not because you regard falsity as really bad? In fact, lack of attractiveness is extremely relevant here. People who can believe in Xenu would find it equally possible to believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem like Kavka’s toxin here. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true enough to say so, and then immediately doing something else, is not choosing to believe it; it is choosing to tell a lie. Just as one cannot intend to drink the toxin without expecting to actually drink it, so one cannot choose to believe something without expecting to continue believing it for the foreseeable future. This is why one would not wish to accept such a statement about the moon, not only in order to prove a point (especially since it would prove nothing; no one would admit that you had succeeded in believing it), but not even for a very large incentive, say a million dollars if you managed to believe it. That would amount to being paid to give up one’s concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

Truth and Expectation II

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations for reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether or not someone’s expectations were correct or not. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it is quite likely that it seems to us that believing it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason and desiring to believe it for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and aware that the inclination exists for non-truth-related reasons; in that case you would not expect the belief to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth-related motives. In an earlier discussion of predictive processing, I suggested that the situation where people feel like they have to choose a goal is the result of such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it is easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy, at least about what it is going to do. This is true, but it is not a fair argument, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But in my theory the improved accuracy had to be implied first, in order to cause the desire.

The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty, and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does not that mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place, the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. As things stand, the Mormon is living a good Mormon life, and their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what it would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.
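The uncertainty-minimization point can be seen in miniature with a toy calculation (the action sets and probabilities below are invented for illustration): commitment, whether to a chosen goal or to a creed, concentrates the brain’s predicted distribution over actions, and so lowers its uncertainty, measured here as Shannon entropy.

```python
# Toy illustration: commitment concentrates the predicted action
# distribution and lowers the brain's uncertainty (Shannon entropy).
# All numbers are invented for illustration.

from math import log2

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Conflicted: outward conformity pulls one way, logical doubts the
# other, and the brain cannot tell which will win on a given day.
conflicted = {"conform": 0.4, "defect": 0.3, "waver": 0.3}

# Committed believer: behavior is almost fully settled.
committed = {"conform": 0.9, "defect": 0.05, "waver": 0.05}

print(f"conflicted: {entropy(conflicted):.2f} bits")  # ~1.57
print(f"committed:  {entropy(committed):.2f} bits")   # ~0.57
```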


Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.
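Here is a minimal numerical sketch of the comparison just made. The 52! figure comes from the story itself, while the lie rate is my own deliberately generous assumption, not a measured statistic.

```python
# Comparing "he's lying" with "true, and it happened by pure chance."
# The lie base rate below is an assumed, deliberately generous figure.

from math import factorial

p_chance = 1 / factorial(52)   # ~1.24e-68: a shuffled deck lands in its original order
p_lie = 1e-6                   # assumed rate of unmotivated flat-out lying

print(f"P(original order by chance) = {p_chance:.2e}")
print(f"P(lie) / P(chance)          = {p_lie / p_chance:.2e}")   # ~8.07e+61

# Restricting to these two alternatives:
print(f"P(lying | one or the other) = {p_lie / (p_lie + p_chance)}")  # 1.0 to machine precision
```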

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say, and it is not really about whether Razaali’s story happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.
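To see how badly miscalibrated the felt prior would be if taken at face value, here is a toy Bayesian check, again restricted to the two alternatives; the absurdly charitable prior odds are my own assumption.

```python
# Toy calibration check: even a prior of a billion to one AGAINST
# lying is swamped by the likelihood ratio. The prior is invented.

from math import factorial

prior_odds_lying = 1e-9                         # 1 : 1,000,000,000 against lying
likelihood_ratio = 1.0 / (1.0 / factorial(52))  # a lie explains the report ~certainly;
                                                # pure chance explains it with prob 1/52!

posterior_odds_lying = prior_odds_lying * likelihood_ratio
print(f"posterior odds of lying: {posterior_odds_lying:.2e}")  # ~8.07e+58
```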

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.