Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and argue for a position that I have argued many times in the past, namely that belief is voluntary. The example is simply a particularly good one for proving the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail.)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I might or might not come together with one of those from A through E.

Let’s suppose there are three people, a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics and instead cares a lot about truth. Then we posit the following table of utilities:

          Right Winger    Left Winger    Independent
(A)             10             10            100
(B)            -10            -10           -100
(C)             10             10            100
(D)            -10            -10           -100
(E)              5              5             50
(F)            100            100              0
(G)           -100           -100              0
(H)           -100           -100              0
(I)            100            100              0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty, and 50% that he was innocent. Using these assumptions, we can calculate the following expected utilities:

                      Right Winger    Left Winger    Independent
Claim Guilt               -100            100              0
Claim Innocence            100           -100              0
Confess Ignorance            5              5             50

(I won’t go through this calculation in detail; it should be evident that given my simple assumptions of the probability and values, there will be no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance, or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
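For readers who want to check the arithmetic, here is a minimal sketch of the calculation in Python. The numbers simply restate the utility table and the 50% assumption above; the variable and function names are my own labels.

    # Expected-utility sketch for the three characters in the post.
    # Utilities come from the table above; 0.5 is the assumed
    # probability that Kavanaugh was guilty.
    P_GUILTY = 0.5

    # Value of being right about the factual claim (being wrong costs
    # the same amount), and value of admitting ignorance.
    TRUTH_VALUE = {"right winger": 10, "left winger": 10, "independent": 100}
    IGNORANCE_VALUE = {"right winger": 5, "left winger": 5, "independent": 50}

    # Political payoff of each public claim: for the right winger,
    # "guilty" attacks an ally; for the left winger, it attacks an enemy;
    # the independent does not care either way.
    POLITICAL_VALUE = {
        "right winger": {"guilty": -100, "innocent": 100, "ignorant": 0},
        "left winger": {"guilty": 100, "innocent": -100, "ignorant": 0},
        "independent": {"guilty": 0, "innocent": 0, "ignorant": 0},
    }

    def expected_utility(person, claim):
        if claim == "ignorant":
            truth_part = IGNORANCE_VALUE[person]
        else:
            p_right = P_GUILTY if claim == "guilty" else 1 - P_GUILTY
            # Right with probability p_right, wrong otherwise.
            truth_part = (p_right - (1 - p_right)) * TRUTH_VALUE[person]
        return truth_part + POLITICAL_VALUE[person][claim]

    for person in ["right winger", "left winger", "independent"]:
        utilities = {c: expected_utility(person, c)
                     for c in ["guilty", "innocent", "ignorant"]}
        print(person, utilities, "best:", max(utilities, key=utilities.get))

Running this reproduces the second table: the right winger maximizes expected utility by claiming innocence, the left winger by claiming guilt, and the independent by confessing ignorance.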

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and the statement is simply an external thing.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while having some qualms about it. Second, it is not completely immune from voluntary influences. In practice in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. And in a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. But if falsity is the reason you cannot accept it, how is that not because you regard falsity as really bad? Lack of attractiveness is in fact extremely relevant here. If people can believe in Xenu, they would find it equally possible to believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem like Kavka’s Toxin here. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true enough to say so, then immediately doing something else, is not choosing to believe it, but rather choosing to tell a lie. So just as one cannot intend to drink the toxin without expecting to actually drink it, so one cannot choose to believe something without expecting to continue to believe it for the foreseeable future. This is why one would not wish to accept such a statement about the moon, whether in order to prove something (and it would prove nothing; no one would admit that you had succeeded in believing it), or even in exchange for a very large incentive, say a million dollars if you managed to believe it. This would amount to offering to pay someone to give up their concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

C.S. Lewis on Punishment

C.S. Lewis discusses a certain theory of punishment:

In England we have lately had a controversy about Capital Punishment. … My subject is not Capital Punishment in particular, but that theory of punishment in general which the controversy showed to be almost universal among my fellow-countrymen. It may be called the Humanitarian theory. Those who hold it think that it is mild and merciful. In this I believe that they are seriously mistaken. I believe that the “Humanity” which it claims is a dangerous illusion and disguises the possibility of cruelty and injustice without end. I urge a return to the traditional or Retributive theory not solely, not even primarily, in the interests of society, but in the interests of the criminal.

According to the Humanitarian theory, to punish a man because he deserves it, and as much as he deserves, is mere revenge, and, therefore, barbarous and immoral. It is maintained that the only legitimate motives for punishing are the desire to deter others by example or to mend the criminal. When this theory is combined, as frequently happens, with the belief that all crime is more or less pathological, the idea of mending tails off into that of healing or curing and punishment becomes therapeutic. Thus it appears at first sight that we have passed from the harsh and self-righteous notion of giving the wicked their deserts to the charitable and enlightened one of tending the psychologically sick. What could be more amiable? One little point which is taken for granted in this theory needs, however, to be made explicit. The things done to the criminal, even if they are called cures, will be just as compulsory as they were in the old days when we called them punishments. If a tendency to steal can be cured by psychotherapy, the thief will no doubt be forced to undergo treatment. Otherwise, society cannot continue.

My contention is that this doctrine, merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being.

The reason is this. The Humanitarian theory removes from Punishment the concept of Desert. But the concept of Desert is the only connecting link between punishment and justice. It is only as deserved or undeserved that a sentence can be just or unjust. I do not here contend that the question “Is it deserved?” is the only one we can reasonably ask about a punishment. We may very properly ask whether it is likely to deter others and to reform the criminal. But neither of these two last questions is a question about justice. There is no sense in talking about a “just deterrent” or a “just cure”. We demand of a deterrent not whether it is just but whether it will deter. We demand of a cure not whether it is just but whether it succeeds. Thus when we cease to consider what the criminal deserves and consider only what will cure him or deter others, we have tacitly removed him from the sphere of justice altogether; instead of a person, a subject of rights, we now have a mere object, a patient, a “case”.

Later in the essay, he gives some examples of how the Humanitarian theory will make things worse, as in the following case:

The immediate starting point of this article was a letter I read in one of our Leftist weeklies. The author was pleading that a certain sin, now treated by our laws as a crime, should henceforward be treated as a disease. And he complained that under the present system the offender, after a term in gaol, was simply let out to return to his original environment where he would probably relapse. What he complained of was not the shutting up but the letting out. On his remedial view of punishment the offender should, of course, be detained until he was cured. And of course the official straighteners are the only people who can say when that is. The first result of the Humanitarian theory is, therefore, to substitute for a definite sentence (reflecting to some extent the community’s moral judgment on the degree of ill-desert involved) an indefinite sentence terminable only by the word of those experts–and they are not experts in moral theology nor even in the Law of Nature–who inflict it. Which of us, if he stood in the dock, would not prefer to be tried by the old system?

This post will make three points:

(1) The “Humanitarian” theory is basically correct about the purpose of punishment.

(2) C.S. Lewis is right that there are good reasons to talk about justice and about what someone deserves or does not deserve. Such considerations are, as he supposes, essential to a system of justice. Lewis is also right to suppose that many supporters of the Humanitarian theory, despite being factually correct about the purpose of punishment, are mistaken in opposing such talk as cruel and immoral.

(3) Once the Humanitarian theory is corrected in such a way as to incorporate the notion of “just deserts”, Lewis’s objections fail.

Consider the first point, the purpose of punishment. There was already some discussion of this in a previous post. In a sense, everyone already knows that Humanitarians are right about the basic purpose of punishment, including C.S. Lewis. Lewis points out the obvious fact himself: whatever you call them and however you explain them, punishments for crime are compulsory in a society because “otherwise, society cannot continue.” But why cannot society continue without punishment? What supposedly would happen if you did not have any punishments? What would actually happen if a government credibly declared that it would never again punish anything?

What would actually happen, of course, is that this would amount to a declaration that the government was dissolving itself, and someone else would take over and establish new crimes and new punishments, either at the same level of generality as the original government, or at more local levels (e.g. perhaps each town would become a city-state). In any case each of the new governments would still have punishments, so you would not have succeeded in abolishing punishment.

What happens in the imaginary situation where you do succeed, where no one else takes over? This presumably would be a Hobbesian “state of nature,” which is not a society at all. In other words, the situation simply does not count as a society at all, unless certain rules are followed pretty consistently. And those rules will not be followed consistently without punishments. So it is easy to see why punishment exists: to make sure that those rules are followed, generally speaking. Since rules are meant to make some things happen and prevent other things, punishment is simply to make sure that the rules actually function as rules. But this is exactly what the Humanitarian theory says is the purpose of punishment: to make others less likely to break the rules, and to make the one who has already broken the rules less likely to break them in the future.

Thus C.S. Lewis himself is implicitly recognizing that the Humanitarians are basically right about the purpose of punishment, in acknowledging that punishment is necessary for the very existence of society.

Let’s go on to the second point, the idea of just deserts. C.S. Lewis is right that many proponents of the Humanitarian view believe either that the idea is absurd, or that if there is such a thing as deserving something, no one can deserve something bad, or that even if people can deserve things, this is not really a relevant consideration for a justice system. For example, it appears that Kelsey Piper, blogging at The Unit of Caring, believes something along these lines; here she has a pretty reasonable post responding to criticisms of the theory analogous to those of C.S. Lewis.

I will approach this by saying a few things about what a law is in general. St. Thomas defines law: “It is nothing else than an ordinance of reason for the common good, made by him who has care of the community, and promulgated.” But let’s drop the careful formulation and the conditions, as necessary as they may be. St. Thomas’s definition is simply a more detailed account of what everyone knows: a law is a rule that people invent for the benefit of a community.

Is there such a thing as an unjust law? In St. Thomas’s account, in a sense yes, and in a sense no. “For the common good” means that the law is beneficial. In that sense, if the law is “unjust,” it is harmful, and thus it is not for the common good. And in that sense it does not satisfy the definition of a law, and so is not a law at all. But obviously ordinary people will call it a law anyway, and in that way it is an unjust law, because it is unsuited to the purpose of a law.

Now here’s the thing. An apparent rule is not really a rule at all unless it tends to make something happen. In the case that we are talking about, namely human law, that generally means that laws require penalties for being broken in order to be laws at all. It is true that in a society with an extremely strong respect for law, it might occasionally be possible to make a law without establishing any specific penalty, and still have that law followed. The community would still need to leave itself the option of establishing a penalty; otherwise it would just be advice rather than a law.

This causes a slight problem. The purpose of a law is to make sure that certain things are done and others avoided, and the reason for penalties is to back up this purpose. But when someone breaks the law, the law has already failed. The very thing the law was meant to prevent has already happened. And what now? Should the person be punished? Why? To prevent the law from being broken? It has already been broken. So we cannot prevent it from being broken. And the thing is, punishment is something bad. So to inflict the punishment now, after the crime has already been committed, seems like just stacking one bad thing on top of another.

At this point the “Retributive” theory of justice will chime in. “We should still inflict the punishment because it is just, and the criminal deserves it.”

This is the appeal of the Humanitarian’s condemnation of the Retributive theory. The Retributive theory, the Humanitarian will say, simply asserts that something bad, namely the punishment, is in this situation something good, by bringing in the idea of “justice.” But this is a contradiction: something bad is bad by definition, and cannot be good.

The reader is perhaps beginning to understand the placement of the previous post. A law is established, with a penalty for being broken, in order to make certain things happen. This is like intending to drink the toxin. But if someone breaks the law, what is the point of inflicting the punishment? And the next morning, what is the point of drinking the toxin in the afternoon, when the money is already received or not? There is a difference of course, because in this case the dilemma only comes up because the law has been broken. We could make the cases more analogous, however, by stipulating in the case of Kavka’s toxin that the rich billionaire offers this deal: “The million will be found in your account, with a probability of 99.99%, if and only if you intend to drink the toxin only if the million is not found in your account (which will happen only in the unlucky 0.01% of cases), and you do not need to drink or intend to drink in the situation where the million is found in your account.” In this situation, the person might well reason thus:

If the morning comes and the million is not in my account, why on earth would I drink the toxin? This deal is super unfair.

Nonetheless, as in the original deal, there is one and only one way to get the million: namely, by planning to drink the toxin in that situation, and by planning not to reconsider, no matter what. As in the case of law, the probability factor that I added means that it is possible not to get the million, although you probably will. But the person who formed this intention will go through with it and drink the toxin, unless they reconsider; and they had the definite intention of not reconsidering.
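To put numbers on this (a small sketch, with C as my own label for however much the person disvalues actually having to drink the toxin): the expected value of committing is

    E[\text{commit}] = 0.9999 \times \$1{,}000{,}000 - 0.0001 \times C = \$999{,}900 - 0.0001\,C,

while refusing to commit yields nothing. So forming the intention is the better policy unless C is enormous, on the order of ten billion dollars’ worth of disvalue, even though in the unlucky 0.01% of cases the person ends up drinking the toxin with nothing to show for it.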

The situations are now more analogous, but there is still an additional difference, one that makes it even easier to decide to follow the law than to drink the toxin. The only reason to commit to drinking the toxin was to get the million, and in our current situation that has already failed. In the case of the law, one purpose, preventing the criminal from performing a certain action, has likewise already failed. But the law also has the purpose of preventing them from doing it in the future, and of preventing others from doing it. So there are additional motivations for carrying out the law.

We can leave the additional difference to the side for now, however. The point would be essentially valid even if you made a law to prevent one particular act, and that act ended up being done. The retributionist would say, “Ok, so applying the punishment at this point will not prevent the thing it was meant to prevent. But it is just, and the criminal deserves it, and we should still inflict it.” And they are right: the whole idea of establishing the rule included the idea that the punishment would actually be carried out, in this situation. There was a rule against reconsidering the rule, just as the fellow in the situation with the toxin planned not to reconsider their plan.

What is meant when it is said that a punishment is “just,” and that the criminal “deserves it,” then, is simply that it is what is required by the rules we have established, and that those rules are reasonable ones.

Someone will object here. It seems that this cannot be true, because some punishments are wicked and unjust even though there were rules establishing them. And it seems that this is because people simply do not deserve those things: so there must be such a thing as “what they deserve,” in itself and independent of any rules. But this is where we must return to the point made above about just and unjust laws. One hears, for example, of cases in which people were sentenced to death for petty theft. We can agree that this is unjust in itself: but this is precisely because the rule, “someone who steals food should be killed,” is not a reasonable rule which will benefit the community. You might have something good in mind for it, namely to prevent stealing, but if you carry out the penalty on even one occasion, you have done more harm than all the stealing put together. The Humanitarians are right that the thing inflicted in a punishment is bad, and remains bad. It does not become something good in that situation. And this is precisely why it needs some real proportion to the crime.

We can analyze the situation in two ways: from the point of view of the State, considered as though it were a kind of person, and from the point of view of the person who carries out the law. The State makes a kind of promise to inflict a punishment for some crimes, in such a way as to minimize the total harm of both the crimes and their punishment. Additionally, to some extent it promises not to reconsider this in the situation where a crime is actually committed. “To some extent” here is of course essential: such rules are not and should not be absolutely rigid. If the crime is actually committed, the State is in a situation like that of our person who finds himself without the million and having committed to drink the toxin in that situation: the normal result will be that the State inflicts the punishment, and the person drinks the toxin, without any additional consideration of motivations or reasons.

From the point of view of the individual, he carries out the sentence “because it is just,” i.e. because it is required by reasonable rules which we have established for the good of the community. And that, i.e. carrying out reasonable laws, is a good thing, even though the material content includes something bad. The moral object of the executioner is the fulfillment of justice, not the killing of a person.

We have perhaps already pointed the way to the last point, namely that with the incorporation of the idea of justice, C.S. Lewis’s criticisms fail. Lewis argues that if the purpose of punishment is medicinal, then it is in principle unlimited: but this is not true even of medicine. No one would take medicine which would cause more harm than the disease, nor would it be acceptable to compel someone else to take such medicine.

More importantly, Lewis’s criticisms play off the problems that arise from believing that one needs to ask at every point, “will the consequences of this particular punishment or action be good or not?” This is not necessary, because it is not the way law works, even though the general purpose of law is the one the Humanitarians suppose. Law only works because to some extent it promises not to reconsider, like our fellow in the case of Kavka’s toxin. Just as he would be wrong to focus on whether “drinking the toxin right now will harm me and not benefit me,” so the State would be wrong to focus too much on the particular consequences of carrying out the law right now, as opposed to the general consequences of the general law.

Thus for example Lewis supposes rulers considering the matter in an entirely utilitarian way:

But that is not the worst. If the justification of exemplary punishment is not to be based on desert but solely on its efficacy as a deterrent, it is not absolutely necessary that the man we punish should even have committed the crime. The deterrent effect demands that the public should draw the moral, “If we do such an act we shall suffer like that man.” The punishment of a man actually guilty whom the public think innocent will not have the desired effect; the punishment of a man actually innocent will, provided the public think him guilty. But every modern State has powers which make it easy to fake a trial. When a victim is urgently needed for exemplary purposes and a guilty victim cannot be found, all the purposes of deterrence will be equally served by the punishment (call it “cure” if you prefer) of an innocent victim, provided that the public can be cheated into thinking him guilty. It is no use to ask me why I assume that our rulers will be so wicked.

As said, this is not the way law works. The question will be about which laws are reasonable and beneficial in general, not about whether such and such particular actions are beneficial in particular cases. Consider a proposed law formulated with such an idea in mind:

When the ruling officials believe that it is urgently necessary to deter people from committing a crime, and no one can be found who has actually committed it, the rulers are authorized to deceive the public into believing that an innocent man has committed the crime, and to punish that innocent man.

It should not be necessary to make a long argument that as a general rule, this does not serve the good of a community, regardless of what might happen in particular cases. In this way it is quite right to say that this is unjust in itself. This does not, however, establish that “what someone deserves” has any concrete content which is not established by law.

As a sort of footnote to this post, we might note that “deserts” are sometimes extended to natural consequences in much the way “law” is extended to laws of nature, mathematics, or logic. For example, Bryan Caplan distinguishes “deserving” and “undeserving” poor:

I propose to use the same standard to identify the “deserving” and “undeserving” poor.  The deserving poor are those who can’t take – and couldn’t have taken – reasonable steps to avoid poverty. The undeserving poor are those who can take – or could have taken – reasonable steps to avoid poverty.  Reasonable steps like: Work full-time, even if the best job you can get isn’t fun; spend your money on food and shelter before you get cigarettes or cable t.v.; use contraception if you can’t afford a child.  A simple test of “reasonableness”: If you wouldn’t accept an excuse from a friend, you shouldn’t accept it from anyone.

This is rather different from the sense discussed in this post, but you could view it as an extension of it. It is a rule (of mathematics, really) that “if you spend all of your money you will not have any left,” and we probably do not need to spend much effort trying to change this situation, considered in general, even if we might want to change it for an individual.

Common Sense and Culture

If we compare what I said about common sense to the letter of St. Augustine on the errors of the Donatists, quoted here, it seems that St. Augustine takes his belief in Christianity to be a matter of accepting common sense:

For they prefer to the testimonies of Holy Writ their own contentions, because, in the case of Cæcilianus, formerly a bishop of the Church of Carthage, against whom they brought charges which they were and are unable to substantiate, they separated themselves from the Catholic Church—that is, from the unity of all nations. Although, even if the charges had been true which were brought by them against Cæcilianus, and could at length be proved to us, yet, though we might pronounce an anathema upon him even in the grave, we are still bound not for the sake of any man to leave the Church, which rests for its foundation on divine witness, and is not the figment of litigious opinions, seeing that it is better to trust in the Lord than to put confidence in man. For we cannot allow that if Cæcilianus had erred,— a supposition which I make without prejudice to his integrity—Christ should therefore have forfeited His inheritance. It is easy for a man to believe of his fellow-men either what is true or what is false; but it marks abandoned impudence to desire to condemn the communion of the whole world on account of charges alleged against a man, of which you cannot establish the truth in the face of the world.

It is true that St. Augustine talks about “divine witness” and so on here, but it is also easy to see that a significant source of his confidence is existing widespread religious agreement. It is foolish to abandon “the unity of all nations,” and impudent to “condemn the communion of the whole world.” And the problem with “charges alleged against a man, of which you cannot establish the truth in the face of the world,” is that if you disagree with the common consent of mankind, you should first attempt to convince others before putting forward your personal ideas as absolute truth.

Is common sense a real reason for St. Augustine’s religious position, or is he merely attempting to justify himself? Consider his famous rebuke of those who attack science in the name of religion:

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking non-sense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of the faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason? Reckless and incompetent expounders of holy Scripture bring untold trouble and sorrow on their wiser brethren when they are caught in one of their mischievous false opinions and are taken to task by those who are not bound by the authority of our sacred books. For then, to defend their utterly foolish and obviously untrue statements, they will try to call upon Holy Scripture for proof and even recite from memory many passages which they think support their position, although “they understand neither what they say nor the things about which they make assertion.”

St. Augustine in fact seems to be giving priority to common sense over religion here. If your religion contradicts common sense, your religion is wrong and common sense is right. This suggests that his argument for his religion from common sense is an honest one; it might even be his strongest reason for his belief.

As I said in the earlier post, the argument for religion from the consent of humanity had problems even at the time, and as things stand, it has no real relevance. There is no religious doctrine, let alone any religion, that one could reasonably say is accepted by even a majority of humanity, let alone by all. At any rate, this is the case unless one makes one’s doctrine far vaguer than would be permitted by any religion.

I concluded above that St. Augustine’s defense of common sense is likely an honest one. But note that this was not necessary: it would be perfectly possible for someone to defend common sense in order to justify themselves, without actually caring about the truth of common sense. In fact, consider what I said here about Scott Sumner and James Larson. Larson’s claim to accept realism is basically not an honest one. I do not mean that he does not believe it, but that its truth is irrelevant to him. What matters to him is that he can seemingly justify himself in maintaining his religious position in the face of all opposition.

Consider the cynical position of Francis Bacon about people relative to truth, discussed here. According to Bacon, no one is interested in truth in itself, but only as a means to other things. While the cynical position overall is incorrect, there is a lot of truth in it. Consequently, it will not be uncommon for someone to defend common sense, not so much because of its truth, but as part of a larger project of defending their culture. Culture is bound up with claims about the world, and defending culture therefore involves defending claims about the world. And if everyone accepts something, presumably everyone in your culture accepts it. One sign of this, of course, would be if someone passes freely back and forth between putting forth things that everyone accepts, and things that everyone in their culture accepts, as though these were equivalent.

Likewise, someone can attack common sense, not for the purpose of truth, but in order to engage in a kind of culture war. Consider the recent comments by “werzekeugjj” on the last post. There is no option here but to explain these comments with the methods of Ezekiel Bulver. For they cannot possibly represent opinions about the world at all, let alone opinions that were arrived at by honest means. Werzekeugjj, for example, responds to the question, “Do people sometimes write comments?” with “No.” As I pointed out there, if they do not, then he did not compose those comments, and there is nothing to reply to. As Aristotle puts it,

We can, however, demonstrate negatively even that this view is impossible, if our opponent will only say something; and if he says nothing, it is absurd to seek to give an account of our views to one who cannot give an account of anything, in so far as he cannot do so. For such a man, as such, is from the start no better than a vegetable.

Nor is it possible to apply a principle of charity here and say that Werzekeugjj intends to say that their claims are true in some complicated metaphysical sense. This does apply to the position of the blogger from Atheism and the City, discussed in that post. He presumably does not intend to reject common sense. I simply point out in my response that common sense is enough to draw the conclusions about causality that matter. The point is that this cannot apply to Werzekeugjj’s expressed position, because I spoke expressly of things in the everyday way, and the response was that the everyday claims themselves are false.

Of course, no one actually thinks that the everyday claims are false, including Werzekeugjj. What was the purpose of composing these comments, then?

We can gather a clue from this comment:

“in such a block unniverse there is no time flow
so your point on finalism or causality is moot
same with God
they don’t exist”

The body of the post does not mention God, and God is not the topic. Why then does Werzekeugjj bring up God here? The most likely motivation is the kind of culture war motivation discussed here. Werzekeugjj associated talk of causality and reasons with talk of God, and intends to attack a culture that speaks this way with whatever it takes, including a full on rejection of common sense. Science has shown that your common sense views of the world are entirely false, Werzekeugjj says, and therefore you might as well abandon the rest of your culture (including its talk of God) along with the rest of your views.

Supposedly describing their intentions, Werzekeugjj says,

i’m not trying to understand the world or to change your mind but i’m trying to state what is true
and i’m puzzled by how you think there is no problem with arguments like these

This is false, precisely as a description of their personal motives. No one who says that balls never break windows and that they did not write their comments (in the very comments themselves) can pretend to be “trying to state what is true.” Sorry, but that is not your intention. More reasonably, we can suppose that Werzekeugjj sees my post as part of a project of defending a certain culture, and they intend to attack that culture.

But that is an inaccurate understanding of the post. I defend common sense because it is right, not because it is a part of any particular culture. As Bryan Caplan puts it, “Common sense is the foundation of all reasoning.  If you want to reject a common-sense claim, you’d better do it in the name of an even stronger common-sense claim.”

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking, which I discussed previously. He views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that they are inevitable, and thus without realizing to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
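In terms of probability (a small sketch, with M as my label for “my life has a positive meaning” and E for any particular explanation of why):

    P(M) \ge P(M \wedge E) = P(E)\,P(M \mid E),

so the bare claim is always at least as probable as the claim conjoined with a specific explanation, and judging the conjunction to be the more likely of the two is precisely the conjunction fallacy.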

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are distinct goods, and one who desires both will likely engage in some trade. Indeed, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be better attained with rather unlikely beliefs, in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at his account: you will not notice how much you are losing.

Do I Really Want To Know?

Some days ago I asked how we can determine whether we really love the truth or not. Bryan Caplan’s account of preferences over beliefs and rational irrationality indicates that there may be an additional impediment to answering this question correctly, besides the factors mentioned in the first post. I may care more or less about the truth about various issues, especially depending on how they relate to other things I care about. Now consider the difference between “I have a deep love for the truth,” and “I don’t care much about the truth.”

For most people, the former statement is likely to appear attractive, and the latter unattractive. Let’s suppose we are trying to determine which one is actually true. If the first one is true, then we would care about the truth about ourselves, and we would make a decent effort to determine the truth, presumably arriving at the conclusion that the first is true (since it is true by hypothesis.)

But suppose the second is true. In that case, we are unlikely to make a great effort to determine the actual truth. Instead, we are likely to believe the more attractive opinion, namely the first, unless the costs of believing this are too high.

In principle, believing that I have a deep love for truth when in fact I do not could have a very high cost indeed. But in practice this would be by a very circuitous route, and frequently the costs would not be immediate or apparent in any way. Consequently someone who does not care much about the truth is likely to believe that he does care a lot, and is only likely to change his mind when the costs of his error become apparent, just like the person who becomes uncertain when he is offered a bet. Under normal circumstances, then, most people will hold the first belief, regardless of whether the first or the second is actually true.


Rational Irrationality

After giving reasons for thinking that people have preferences over beliefs, Bryan Caplan presents his model of rational irrationality, namely the factors that determine whether or not people give in to such preferences or resist them.

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impressions, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

The cost of error varies with the belief and the believer’s situation. For some people, the belief that the American Civil War came before the American Revolution would be a costly mistake. A history student might fail his exam, a history professor ruin his professional reputation, a Civil War reenactor lose his friends’ respect, a public figure face damaging ridicule.

Normally, however, a firewall stands between this mistake and “real life.” Historical errors are rarely an obstacle to wealth, happiness, descendants, or any standard metric of success. The same goes for philosophy, religion, astronomy, geology, and other “impractical” subjects. The point is not that there is no objectively true answer in these fields. The Revolution really did precede the Civil War. But your optimal course of action if the Revolution came first is identical to your optimal course if the Revolution came second.

To take another example: Think about your average day. What would you do differently if you believed that the earth began in 4004 B.C., as Bishop Ussher infamously maintained? You would still get out of bed, drive to work, eat lunch, go home, have dinner, watch TV, and go to sleep. Ussher’s mistake is cheap.

Virtually the only way that mistakes on these questions injure you is via their social consequences. A lone man on a desert island could maintain practically any historical view with perfect safety. When another person washes up, however, there is a small chance that odd historical views will reduce his respect for his fellow islander, impeding cooperation. Notice, however, that the danger is deviance, not error. If everyone else has sensible historical views, and you do not, your status may fall. But the same holds if everyone else has bizarre historical views and they catch you scoffing.

To use economic jargon, the private cost of an action can be negligible, though its social cost is high. Air pollution is the textbook example. When you drive, you make the air you breathe worse. But the effect is barely perceptible. Your willingness to eliminate your own emissions might be a tenth of a cent. That is the private cost of your pollution. But suppose that you had the same impact on the air of 999,999 strangers. Each disvalues your emissions by a tenth of a cent too. The social cost of your activity—the harm to everyone including yourself—is $1,000, a million times the private cost.

Caplan thus makes the general points that our beliefs on many topics cannot hurt us directly, and that they frequently can hurt us only by means of their social consequences. He adds the final point that the private cost of an action—or in this case a belief—may be very different from the total cost.
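Since the arithmetic in the quoted pollution example is easy to lose in the prose, here is a minimal sketch of it (the figures are the ones quoted above; the code itself is only my illustration, not anything from Caplan):

```python
# Private vs. social cost of one driver's emissions, using the figures in the
# quoted passage: each of 1,000,000 people (the driver plus 999,999 strangers)
# disvalues the emissions at a tenth of a cent.
cost_per_person = 0.001        # dollars, i.e. a tenth of a cent
people_affected = 1_000_000    # the driver himself plus 999,999 strangers

private_cost = cost_per_person                    # the harm the driver himself bears
social_cost = cost_per_person * people_affected   # the harm to everyone, driver included

print(f"private cost: ${private_cost:.3f}")               # $0.001
print(f"social cost:  ${social_cost:,.2f}")                # $1,000.00
print(f"ratio: {social_cost / private_cost:,.0f} to 1")    # 1,000,000 to 1
```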

Finally, Caplan presents his economic model of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
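To make the "preferences and prices" framing concrete, here is a toy sketch of an agent who weighs the psychological benefit of a favored belief against the material cost of being wrong. It is my own illustration of the quoted idea, not Caplan's, and all the numbers are invented:

```python
# A toy "demand curve for irrationality": the agent indulges a comforting
# belief only while its psychological benefit exceeds the material cost of
# being wrong. All numbers are invented; this only illustrates the quoted idea.

def indulges_comforting_belief(psych_benefit: float, material_cost: float) -> bool:
    return psych_benefit > material_cost

psych_benefit = 50.0
for material_cost in (0.0, 10.0, 50.0, 500.0):
    verdict = indulges_comforting_belief(psych_benefit, material_cost)
    print(f"cost of error = {material_cost:>6.1f} -> indulges comforting belief: {verdict}")
```

As the price of error rises past the psychological benefit, the agent stops "consuming" the irrational belief, which is exactly the demand-curve behavior the passage describes.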

As I said in the last post, one reason why people argue against such a view is that it can seem psychologically implausible. Caplan takes note of the same fact:

Arguably the main reason why economists have not long since adopted an approach like mine is that it seems psychologically implausible. Rational irrationality appears to map an odd route to delusion:

Step 1: Figure out the truth to the best of your ability.

Step 2: Weigh the psychological benefits of rejecting the truth against its material costs.

Step 3: If the psychological benefits outweigh the material costs, purge the truth from your mind and embrace error.

The psychological plausibility of this stilted story is underwhelming.

Of course, this process is not so conscious and explicit in reality, and this is why the above seems so implausible. Caplan presents the more realistic version:

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.
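Read as a decision procedure, the toned-down steps can be sketched roughly as follows. This is only my paraphrase in code, with an invented threshold standing in for "substantial material costs," and with the balancing in step 5 simplified so that objectivity wins whenever error is costly:

```python
# A toy rendering of the toned-down five-step process quoted above.
# The threshold and example inputs are invented for illustration, and steps 4-5
# are simplified so that objectivity always prevails when error is costly.

def form_belief(emotionally_charged: bool,
                material_cost_of_error: float,
                true_answer: str,
                comforting_answer: str) -> str:
    if not emotionally_charged:
        return true_answer                    # Step 1: no attachment, so just be rational
    if material_cost_of_error < 1.0:          # Steps 2-3: error is (nearly) free
        return comforting_answer              # "go with the flow"
    return true_answer                        # Steps 4-5: error is costly, objectivity engages

# The same question answered at two different prices of error.
print(form_belief(True, 0.0, "trade is positive-sum", "trade is zero-sum"))
print(form_belief(True, 100.0, "trade is positive-sum", "trade is zero-sum"))
```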

Caplan offers various examples of this process happening in practice. I will include here only the last example:

Want to bet? We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.

Bryan Caplan’s account is very closely related to what I have argued elsewhere, namely that people are more influenced by non-truth-related motives in areas remote from the senses. Caplan’s account explains that a large part of the reason for this is simply that being mistaken is less harmful in these areas (at least in a material sense), and consequently that people care less about whether their views in these areas are true, and care more about other factors. This also explains why the person offered a bet in the example changes his mind: what matters is not simply whether the truth of the matter can be determined by sensible experience, but whether a mistaken opinion in this particular case is likely to cause harm.
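The bet example can also be restated as simple expected-value arithmetic. The ten-to-one odds come from the quoted passage, while the specific confidence levels below are invented for illustration:

```python
# Why an unexpected bet "moderates" beliefs: giving ten-to-one odds means
# risking 10 to win 1, which only pays if your real confidence is very high.
stake = 10.0   # lost if you turn out to be wrong
payoff = 1.0   # won if you turn out to be right

def expected_value(p_right: float) -> float:
    return p_right * payoff - (1.0 - p_right) * stake

for p in (0.60, 0.90, 0.95):
    print(f"confidence {p:.2f}: expected value {expected_value(p):+.2f}")

# Break-even confidence is stake / (stake + payoff) = 10/11, about 0.91, so
# anyone whose real confidence falls below that level should decline the bet.
```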

Nonetheless, even if you do care about truth because error can harm you, this too is a love of sweetness, not of truth.

Bryan Caplan on Preferences Over Beliefs

Responding to the criticism mentioned in the previous post, Caplan begins by noting that it is quite possible to observe preferences:

I observe one person’s preferences every day—mine. Within its sphere I trust my introspection more than I could ever trust the work of another economist. Introspection tells me that I am getting hungry, and would be happy to pay a dollar for an ice cream bar. If anything qualifies as “raw data,” this does. Indeed, it is harder to doubt than “raw data” that economists routinely accept—like self-reported earnings.

One thing my introspection tells me is that some beliefs are more emotionally appealing than their opposites. For example, I like to believe that I am right. It is worse to admit error, or lose money because of error, but error is disturbing all by itself. Having these feelings does not imply that I indulge them—no more than accepting money from a source with an agenda implies that my writings are insincere. But the temptation is there.

After this discussion of his own experience, he considers the experience of others:

Introspection is a fine way to learn about your own preferences. But what about the preferences of others? Perhaps you are so abnormal that it is utterly misleading to extrapolate from yourself to the rest of humanity. The simplest way to check is to listen to what other people say about their preferences.

I was once at a dinner with Gary Becker where he scoffed at this idea. His position, roughly, was, “You can’t believe what people say,” though he still paid attention when the waiter named the house specialties. Yes, there is a sound core to Becker’s position. People fail to reflect carefully. People deceive. But contrary to Becker, these are not reasons to ignore their words. We should put less weight on testimony when people speak in haste, or have an incentive to lie. But listening remains more informative than plugging your ears. After all, human beings can detect lies as well as tell them. Experimental psychology documents that liars sometimes give themselves away with demeanor or inconsistencies in their stories.

Once we take the testimony of mankind seriously, evidence of preferences over beliefs abounds. People can’t shut up about them. Consider the words of philosopher George Berkeley:

“I can easily overlook any present momentary sorrow when I reflect that it is in my power to be happy a thousand years hence. If it were not for this thought I had rather be an oyster than a man.”

Paul Samuelson himself revels in the Keynesian revelation, approvingly quoting Wordsworth to capture the joy of the General Theory: “Bliss was it in that dawn to be alive, but to be young was very heaven!”

Many autobiographies describe the pain of abandoning the ideas that once gave meaning to the author’s life. As Whittaker Chambers puts it:

“So great an effort, quite apart from its physical and practical hazards, cannot occur without a profound upheaval of the spirit. No man lightly reverses the faith of an adult lifetime, held implacably to the point of criminality. He reverses it only with a violence greater than the faith he is repudiating.”

No wonder that—in his own words—Chambers broke with Communism “slowly, reluctantly, in agony.” For Arthur Koestler, deconversion was “emotional harakiri.” He adds, “Those who have been caught by the great illusion of our time, and have lived through its moral and intellectual debauch, either give themselves up to a new addiction of the opposite type, or are condemned to pay with a lifelong hangover.” Richard Wright laments, “I knew in my heart that I should never be able to feel with that simple sharpness about life, should never again express such passionate hope, should never again make so total a commitment of faith.”

The desire for “hope and illusion” plays a role even in mental illness. According to his biographer, Nobel Prize winner and paranoid schizophrenic John Nash often preferred his fantasy world—where he was a “Messianic godlike figure”—to harsh reality:

“For Nash, the recovery of everyday thought processes produced a sense of diminution and loss…. He refers to his remissions not as joyful returns to a healthy state, but as ‘interludes, as it were, of enforced rationality.'”

One criticism here might go as follows. Yes, Caplan has done a fine job of showing that people find some beliefs attractive and others unattractive, that some beliefs make them happy and some unhappy. But one can argue, as C.S. Lewis does, that this does not imply that this is why they hold those beliefs: it is likely enough that they have some real reasons as well, and in that case their preferences are irrelevant.

One basis for this objection is probably the idea that sitting down and choosing to believe something seems psychologically implausible. But the choice does not have to happen so explicitly, even though explicit choice is more possible than people might think. The fact that such preferences can be felt as “temptations,” as Caplan puts it in describing his own experience, indicates that it is entirely possible to give in to the temptation or to resist it, and thus that we choose our beliefs in effect, even if the choice is not an explicit thought.

We could compare such situations to the situation of someone addicted to smoking or drinking. Let’s suppose they are trying to get over it, but constantly falling back into the behavior. It may be psychologically implausible to assert, “He says he wants to get over it, but he is just faking. He actually prefers to remain addicted.” But this does not change the fact that every time he goes to the store to buy cigarettes, every time he takes one out to light it, every time he steps outside for a smoke, he exercises his power of choice. In the same way, we determine our beliefs by concrete choices, even though in many cases the idea that the person could have simply decided to choose the opposite belief may be implausible. I have discussed this kind of thing earlier, as for example here. When we are engaged in an argument with someone, and they seem to be getting the better of the argument, it is one choice if we say, “You’re probably right,” and another choice if we say, “You’re just wrong, but you’re clearly incapable of understanding the truth of the matter…” In any case it is certainly a choice, even if it does not feel like one, just as the smoker or the alcoholic may not feel like he has a choice about smoking and drinking.

Caplan has a last consideration:

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

It is unlikely that Bryan Caplan means to say that your friend here is wicked rather than insane. Clearly someone living in the present who believes that he is Napoleon is insane, in the sense that his mind is not working normally. But Caplan’s point is that you cannot simply say, “His mind is not working normally, and therefore he holds an arbitrary belief with no relationship to reality.” Instead, he holds a belief which includes something that many people would like to think, namely, “I am a famous and important person,” but which most ordinary people do not in fact think, because it is obviously false (in most cases). So one way that the person’s mind works differently is that reality does not have as much power to prevent him from holding attractive beliefs as it has for normal people, much like the case of John Nash as described by Caplan. But the fact that some beliefs are attractive is not a way in which he differs. It is a way in which he is like all of us.

The point about trade is that everyone who buys something at a store believes that he is making himself better off by his purchase, and knows that he makes the store better off as well. So someone who says that trade is zero-sum is contradicting this obvious fact; his claim cannot be due to a lack of evidence regarding the mutual utility of trade.
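The same point can be put in terms of gains from trade: a voluntary purchase happens only when the buyer values the good above the price and the seller below it, so both sides come out ahead. A minimal sketch, with invented numbers:

```python
# Trade as a positive-sum game: a voluntary purchase leaves both sides better off.
# All numbers are invented for illustration.
buyer_valuation = 5.00   # the most the buyer would pay for the item
seller_cost = 2.00       # the least the seller would accept
price = 3.50             # the agreed price

buyer_surplus = buyer_valuation - price    # 1.50: the buyer gains
seller_surplus = price - seller_cost       # 1.50: the store gains
total_surplus = buyer_surplus + seller_surplus

print(buyer_surplus, seller_surplus, total_surplus)  # 1.5 1.5 3.0
```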

Love of Truth and Love of Self

Love of self is natural and can extend to almost any aspect of ourselves, including our beliefs. In other words, we tend to love our beliefs because they are ours. This is a kind of “sweetness”. As suggested in the linked post, since we believe that our beliefs are true, it is not easy to distinguish between loving our beliefs for the sake of truth, and loving them because they are ours. But these are two different things: the first is the love of truth, and the second is an aspect of love of self.

Just as we love ourselves, we love the wholes of which we are parts: our family, our country, our religious communities, and so on. These are better than pure love of self, but they too can represent a kind of sweetness: if we love our beliefs because they are the beliefs of our family, of our friends, of our religious and political communities, or because they are part of our worldview, none of these things is the love of truth, whether or not the beliefs are actually true.

This raises two questions: first, how do we know whether we are acting out of the love of truth, or out of some other love? And second, if there is a way to answer the first question, what can we do about it?

These questions are closely related to a frequent theme of this blog, namely voluntary beliefs, and the motives for these beliefs. Bryan Caplan, in his book The Myth of the Rational Voter, discusses these things under the name of “preferences over beliefs”:

The desire for truth can clash with other motives. Material self-interest is the leading suspect. We distrust salesmen because they make more money if they shade the truth. In markets for ideas, similarly, people often accuse their opponents of being “bought,” their judgment corrupted by a flow of income that would dry up if they changed their minds. Dasgupta and Stiglitz deride the free-market critique of antitrust policy as “well-funded” but “not well-founded.” Some accept funding from interested parties, then bluntly speak their minds anyway. The temptation, however, is to balance being right and being rich.

Social pressure for conformity is another force that conflicts with truth-seeking. Espousing unpopular views often transforms you into an unpopular person. Few want to be pariahs, so they self-censor. If pariahs are less likely to be hired, conformity blends into conflict of interest. However, even bereft of financial consequences, who wants to be hated? The temptation is to balance being right and being liked.

But greed and conformism are not the only forces at war with truth. Human beings also have mixed cognitive motives. One of our goals is to reach correct answers in order to take appropriate action, but that is not the only goal of our thought. On many topics, one position is more comforting, flattering, or exciting, raising the danger that our judgment will be corrupted not by money or social approval, but by our own passions.

Even on a desert isle, some beliefs make us feel better about ourselves. Gustave Le Bon refers to “that portion of hope and illusion without which [men] cannot live.” Religion is the most obvious example. Since it is often considered rude to call attention to the fact, let Gaetano Mosca make the point for me:

“The Christian must be enabled to think with complacency that everybody not of the Christian faith will be damned. The Brahman must be given grounds for rejoicing that he alone is descended from the head of Brahma and has the exalted honor of reading the sacred books. The Buddhist must be taught highly to prize the privilege he has of attaining Nirvana soonest. The Mohammedan must recall with satisfaction that he alone is a true believer, and that all others are infidel dogs in this life and tormented dogs in the next. The radical socialist must be convinced that all who do not think as he does are either selfish, money-spoiled bourgeois or ignorant and servile simpletons. These are all examples of arguments that provide for one’s need of esteeming one’s self and one’s own religion or convictions and at the same time for the need of despising and hating others.”

Worldviews are more a mental security blanket than a serious effort to understand the world: “Illusions endure because illusion is a need for almost all men, a need they feel no less strongly than their material needs.” Modern empirical work suggests that Mosca was on to something: The religious consistently enjoy greater life satisfaction. No wonder human beings shield their beliefs from criticism, and cling to them if counterevidence seeps through their defenses.

Most people find the existence of mixed cognitive motives so obvious that “proof” is superfluous. Jost and his coauthors casually remark in the Psychological Bulletin that “Nearly everyone is aware of the possibility that people are capable of believing what they want to believe, at least within certain limits.” But my fellow economists are unlikely to sign off so easily. If one economist tells another, “Your economics is just a religion,” the allegedly religious economist normally takes the distinction between “emotional ideologue” and “dispassionate scholar” for granted, and paints himself as the latter. But when I assert the generic existence of preferences over beliefs, many economists challenge the whole category. How do I know preferences over beliefs exist? Some eminent economists imply that this is impossible to know because preferences are unobservable.

This is very similar to points that I have made from time to time on this blog. Like Caplan, I consider the fact that beliefs have a voluntary character, at least up to a certain point, to be virtually obvious. Likewise, Caplan points out that in the midst of a discussion an economist may take for granted the idea of the “emotional ideologue,” namely someone whose beliefs are motivated by emotions, but frequently he will not concede the point in generic terms. In a similar way, people in general constantly recognize the influence of motives on beliefs in particular cases, especially in regard to other people, but they frequently fight against the concept in general. C.S. Lewis is one example, although he does concede the point to some extent.

In the next post I will look at Caplan’s response to the economists, and at some point after that bring the discussion back to the question about the love of truth.