Rational Irrationality

After giving reasons for thinking that people have preferences over beliefs, Bryan Caplan presents his model of rational irrationality, namely the factors that determine whether people give in to such preferences or resist them.

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impressions, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

The cost of error varies with the belief and the believer’s situation. For some people, the belief that the American Civil War came before the American Revolution would be a costly mistake. A history student might fail his exam, a history professor ruin his professional reputation, a Civil War reenactor lose his friends’ respect, a public figure face damaging ridicule.

Normally, however, a firewall stands between this mistake and “real life.” Historical errors are rarely an obstacle to wealth, happiness, descendants, or any standard metric of success. The same goes for philosophy, religion, astronomy, geology, and other “impractical” subjects. The point is not that there is no objectively true answer in these fields. The Revolution really did precede the Civil War. But your optimal course of action if the Revolution came first is identical to your optimal course if the Revolution came second.

To take another example: Think about your average day. What would you do differently if you believed that the earth began in 4004 B.C., as Bishop Ussher infamously maintained? You would still get out of bed, drive to work, eat lunch, go home, have dinner, watch TV, and go to sleep. Ussher’s mistake is cheap.

Virtually the only way that mistakes on these questions injure you is via their social consequences. A lone man on a desert island could maintain practically any historical view with perfect safety. When another person washes up, however, there is a small chance that odd historical views will reduce his respect for his fellow islander, impeding cooperation. Notice, however, that the danger is deviance, not error. If everyone else has sensible historical views, and you do not, your status may fall. But the same holds if everyone else has bizarre historical views and they catch you scoffing.

To use economic jargon, the private cost of an action can be negligible, though its social cost is high. Air pollution is the textbook example. When you drive, you make the air you breathe worse. But the effect is barely perceptible. Your willingness to eliminate your own emissions might be a tenth of a cent. That is the private cost of your pollution. But suppose that you had the same impact on the air of 999,999 strangers. Each disvalues your emissions by a tenth of a cent too. The social cost of your activity—the harm to everyone including yourself—is $1,000, a million times the private cost.
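Caplan's arithmetic here can be checked in a few lines (the variable names are mine; the figures are the ones in the passage):

```python
# Private vs. social cost of pollution, using the figures from the passage.
private_cost = 0.001          # you disvalue your own emissions at a tenth of a cent
people_affected = 1_000_000   # yourself plus 999,999 strangers, each harmed equally
social_cost = private_cost * people_affected

print(social_cost)            # 1000.0 dollars: a million times the private cost
```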

Caplan thus makes the general point that mistaken beliefs on many topics cannot hurt us directly, and frequently can hurt us only through their social consequences. He adds the final point that the private cost of an action (or, in this case, of a belief) may be very different from the total cost.

Finally, Caplan presents his economic model of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
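The model really is just a demand curve. A minimal toy sketch (the numbers and function are my own invention, not Caplan's) shows the price-sensitivity he describes: consumption of irrationality falls as its material price rises.

```python
# Toy demand curve for irrationality: each successive "unit" of comforting
# belief yields diminishing psychological comfort, and a unit is consumed
# only while its marginal comfort still exceeds its material price.

def irrationality_consumed(price, base_comfort=1.0, max_units=10):
    consumed = 0
    for n in range(1, max_units + 1):
        marginal_comfort = base_comfort / n   # diminishing returns to comfort
        if marginal_comfort > price:
            consumed += 1
    return consumed

print(irrationality_consumed(price=0.05))   # cheap delusion: all 10 units consumed
print(irrationality_consumed(price=0.5))    # costly delusion: only 1 unit consumed
```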

As I said in the last post, one reason why people argue against such a view is that it can seem psychologically implausible. Caplan takes note of the same fact:

Arguably the main reason why economists have not long since adopted an approach like mine is that it seems psychologically implausible. Rational irrationality appears to map an odd route to delusion:

Step 1: Figure out the truth to the best of your ability.

Step 2: Weigh the psychological benefits of rejecting the truth against its material costs.

Step 3: If the psychological benefits outweigh the material costs, purge the truth from your mind and embrace error.

The psychological plausibility of this stilted story is underrated.

Of course, this process is not so conscious and explicit in reality, and this is why the above seems so implausible. Caplan presents the more realistic version:

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.
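The toned-down process lends itself to a mechanical statement. Purely as an illustration (the function, thresholds, and labels are mine, not Caplan's), the five tacit steps might be sketched as:

```python
# A sketch of Caplan's five tacit steps as a decision procedure.
# Rationality stays on "standby," engaging only when error is costly.

def form_belief(emotional_attachment, cost_of_error, comfort_of_belief):
    if not emotional_attachment:
        return "reason dispassionately"           # Step 1: no attachment
    if cost_of_error == 0:                        # Steps 2-3: error is free
        return "believe whatever feels best"
    if cost_of_error > comfort_of_belief:         # Steps 4-5: weigh trauma vs. cost
        return "raise intellectual self-discipline"
    return "believe whatever feels best"

print(form_belief(False, 0, 0))     # e.g. ancient history for most people
print(form_belief(True, 0, 5))      # e.g. politics for the ordinary voter
print(form_belief(True, 100, 5))    # e.g. the drunk surgeon's own operations
```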

Caplan offers various examples of this process happening in practice. I will include here only the last example:

Want to bet? We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.
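The bet's sting is simple arithmetic. Reading the offer as the speaker giving ten-to-one odds (the dollar stakes are my own illustration), the wager has positive expected value only if his real confidence exceeds 10/11, about 91 percent:

```python
# Expected value of giving ten-to-one odds: risk $10 to win $1.

def expected_value(p_right, stake=10.0, prize=1.0):
    return p_right * prize - (1 - p_right) * stake

break_even = 10.0 / 11.0            # about 0.909
print(expected_value(0.95))         # truly "sure": positive expected value
print(expected_value(0.80))         # merely confident: a losing bet
```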

Bryan Caplan’s account is very closely related to what I have argued elsewhere, namely that people are more influenced by non-truth-related motives in areas remote from the senses. Caplan’s account explains that a large part of the reason for this is simply that being mistaken is less harmful in these areas (at least in a material sense), and consequently that people care less about whether their views in these areas are true, and care more about other factors. This also explains why the person who is offered a bet in the example changes his mind: this is not simply explained by whether or not the truth of the matter can be determined by sensible experience, but by whether a mistaken opinion in this particular case is likely to cause harm or not.

Nonetheless, even if you do care about truth because error can harm you, this too is a love of sweetness, not of truth.

Bryan Caplan on Preferences Over Beliefs

Responding to the criticism mentioned in the previous post, Caplan begins by noting that it is quite possible to observe preferences:

I observe one person’s preferences every day—mine. Within its sphere I trust my introspection more than I could ever trust the work of another economist. Introspection tells me that I am getting hungry, and would be happy to pay a dollar for an ice cream bar. If anything qualifies as “raw data,” this does. Indeed, it is harder to doubt than “raw data” that economists routinely accept—like self-reported earnings.

One thing my introspection tells me is that some beliefs are more emotionally appealing than their opposites. For example, I like to believe that I am right. It is worse to admit error, or lose money because of error, but error is disturbing all by itself. Having these feelings does not imply that I indulge them—no more than accepting money from a source with an agenda implies that my writings are insincere. But the temptation is there.

After this discussion of his own experience, he considers the experience of others:

Introspection is a fine way to learn about your own preferences. But what about the preferences of others? Perhaps you are so abnormal that it is utterly misleading to extrapolate from yourself to the rest of humanity. The simplest way to check is to listen to what other people say about their preferences.

I was once at a dinner with Gary Becker where he scoffed at this idea. His position, roughly, was, “You can’t believe what people say,” though he still paid attention when the waiter named the house specialties. Yes, there is a sound core to Becker’s position. People fail to reflect carefully. People deceive. But contrary to Becker, these are not reasons to ignore their words. We should put less weight on testimony when people speak in haste, or have an incentive to lie. But listening remains more informative than plugging your ears. After all, human beings can detect lies as well as tell them. Experimental psychology documents that liars sometimes give themselves away with demeanor or inconsistencies in their stories.

Once we take the testimony of mankind seriously, evidence of preferences over beliefs abounds. People can’t shut up about them. Consider the words of philosopher George Berkeley:

“I can easily overlook any present momentary sorrow when I reflect that it is in my power to be happy a thousand years hence. If it were not for this thought I had rather be an oyster than a man.”

Paul Samuelson himself revels in the Keynesian revelation, approvingly quoting Wordsworth to capture the joy of the General Theory: “Bliss was it in that dawn to be alive, but to be young was very heaven!”

Many autobiographies describe the pain of abandoning the ideas that once gave meaning to the author’s life. As Whittaker Chambers puts it:

“So great an effort, quite apart from its physical and practical hazards, cannot occur without a profound upheaval of the spirit. No man lightly reverses the faith of an adult lifetime, held implacably to the point of criminality. He reverses it only with a violence greater than the faith he is repudiating.”

No wonder that—in his own words—Chambers broke with Communism “slowly, reluctantly, in agony.” For Arthur Koestler, deconversion was “emotional harakiri.” He adds, “Those who have been caught by the great illusion of our time, and have lived through its moral and intellectual debauch, either give themselves up to a new addiction of the opposite type, or are condemned to pay with a lifelong hangover.” Richard Wright laments, “I knew in my heart that I should never be able to feel with that simple sharpness about life, should never again express such passionate hope, should never again make so total a commitment of faith.”

The desire for “hope and illusion” plays a role even in mental illness. According to his biographer, Nobel Prize winner and paranoid schizophrenic John Nash often preferred his fantasy world—where he was a “Messianic godlike figure”—to harsh reality:

“For Nash, the recovery of everyday thought processes produced a sense of diminution and loss…. He refers to his remissions not as joyful returns to a healthy state, but as ‘interludes, as it were, of enforced rationality.'”

One criticism here might go as follows. Yes, Caplan has done a fine job of showing that people find some beliefs attractive and others unattractive, that some beliefs make them happy and some unhappy. But one can argue, like C.S. Lewis, that this does not imply that such preferences are why people hold those beliefs. It is likely enough that they have some real reasons as well, and this means that their preferences are irrelevant.

One basis for this objection is probably the idea that sitting down and explicitly choosing to believe something seems psychologically implausible. But the choice does not have to happen so explicitly, even though this is more possible than people might think. The fact that such preferences can be felt as “temptations,” as Caplan puts it in describing his own experience, indicates that it is entirely possible to give in to the temptation or to resist it, and thus that we can in effect choose our beliefs, even if the choice is not an explicit thought.

We could compare such situations to that of someone addicted to smoking or drinking. Suppose he is trying to quit, but constantly falls back into the behavior. It may be psychologically implausible to assert, “He says he wants to get over it, but he is just faking. He actually prefers to remain addicted.” But this does not change the fact that every time he goes to the store to buy cigarettes, every time he takes one out to light it, every time he steps outside for a smoke, he exercises his power of choice. In the same way, we determine our beliefs by concrete choices, even though in many cases the idea that the person could simply have decided to adopt the opposite belief may be implausible. I have discussed this kind of thing earlier, as for example here. When we are engaged in an argument with someone who seems to be getting the better of it, saying “You’re probably right” is one choice, and saying “You’re just wrong, but you’re clearly incapable of understanding the truth of the matter…” is another. Either way it is certainly a choice, even if it does not feel like one, just as the smoker or the alcoholic may not feel that he has a choice about smoking and drinking.

Caplan has a last consideration:

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

It is unlikely that Bryan Caplan means to say that your friend here is wicked rather than insane. Clearly someone living in the present who believes that he is Napoleon is insane, in the sense that his mind is not working normally. But Caplan’s point is that you cannot simply say, “His mind is not working normally, and therefore he holds an arbitrary belief with no relationship to reality.” Rather, he holds a belief containing something that many people would like to think, namely, “I am a famous and important person,” but which most ordinary people do not in fact think, because it is obviously false (in most cases). So one way in which his mind works differently is that reality has less power to prevent him from holding attractive beliefs than it does for normal people, much like the case of John Nash as described by Caplan. But the fact that some beliefs are attractive is not a way in which he differs. It is a way in which he is like all of us.

The point about trade is that everyone who buys something at a store believes that he is making himself better off by his purchase, and knows that he makes the store better off as well. So someone who says that trade is zero-sum is contradicting this obvious fact; his claim cannot be due to a lack of evidence regarding the mutual utility of trade.

Love of Truth and Love of Self

Love of self is natural and can extend to almost any aspect of ourselves, including our beliefs. In other words, we tend to love our beliefs because they are ours. This is a kind of “sweetness”. As suggested in the linked post, since we believe that our beliefs are true, it is not easy to distinguish between loving our beliefs for the sake of truth, and loving them because they are ours. But these are two different things: the first is the love of truth, and the second is an aspect of love of self.

Just as we love ourselves, we love the wholes of which we are parts: our family, our country, our religious communities, and so on. These loves are better than pure love of self, but they too can represent a kind of sweetness: if we love our beliefs because they are the beliefs of our family, of our friends, of our religious and political communities, or because they are part of our worldview, none of this is the love of truth, whether or not the beliefs are actually true.

This raises two questions: first, how do we know whether we are acting out of the love of truth, or out of some other love? And second, if there is a way to answer the first question, what can we do about it?

These questions are closely related to a frequent theme of this blog, namely voluntary beliefs, and the motives for these beliefs. Bryan Caplan, in his book The Myth of the Rational Voter, discusses these things under the name of “preferences over beliefs”:

The desire for truth can clash with other motives. Material self-interest is the leading suspect. We distrust salesmen because they make more money if they shade the truth. In markets for ideas, similarly, people often accuse their opponents of being “bought,” their judgment corrupted by a flow of income that would dry up if they changed their minds. Dasgupta and Stiglitz deride the free-market critique of antitrust policy as “well-funded” but “not well-founded.” Some accept funding from interested parties, then bluntly speak their minds anyway. The temptation, however, is to balance being right and being rich.

Social pressure for conformity is another force that conflicts with truth-seeking. Espousing unpopular views often transforms you into an unpopular person. Few want to be pariahs, so they self-censor. If pariahs are less likely to be hired, conformity blends into conflict of interest. However, even bereft of financial consequences, who wants to be hated? The temptation is to balance being right and being liked.

But greed and conformism are not the only forces at war with truth. Human beings also have mixed cognitive motives. One of our goals is to reach correct answers in order to take appropriate action, but that is not the only goal of our thought. On many topics, one position is more comforting, flattering, or exciting, raising the danger that our judgment will be corrupted not by money or social approval, but by our own passions.

Even on a desert isle, some beliefs make us feel better about ourselves. Gustave Le Bon refers to “that portion of hope and illusion without which [men] cannot live.” Religion is the most obvious example. Since it is often considered rude to call attention to the fact, let Gaetano Mosca make the point for me:

“The Christian must be enabled to think with complacency that everybody not of the Christian faith will be damned. The Brahman must be given grounds for rejoicing that he alone is descended from the head of Brahma and has the exalted honor of reading the sacred books. The Buddhist must be taught highly to prize the privilege he has of attaining Nirvana soonest. The Mohammedan must recall with satisfaction that he alone is a true believer, and that all others are infidel dogs in this life and tormented dogs in the next. The radical socialist must be convinced that all who do not think as he does are either selfish, money-spoiled bourgeois or ignorant and servile simpletons. These are all examples of arguments that provide for one’s need of esteeming one’s self and one’s own religion or convictions and at the same time for the need of despising and hating others.”

Worldviews are more a mental security blanket than a serious effort to understand the world: “Illusions endure because illusion is a need for almost all men, a need they feel no less strongly than their material needs.” Modern empirical work suggests that Mosca was on to something: The religious consistently enjoy greater life satisfaction. No wonder human beings shield their beliefs from criticism, and cling to them if counterevidence seeps through their defenses.

Most people find the existence of mixed cognitive motives so obvious that “proof” is superfluous. Jost and his coauthors casually remark in the Psychological Bulletin that “Nearly everyone is aware of the possibility that people are capable of believing what they want to believe, at least within certain limits.” But my fellow economists are unlikely to sign off so easily. If one economist tells another, “Your economics is just a religion,” the allegedly religious economist normally takes the distinction between “emotional ideologue” and “dispassionate scholar” for granted, and paints himself as the latter. But when I assert the generic existence of preferences over beliefs, many economists challenge the whole category. How do I know preferences over beliefs exist? Some eminent economists imply that this is impossible to know because preferences are unobservable.

This is very similar to points that I have made from time to time on this blog. Like Caplan, I consider the fact that beliefs have a voluntary character, at least up to a certain point, to be virtually obvious. Likewise, Caplan points out that in the midst of a discussion an economist may take for granted the idea of the “emotional ideologue,” namely someone whose beliefs are motivated by emotions, but frequently he will not concede the point in generic terms. In a similar way, people in general constantly recognize the influence of motives on beliefs in particular cases, especially in regard to other people, but they frequently fight against the concept in general. C.S. Lewis is one example, although he does concede the point to some extent.

In the next post I will look at Caplan’s response to the economists, and at some point after that bring the discussion back to the question about the love of truth.

Sweet Wine

Aristotle says in the Topics,

For the ‘desire of X’ may mean the desire of it as an end (e.g. the desire of health) or as a means to an end (e.g. the desire of being doctored), or as a thing desired accidentally, as, in the case of wine, the sweet-toothed person desires it not because it is wine but because it is sweet. For essentially he desires the sweet, and only accidentally the wine: for if it be dry, he no longer desires it. His desire for it is therefore accidental.

The person who is interested in sweet wine may not be fully aware of this distinction, especially if he believes that all wine is sweet. With this belief, he may well suppose that he desires wine in itself. But he is mistaken about his own desire: his desire is for the sweet, not for wine, except accidentally.

We can make the same distinction between someone who loves truth and someone who loves an opinion for some other reason, that is, someone who loves “sweet” opinions.

As said above, if all wine were sweet, it would be easy to confuse the love of sweetness with the love of wine. A problem very close to this arises with truth and opinion: not all of a person’s beliefs are true, but as long as he believes them, he thinks that they are true. So if someone loves his beliefs, it appears to him that he loves a set of true beliefs, whether or not this is actually the case. Consequently it may appear to him that he loves the truth.

But perhaps he does, and perhaps he doesn’t. He may be mistaken about his own love, just as a person can be mistaken about his desire for wine. And he may be mistaken in this way whether or not his beliefs are actually true. He may in fact love his opinions because they are “sweet”, not because they are true, and this is possible even if the beliefs are in fact true.

Richard Carrier Responds to Pascal’s Wager

Richard Carrier attempts to respond to Pascal’s Wager by suggesting premises which lead to a completely opposite conclusion:

The following argument could be taken as tongue-in-cheek, if it didn’t seem so evidently true. At any rate, to escape the logic of it requires theists to commit to abandoning several of their cherished assumptions about God or Heaven. And no matter what, it presents a successful rebuttal to any form of Pascal’s Wager, by demonstrating that unbelief might still be the safest bet after all (since we do not know whose assumptions are correct, and we therefore cannot exclude the assumptions on which this argument is based).

If his response is taken literally, it is certainly not true in fact, and it is likely that he realizes this, and for this reason says that it could be taken as “tongue-in-cheek.” But since he adds that it seems “so evidently true,” it is not clear that he sees what is wrong with it.

His first point is that God would reward people who are concerned about doing good, and therefore people who are concerned about the truth:

It is a common belief that only the morally good should populate heaven, and this is a reasonable belief, widely defended by theists of many varieties. Suppose there is a god who is watching us and choosing which souls of the deceased to bring to heaven, and this god really does want only the morally good to populate heaven. He will probably select from only those who made a significant and responsible effort to discover the truth. For all others are untrustworthy, being cognitively or morally inferior, or both. They will also be less likely ever to discover and commit to true beliefs about right and wrong. That is, if they have a significant and trustworthy concern for doing right and avoiding wrong, it follows necessarily that they must have a significant and trustworthy concern for knowing right and wrong. Since this knowledge requires knowledge about many fundamental facts of the universe (such as whether there is a god), it follows necessarily that such people must have a significant and trustworthy concern for always seeking out, testing, and confirming that their beliefs about such things are probably correct. Therefore, only such people can be sufficiently moral and trustworthy to deserve a place in heaven–unless god wishes to fill heaven with the morally lazy, irresponsible, or untrustworthy.

But only two groups fit this description: intellectually committed but critical theists, and intellectually committed but critical nontheists (which means both atheists and agnostics, though more specifically secular humanists, in the most basic sense).

His second point is that the world is a test for this:

It is a common belief that certain mysteries, like unexplained evils in the world and god’s silence, are to be explained as a test, and this is a reasonable belief, widely defended by theists of many varieties.

His next argument is that the available evidence tends to show that either God does not exist or that he is evil:

If presented with strong evidence that a god must either be evil or not exist, a genuinely good person will not believe in such a god, or if believing, will not give assent to such a god (as by worship or other assertions of approval, since the good do not approve of evil). Most theists do not deny this, but instead deny that the evidence is strong. But it seems irrefutable that there is strong evidence that a god must either be evil or not exist.

For example, in the bible Abraham discards humanity and morality upon God’s command to kill his son Isaac, and God rewards him for placing loyalty above morality. That is probably evil–a good god would expect Abraham to forego fear and loyalty and place compassion first and refuse to commit an evil act, and would reward him for that, not for compliance. Likewise, God deliberately inflicts unconscionable wrongs upon Job and his family merely to win a debate with Satan. That is probably evil–no good god would do such harm for so petty a reason, much less prefer human suffering to the cajoling of a mere angel. And then God justifies these wrongs to Job by claiming to be able to do whatever he wants, in effect saying that he is beyond morality. That is probably evil–a good god would never claim to be beyond good and evil. And so it goes for all the genocidal slaughter and barbaric laws commanded by God in the bible. Then there are all the natural evils in the world (like diseases and earthquakes) and all the unchecked human evils (i.e. god makes no attempt to catch criminals or stop heinous crimes, etc.). Only an evil god would probably allow such things.

He concludes that only atheists go to heaven:

Of the two groups comprising the only viable candidates for heaven, only nontheists recognize or admit that this evidence strongly implies that God must be evil or not exist. Therefore, only nontheists answer the test as predicted for morally good persons. That is, a morally good person will be intellectually and critically responsible about having true beliefs, and will place this commitment to moral good above all other concerns, especially those that can corrupt or compromise moral goodness, like faith or loyalty. So those who are genuinely worthy of heaven will very probably become nontheists, since their inquiry will be responsible and therefore complete, and will place moral concerns above all others. They will then encounter the undeniable facts of all these unexplained evils (in the bible and in the world) and conclude that God must probably be evil or nonexistent.

In other words, to accept such evils without being given a justification (as is entailed by god’s silence) indicates an insufficient concern for having true beliefs. But to have the courage to maintain unbelief in the face of threats of hell or destruction, as well as numerous forms of social pressure and other hostile factors, is exactly the behavior a god would expect from the genuinely good, rather than capitulation to the will of an evil being, or naive and unjustified trust that an apparently evil being is really good–those are not behaviors of the genuinely good.

It is not entirely clear what Carrier thinks of his own argument. His original statement suggests that he realizes it is somewhat ridiculous, taken as a whole, but not whether he understands why. He concludes:

Since this easily and comprehensively explains all the unexplainable problems of god (like divine hiddenness and apparent evil), while other theologies do not (or at least nowhere so well), it follows that this analysis is probably a better explanation of all the available evidence than any contrary theology. Since this conclusion contradicts the conclusion of every form of Pascal’s Wager, it follows that Pascal’s Wager cannot assure anyone of God’s existence or that belief in God will be the best bet.

This might express his failure to see the largest flaw in his argument. He probably believes that it is actually true that “this analysis is probably a better explanation of all the available evidence than any contrary theology.” But this cannot be true, even assuming that his arguments about good and evil are correct. The fact that very many people accept a Christian theology, and that no one believes Carrier’s suggested theology, is in itself part of the available evidence, and this fact alone outweighs all of his arguments, whether or not they are correct. That is, a Christian theology is more likely to be true as a whole than his proposed theology of “only atheists go to heaven”, regardless of the facts about what good people are likely to do, of the facts about what a good God is likely to do, and so on.

It is a common failure on the part of unbelievers not to notice the evidence that results from the very existence of believers. This is of course an aspect of the common failure of people in general to notice the existence of evidence against their current beliefs. In this sense, Carrier likely does fail to notice this evidence. Consequently he has a vague sense that there is something ridiculous about his argument, but he does not quite know what it is.

Nonetheless, although his argument is mistaken as a whole, there are some aspects of it which could be reasonably used by an unbeliever in responding to Pascal’s wager in a truly reasonable way. Such a response would go something like this, “My current beliefs about God and the world are largely a result of the fact that I am trying to know the truth, and the fact that I am trying to know the truth is a part of the fact that I am trying to be a good person. Choosing to believe would be choosing to abandon significant parts of my effort to be a good person. If there is a good God, I would expect him to take these things into account.”

Erroneous Responses to Pascal

Many arguments which are presented against accepting Pascal’s wager are mistaken, some of them in obvious ways. For example, the argument is made that the multiplicity of religious beliefs or potential religious beliefs invalidates the wager:

But Pascal’s argument is seriously flawed. The religious environment that Pascal lived in was simple. Belief and disbelief only boiled down to two choices: Roman Catholicism and atheism. With a finite choice, his argument would be sound. But on Pascal’s own premise that God is infinitely incomprehensible, then in theory, there would be an infinite number of possible theologies about God, all of which are equally probable.

First, let us look at the more obvious possibilities we know of today – possibilities that were either unknown to, or ignored by, Pascal. In the Calvinistic theological doctrine of predestination, it makes no difference what one chooses to believe since, in the final analysis, who actually gets rewarded is an arbitrary choice of God. Furthermore we know of many more gods of many different religions, all of which have different schemes of rewards and punishments. Given that there are more than 2,500 gods known to man, and given Pascal’s own assumptions that one cannot comprehend God (or gods), then it follows that, even the best case scenario (i.e. that God exists and that one of the known Gods and theologies happen to be the correct one) the chances of making a successful choice is less than one in 2,500.

Second, Pascal’s negative theology does not exclude the possibility that the true God and true theology is not one that is currently known to the world. For instance it is possible to think of a God who rewards, say, only those who purposely step on sidewalk cracks. This sounds absurd, but given the premise that we cannot understand God, this possible theology cannot be dismissed. In such a case, the choice of what God to believe would be irrelevant as one would be rewarded on a premise totally distinct from what one actually believes. Furthermore as many atheist philosophers have pointed out, it is also possible to conceive of a deity who rewards intellectual honesty, a God who rewards atheists with eternal bliss simply because they dared to follow where the evidence leads – that given the available evidence, no God exists! Finally we should also note that given Pascal’s premise, it is possible to conceive of a God who is evil and who punishes the good and rewards the evil.

Thus Pascal’s call for us not to consider the evidence but to simply believe on prudential grounds fails.

The response here attempts to build on Pascal’s mistaken claim that the probability of the existence of God (and of Catholic doctrine as a whole) is 50%, presumably on the grounds that we can know nothing about theological truth. From this the website reasons that all possible theological claims should be equally probable, so that one is in any case very unlikely to find the truth, and therefore very unlikely to attain the eternal reward, given Pascal’s apparent assumption that only believers in a specific theology can attain it.

The problem with this is that it reasons from Pascal’s mistaken assumptions (while also changing them in unjustified ways), when in reality the effectiveness of the wager does not depend precisely on those assumptions. If there is a 10% chance that God exists, and the rest is true as Pascal states it, it would still seem to be a good bet that God exists, in terms of the practical consequences. You will probably be wrong, but the gain if you are right will be so great that it will almost certainly outweigh the probable loss.
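To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers: only the 10% figure comes from the text, while the reward and cost figures are stand-ins of my own. The point is that even at 10%, a sufficiently large reward outweighs the probable loss.

```python
# Hypothetical stand-in numbers; only the 10% figure comes from the text.
p_god = 0.10          # assumed probability that Pascal's claims are true
reward = 1_000_000    # stand-in for a very large (here finite) reward
cost = 10             # stand-in for the worldly cost of believing

ev_believe = p_god * reward - cost  # expected value of taking the wager
ev_disbelieve = 0                   # baseline: no reward, no cost

print(ev_believe)                   # 99990.0
print(ev_believe > ev_disbelieve)   # True: the bet still pays at 10%
```

Nothing in this sketch depends on the 50% assumption; lowering `p_god` further only requires a proportionally larger reward for the same conclusion.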

In reality different theologies are not equally probable, and there will be one which is most probable. Theologies such as the “God who rewards atheism”, which do not have any actual proponents, have very little evidence for them, since they do not even have the evidence resulting from a claim. One cannot expect that two differing positions will randomly have exactly the same amount of evidence for them, so one theology will have more evidence than any other. And even if that theology did not have an overall probability of more than 50%, it could still be a good bet, given the possibility of the reward, and a better one than any of the other potential wagers.

The argument is also made that once one admits an infinite reward, it is not possible to distinguish between actions with differing values. This is described here:

If you regularly brush your teeth, there is some chance you will go to heaven and enjoy infinite bliss. On the other hand, there is some chance you will enjoy infinite heavenly bliss even if you do not brush your teeth. Therefore the expectation of brushing your teeth (infinity plus a little extra due to oral health = infinity) is the same as that of not brushing your teeth (infinity minus a bit due to cavities and gingivitis = infinity), from which it follows that dental hygiene is not a particularly prudent course of action. In fact, as soon as we allow infinite utilities, decision theory tells us that any course of action is as good as any other (Duff 1986). Hence we have a reductio ad absurdum against decision theory, at least when it’s extended to infinite cases.

As actually applied, someone might argue that even if the God who rewards atheism is less probable than the Christian God, the expected utility of being Christian or atheist will be infinite in each case, and therefore one choice will not be more reasonable than the other. Some people actually seem to believe that this is a good response, but it is not. The problem here is that decision theory is a mathematical formalism and does not have to correspond precisely with real life. The mathematics does not work when infinity is introduced, but this does not mean there cannot be such an infinity in reality, nor that the two choices would be equal in reality. It simply means you have not chosen the right mathematics to express the situation. To see this clearly, consider the following case.

You are in a room with two exits, a green door and a red door. The green door has a known probability of 99% of leading to an eternal heaven, and a known probability of 1% of leading to an eternal hell. The red door has a known probability of 99% of leading to an eternal hell, and a known probability of 1% of leading to an eternal heaven.

The point is that if your mathematics says that going out the red door is just as good as going out the green door, your mathematics is wrong. The correct solution is to go out the green door.
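The failure of the formalism, and its easy repair, can be sketched numerically. In the code below, the door probabilities come from the example above, while the finite magnitude `M` is my own stand-in for "a very large payoff": literal infinities make the naive expected values incomparable, but any large finite proxy ranks the doors correctly.

```python
import math

# The door probabilities come from the example above; the finite
# magnitude M is my own stand-in for "a very large payoff".
p_green_heaven, p_red_heaven = 0.99, 0.01

inf = math.inf
ev_green_inf = p_green_heaven * inf + (1 - p_green_heaven) * -inf
ev_red_inf = p_red_heaven * inf + (1 - p_red_heaven) * -inf
print(ev_green_inf, ev_red_inf)  # nan nan: the formalism breaks down

# Replace infinity with any large finite magnitude and the ranking is
# unambiguous, no matter how large M is made.
M = 10**9
ev_green = p_green_heaven * M + (1 - p_green_heaven) * -M
ev_red = p_red_heaven * M + (1 - p_red_heaven) * -M
print(ev_green > ev_red)         # True: the green door is the better bet
```

Since the green door dominates for every finite `M`, however large, choosing it remains correct in the limit; only the naive infinite arithmetic loses track of that.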

I would consider all such arguments, namely that all religious beliefs are equally probable, that being rewarded for atheism is as probable as being rewarded for Christianity, or that all infinite expectations are equal, to be examples of not very serious thinking. These arguments are not only wrong. They are obviously wrong, and obviously motivated by the desire not to believe. Earlier I quoted Thomas Nagel on the fear of religion. After the quoted passage, he continues:

My guess is that this cosmic authority problem is not a rare condition and that it is responsible for much of the scientism and reductionism of our time. One of the tendencies it supports is the ludicrous overuse of evolutionary biology to explain everything about life, including everything about the human mind. Darwin enabled modern secular culture to heave a great collective sigh of relief, by apparently providing a way to eliminate purpose, meaning, and design as fundamental features of the world. Instead they become epiphenomena, generated incidentally by a process that can be entirely explained by the operation of the nonteleological laws of physics on the material of which we and our environments are all composed. There might still be thought to be a religious threat in the existence of the laws of physics themselves, and indeed the existence of anything at all— but it seems to be less alarming to most atheists.

This is a somewhat ridiculous situation.

This fear of religion is very likely the cause of such unreasonable responses. Scott Alexander notes in this comment that such explanations are mistaken:

I find all of the standard tricks used against Pascal’s Wager intellectually unsatisfying because none of them are at the root of my failure to accept it. Yes, it might be a good point that there could be an “atheist God” who punishes anyone who accepts Pascal’s Wager. But even if a super-intelligent source whom I trusted absolutely informed me that there was definitely either the Catholic God or no god at all, I feel like I would still feel like Pascal’s Wager was a bad deal. So it would be dishonest of me to say that the possibility of an atheist god “solves” Pascal’s Wager.

The same thing is true for a lot of the other solutions proposed. Even if this super-intelligent source assured me that yes, if there is a God He will let people into Heaven even if their faith is only based on Pascal’s Wager, that if there is a God He will not punish you for your cynical attraction to incentives, and so on, and re-emphasized that it was DEFINITELY either the Catholic God or nothing, I still wouldn’t happily become a Catholic.

Whatever the solution, I think it’s probably the same for Pascal’s Wager, Pascal’s Mugging, and the Egyptian mummy problem I mentioned last month. Right now, my best guess for that solution is that there are two different answers to two different questions:

Why do we believe Pascal’s Wager is wrong? Scope insensitivity. Eternity in Hell doesn’t sound that much worse, to our brains, than a hundred years in Hell, and we quite rightly wouldn’t accept Pascal’s Wager to avoid a hundred years in Hell. Pascal’s Mugger killing 3^^^3 people doesn’t sound too much worse than him killing 3,333 people, and we quite rightly wouldn’t give him a dollar to get that low a probability of killing 3,333 people.

Why is Pascal’s Wager wrong? From an expected utility point of view, it’s not. In any particular world, not accepting Pascal’s Wager has a 99.999…% chance of leading to a higher payoff. But averaged over very large numbers of possible worlds, accepting Pascal’s Wager or Pascal’s Mugging will have a higher payoff, because of that infinity going into the averages. It’s too bad that doing the rational thing leads to a lower payoff in most cases, but as everyone who’s bought fire insurance and not had their house catch on fire knows, sometimes that happens.

I realize that this position commits me, so far as I am rational, to becoming a theist. But my position that other people are exactly equal in moral value to myself commits me, so far as I am rational, to giving almost all my salary to starving Africans who would get a higher marginal value from it than I do, and I don’t do that either.

While this is a far more reasonable response, there is wishful thinking going on here as well, in the assumption that the probability that a body of religious beliefs is true as a whole is extremely small. Generally speaking this will not be the case, or at any rate the probability will not be as small as he suggests, once the evidence derived from the claim itself is taken into account; just as it is not extremely improbable that a particular book is mostly historical, even though if one considered the statements contained in the book as a random conjunction, one would suppose that conjunction to be very improbable.

The Paradox of the Heap

The paradox of the heap argues in this way:

A large pile of sand is composed of grains of sand. But taking away a grain of sand from a pile of sand cannot make a pile of sand stop being a pile of sand. Therefore if you continually take away grains of sand from the pile until only one grain of sand remains, that grain must still be a pile of sand.

A similar argument can be made with any vague word that can differ by an apparently continuous number of degrees. Thus for example it is applied to whether a man has a beard (he should not be able to change from having a beard to not having a beard by the removal of a single hair), to colors (an imperceptible variation of color should not be able to change a thing from being red to not being red), and so on.

The conclusion, that a single grain of sand is a pile of sand, or that a shaven man has a beard, or that the color blue is red, is obviously false. In order to block the deduction, it seems necessary to say that it fails at a particular point. But this means that at some point, a pile of sand will indeed stop being a pile of sand when you take away a single grain. But this seems absurd.

Suppose you don’t know the meaning of “red,” and someone attempts to explain. They presumably do so by pointing to examples of red things. But this does not provide you with a rigid definition of redness that you could use to determine whether some arbitrary color is an example of red or not. Rather, the probability that you will call something red will vary continuously as the color of things becomes more remote from the examples from which you learned the name, being very high for the canonical examples and becoming very low as you approach other colors such as blue.
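This graded usage can be modeled with a smooth membership curve. The sketch below is purely my own illustration (the logistic shape and its parameters are assumptions, not anything from the text): the probability of calling a color "red" falls off continuously with its distance from the canonical examples, with no sharp boundary anywhere.

```python
import math

# Toy model (my own illustration): "distance" is some abstract measure
# of how far a color is from the canonical examples of red.
def p_call_red(distance, steepness=1.0, halfway=5.0):
    """Logistic fall-off: near 1 for canonical reds, near 0 far from them."""
    return 1.0 / (1.0 + math.exp(steepness * (distance - halfway)))

for d in [0, 3, 5, 7, 10]:
    print(d, round(p_call_red(d), 3))
# The probability declines smoothly; no single imperceptible step
# flips "red" to "not red".
```

On this picture, drawing a boundary would mean replacing the smooth curve with a step function, which is exactly the modification of meaning discussed below.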

This explains why setting a boundary where an imperceptible change of color would change something from being red to being not red seems inappropriate. Red doesn’t have a rigid definition in the first place, and assigning such a boundary would mean assigning such a definition. But this would be modifying the meaning of the word. Consequently, if the meaning is accepted in an unmodified form, the deduction cannot logically be blocked, just as in the previous post, if the meaning of “true” is accepted in an unmodified form, one cannot block the deduction that all statements are both true and false.

Someone might conclude from this that I am accepting the conclusions of the paradoxical arguments, and therefore that I am saying that all statements are both true and false, and that a single grain of sand is a pile, and so on.

I am not. Concluding that this is my position is simply making the exact same mistake that is made in the original paradoxes. And that mistake is to assume a perfection in human language which does not exist. “True,” “pile,” and so on, are words that possess meaning in an imperfect way. Ultimately all human words are imperfect in this way, because all human language is vague. The fact that logic cannot block the paradoxical conclusions without modifying the meanings of our words happens not because those conclusions are true, but because the meanings are imperfect, while logic presupposes a perfection of meaning which is simply not there.

In a number of other places I have talked about how various motivations can lead us astray. But there are some areas where the very desire for truth can lead us away from truth, and the discussion of such logical paradoxes, and of the vagueness of human thought and language, is one of those areas. In particular, the desire for truth can lead us to wish to believe that truth is more attainable than it actually is. In this case it would happen by wishing to believe that human language is more perfect than it is: for example, that “red” really does have a meaning that would cause something, in a definitive way, to stop being red at some point with an imperceptible change; or, in the case of the Liar, that the word “true” really does have something like a level subscript attached to its meaning, or some other definition which can block the paradoxical deductions.

These things are not true. Nor are the paradoxical conclusions.

Quick to Listen to Reality

Nostalgebraist writes about Bayesian updating:

Subjectively, I feel like I’m only capable of a fairly small discrete set of “degrees of belief.”  I think I can distinguish between, say, things I am 90% confident of and things I am only 60% confident of, but I don’t think I can distinguish between being 60% confident in something and 65% confident in it.  Those both just fall under some big mental category called “a bit more likely to be true than false.”  (I’m sure psychologists have studied this, and I don’t know anything about their findings.  This is just what seems likely to me based on introspection.)

I’ve talked before about whether Bayesian updating makes sense as an ideal for how reasoning should work.  Suppose for now that it is a good ideal.  The “perfect” Bayesian reasoner would have a whole continuum of degrees of belief.  They would typically respond to new evidence by changing some of their degrees of beliefs, although for “weak” or “unconvincing” evidence, the change might be very small.  But since they have a whole continuum of degrees, they can make arbitrarily small changes.

Often when the Bayesian ideal is distilled down to principles that mere humans can follow, one of the principles seems to be “when you learn something new, modify your degrees of belief.”  This sounds nice, and accords with common sense ideas about being open-minded and changing your mind when it is warranted.

However, this principle can easily be read as implying: “if you learn something new, don’t not modify your degrees of belief.”  Leaving your degrees of belief the same as they were before is what irrational, closed-minded, closed-eyed people do.  (One sometimes hears Bayesians responding to each other’s arguments by saying things like “I have updated in the direction of [your position],” as though they feel that this demonstrates that they are thinking in a responsible manner.  Wouldn’t want to be caught not updating when you learn something new!)

The problem here is not that hard to see.  If you only have, say, 10 different possible degrees of belief, then your smallest possible updates are (on average) going to be jumps of 10% at once.  If you agree to always update in response to new information, no matter how weak it is, then seeing ten pieces of very weak evidence in favor of P will ramp your confidence in P up to the maximum.

In each case, the perfect Bayesian might update by only a very small amount, say 0.01%.  Clearly, if you have the choice between changing by 0% and changing by 10%, the former is closer to the “perfect” choice of 0.01%.  But if you have trained yourself to feel like changing by 0% (i.e. not updating) is irrational and bad, you will keep making 10% jumps until you and the perfect Bayesian are very far apart.

This means that Bayesians – in the sense of “people who follow the norm I’m talking about” – will tend to over-respond to weak but frequently presented evidence.  This will make them tend to be overconfident of ideas that are favored within the communities they belong to, since they’ll be frequently exposed to arguments for those ideas, although those arguments will be of varying quality.

“Overconfident of ideas that are favored within the communities they belong to” is basically a description of everyone, not simply people who accept the norm he is talking about, so even if this happens, it is not much of an objection in comparison to the situation of people in general.

Nonetheless, Nostalgebraist misunderstands the idea of Bayesian updating as applied in real life. Bayes’ theorem is a theorem of probability theory that describes how a numerical probability is updated upon receiving new evidence, and probability theory in general is a formalization of degrees of belief. Since it is a formalization, it is not expected to be a literal description of real life. People do not typically have an exact numerical probability that they assign to a belief. Nonetheless, there is a reasonable way to respond to evidence, and this basically corresponds to Bayes’ theorem, even though it is not a literal numerical calculation.

Nostalgebraist’s objection is that there are only a limited number of ways that it is possible to feel about a proposition. He is probably right that for an untrained person this number is less than ten. Just as people can acquire perfect pitch by training, someone could likely learn to distinguish many more than ten degrees of certainty. But this is not an adequate response to his argument, because even if someone were calibrated to a precision of 1%, the objection would still be valid. A person assigning a numerical probability could not change it by even 1% every time he heard a new argument, or it would be easy for an opponent to move him to near-certainty of almost anything.
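The objection can be simulated directly. The sketch below uses toy numbers of my own choosing, not anything from Nostalgebraist: it compares an exact Bayesian receiving forty pieces of very weak evidence with an agent whose minimum possible update is 1% and who nonetheless always updates.

```python
# Toy simulation of the objection; the numbers are my own assumptions.
def bayes_update(p, likelihood_ratio):
    """Exact Bayesian update of probability p by a given likelihood ratio."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

p_exact = 0.50    # the "perfect" Bayesian
p_coarse = 0.50   # an agent whose smallest possible update is 1%

for _ in range(40):                          # forty pieces of very weak evidence
    p_exact = bayes_update(p_exact, 1.001)   # tiny correct update each time
    p_coarse = min(p_coarse + 0.01, 1.0)     # forced minimum jump of 1%

print(round(p_exact, 2))   # 0.51: still close to where it started
print(round(p_coarse, 2))  # 0.9: nearly certain, from weak evidence alone
```

The gap is exactly the drift Nostalgebraist describes: the coarse agent is pushed toward certainty by evidence that should barely move it.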

The real answer is that he is looking in the wrong place for a person’s degree of belief. A belief is not how one happens to feel about a statement. A belief is a voluntary act or habit, and adjusting one’s degree of belief would mean adjusting that habit. The feeling he is talking about, on the other hand, is not in general something voluntary, which means that it is literally impossible to follow the norm he is discussing consistently, applied in the way that he suggests. One cannot simply choose to feel more certain about something. It is true that voluntary actions may be able to affect that feeling, in the same way that voluntary actions can affect anger or fear. But we do not directly choose to be angry or afraid, and we do not directly choose to feel certain or uncertain.

What we can affect, however, is the way we think, speak, and act, and we can change our habits by choosing particular acts of thinking, speaking, and acting. And this is where our subjective degree of belief is found, namely in our pattern of behavior. This pattern can vary in an unlimited number of ways and degrees, and thus his objection cannot be applied to updating in this sense. Updating on evidence, then, would be adjusting our pattern of behavior, and not updating would be failing to adjust that pattern. That would begin with the simple recognition that something is new evidence: saying “I have updated in the direction of your position” would simply mean acknowledging that one has been presented with new evidence, with the implicit commitment to allowing that evidence to affect one’s behavior in the future, for example by not simply forgetting the new argument, by having more respect for people who hold that position, and so on in any number of ways.

Of course, it may be that in practice people cannot even do this consistently, or at least not without sometimes adjusting excessively. But this is the same with every human virtue: consistently hitting the precise mean of virtue is impossible. That does not mean that we should adopt the norm of ignoring virtue, which is Nostalgebraist’s implicit suggestion.

This is related to the suggestion of St. James that one should be quick to hear and slow to speak. Being quick to hear implies, among other things, this kind of updating based on the arguments and positions that one hears from others. But the same thing applies to evidence in general, whether it is received from other persons or in other ways. One should be quick to listen to reality.