Rational Irrationality

After giving reasons for thinking that people have preferences over beliefs, Bryan Caplan presents his model of rational irrationality: an account of the factors that determine whether people give in to such preferences or resist them.

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impressions, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

The cost of error varies with the belief and the believer’s situation. For some people, the belief that the American Civil War came before the American Revolution would be a costly mistake. A history student might fail his exam, a history professor ruin his professional reputation, a Civil War reenactor lose his friends’ respect, a public figure face damaging ridicule.

Normally, however, a firewall stands between this mistake and “real life.” Historical errors are rarely an obstacle to wealth, happiness, descendants, or any standard metric of success. The same goes for philosophy, religion, astronomy, geology, and other “impractical” subjects. The point is not that there is no objectively true answer in these fields. The Revolution really did precede the Civil War. But your optimal course of action if the Revolution came first is identical to your optimal course if the Revolution came second.

To take another example: Think about your average day. What would you do differently if you believed that the earth began in 4004 B.C., as Bishop Ussher infamously maintained? You would still get out of bed, drive to work, eat lunch, go home, have dinner, watch TV, and go to sleep. Ussher’s mistake is cheap.

Virtually the only way that mistakes on these questions injure you is via their social consequences. A lone man on a desert island could maintain practically any historical view with perfect safety. When another person washes up, however, there is a small chance that odd historical views will reduce his respect for his fellow islander, impeding cooperation. Notice, however, that the danger is deviance, not error. If everyone else has sensible historical views, and you do not, your status may fall. But the same holds if everyone else has bizarre historical views and they catch you scoffing.

To use economic jargon, the private cost of an action can be negligible, though its social cost is high. Air pollution is the textbook example. When you drive, you make the air you breathe worse. But the effect is barely perceptible. Your willingness to eliminate your own emissions might be a tenth of a cent. That is the private cost of your pollution. But suppose that you had the same impact on the air of 999,999 strangers. Each disvalues your emissions by a tenth of a cent too. The social cost of your activity—the harm to everyone including yourself—is $1,000, a million times the private cost.

Caplan thus makes the general point that mistaken beliefs on many topics cannot hurt us directly, and frequently can hurt us only through their social consequences. He adds that the private cost of an action (or, in this case, a belief) may be very different from its social cost.
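
The arithmetic of the pollution example is worth making explicit. Here is a minimal sketch in Python, using only the figures from the quoted passage; the variable names are my own:

```python
# Caplan's pollution example: the harm per affected person is tiny,
# but summed over a million people it is large.
private_cost = 0.001             # dollars: you disvalue your own emissions at a tenth of a cent
people_affected = 1_000_000      # you plus 999,999 strangers, each harmed by the same amount
social_cost = private_cost * people_affected

print(f"private cost: ${private_cost:.3f}")    # $0.001
print(f"social cost:  ${social_cost:,.0f}")    # $1,000, a million times the private cost
```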

Finally, Caplan presents his economic model of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
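
To make the "preferences and prices" logic concrete, here is a toy sketch of the model. It is my own formalization, not Caplan's, and the functional forms are arbitrary, chosen only to exhibit the comparative statics:

```python
# Toy model: an agent picks a level of irrationality x in [0, 1] to maximize
#   utility(x) = comfort * x - price * x**2
# where "comfort" rewards believing what feels good and "price" is the material
# cost of acting on false beliefs.

def chosen_irrationality(comfort: float, price: float) -> float:
    """Utility-maximizing level of irrationality, capped at 1."""
    if price <= 0:
        return 1.0                              # costless error: indulge fully
    return min(1.0, comfort / (2 * price))      # first-order condition: comfort = 2 * price * x

for price in (0.5, 1.0, 5.0, 50.0):
    print(price, round(chosen_irrationality(comfort=1.0, price=price), 3))
# As the price of casting reason aside rises, the agent consumes less irrationality.
```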

As I said in the last post, one reason why people argue against such a view is that it can seem psychologically implausible. Caplan takes note of the same fact:

Arguably the main reason why economists have not long since adopted an approach like mine is that it seems psychologically implausible. Rational irrationality appears to map an odd route to delusion:

Step 1: Figure out the truth to the best of your ability.

Step 2: Weigh the psychological benefits of rejecting the truth against its material costs.

Step 3: If the psychological benefits outweigh the material costs, purge the truth from your mind and embrace error.

The psychological plausibility of this stilted story is underrated.

Of course, this process is not so conscious and explicit in reality, and this is why the above seems so implausible. Caplan presents the more realistic version:

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.
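
These five steps can be compressed into a small decision sketch. This is my own toy restatement, not Caplan's, and the numeric thresholds are stand-ins for whatever counts as "substantial" in a given case:

```python
# A toy restatement of the five tacit steps. "attachment" marks an emotionally loaded
# topic, "cost_of_error" is the material price of a false belief, and
# "pain_of_objectivity" is the emotional trauma of shattering a comforting illusion.

def tacit_belief_policy(attachment: bool, cost_of_error: float,
                        pain_of_objectivity: float, substantial: float = 100.0) -> str:
    if not attachment:
        return "figure out the truth"                  # Step 1: no stake, just be rational
    if cost_of_error < substantial:
        return "believe whatever feels best"           # Steps 2-3: error is cheap, go with the flow
    if cost_of_error > pain_of_objectivity:
        return "raise intellectual self-discipline"    # Steps 4-5: error costs more than disillusion
    return "keep the comforting illusion"              # Step 5: disillusion costs more than error

print(tacit_belief_policy(attachment=True, cost_of_error=5.0, pain_of_objectivity=50.0))
print(tacit_belief_policy(attachment=True, cost_of_error=10_000.0, pain_of_objectivity=50.0))
```

Rationality stays on "standby" in Caplan's sense: on emotionally loaded topics, the self-discipline branch only fires when the material cost of error becomes large.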

Caplan offers various examples of this process happening in practice. I will include here only the last example:

Want to bet? We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.
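
The arithmetic behind the challenge is easy to spell out. A minimal sketch; the dollar stakes are my own, chosen only to match the "ten-to-one" odds in the example:

```python
# Giving ten-to-one odds means risking $10 to win $1. Accepting is only worthwhile
# if your honest probability of being right exceeds 10/11 (about 0.91), which is why
# "sure" claims tend to soften once money is on the line.

def expected_profit(p_right: float, your_stake: float = 10.0, their_stake: float = 1.0) -> float:
    """Expected profit from taking the bet, given your honest probability of being right."""
    return p_right * their_stake - (1 - p_right) * your_stake

for p in (0.99, 0.95, 0.90, 0.70):
    print(f"p = {p:.2f}: expected profit = {expected_profit(p):+.2f} dollars")
# Positive only when p > 10/11: the wager attaches a concrete price to overconfidence.
```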

Bryan Caplan’s account is very closely related to what I have argued elsewhere, namely that people are more influenced by non-truth-related motives in areas remote from the senses. Caplan’s account explains that a large part of the reason for this is simply that being mistaken is less harmful in these areas (at least in a material sense), and consequently that people care less about whether their views in these areas are true, and care more about other factors. This also explains why the person offered a bet in the example changes his mind: what matters is not simply whether the truth of the matter can be determined by sensible experience, but whether a mistaken opinion in this particular case is likely to cause harm.

Nonetheless, even if you do care about truth because error can harm you, this too is a love of sweetness, not of truth.

Bryan Caplan on Preferences Over Beliefs

Responding to the criticism mentioned in the previous post, Caplan begins by noting that it is quite possible to observe preferences:

I observe one person’s preferences every day—mine. Within its sphere I trust my introspection more than I could ever trust the work of another economist. Introspection tells me that I am getting hungry, and would be happy to pay a dollar for an ice cream bar. If anything qualifies as “raw data,” this does. Indeed, it is harder to doubt than “raw data” that economists routinely accept—like self-reported earnings.

One thing my introspection tells me is that some beliefs are more emotionally appealing than their opposites. For example, I like to believe that I am right. It is worse to admit error, or lose money because of error, but error is disturbing all by itself. Having these feelings does not imply that I indulge them—no more than accepting money from a source with an agenda implies that my writings are insincere. But the temptation is there.

After this discussion of his own experience, he considers the experience of others:

Introspection is a fine way to learn about your own preferences. But what about the preferences of others? Perhaps you are so abnormal that it is utterly misleading to extrapolate from yourself to the rest of humanity. The simplest way to check is to listen to what other people say about their preferences.

I was once at a dinner with Gary Becker where he scoffed at this idea. His position, roughly, was, “You can’t believe what people say,” though he still paid attention when the waiter named the house specialties. Yes, there is a sound core to Becker’s position. People fail to reflect carefully. People deceive. But contrary to Becker, these are not reasons to ignore their words. We should put less weight on testimony when people speak in haste, or have an incentive to lie. But listening remains more informative than plugging your ears. After all, human beings can detect lies as well as tell them. Experimental psychology documents that liars sometimes give themselves away with demeanor or inconsistencies in their stories.

Once we take the testimony of mankind seriously, evidence of preferences over beliefs abounds. People can’t shut up about them. Consider the words of philosopher George Berkeley:

“I can easily overlook any present momentary sorrow when I reflect that it is in my power to be happy a thousand years hence. If it were not for this thought I had rather be an oyster than a man.”

Paul Samuelson himself revels in the Keynesian revelation, approvingly quoting Wordsworth to capture the joy of the General Theory: “Bliss was it in that dawn to be alive, but to be young was very heaven!”

Many autobiographies describe the pain of abandoning the ideas that once gave meaning to the author’s life. As Whittaker Chambers puts it:

“So great an effort, quite apart from its physical and practical hazards, cannot occur without a profound upheaval of the spirit. No man lightly reverses the faith of an adult lifetime, held implacably to the point of criminality. He reverses it only with a violence greater than the faith he is repudiating.”

No wonder that—in his own words—Chambers broke with Communism “slowly, reluctantly, in agony.” For Arthur Koestler, deconversion was “emotional harakiri.” He adds, “Those who have been caught by the great illusion of our time, and have lived through its moral and intellectual debauch, either give themselves up to a new addiction of the opposite type, or are condemned to pay with a lifelong hangover.” Richard Wright laments, “I knew in my heart that I should never be able to feel with that simple sharpness about life, should never again express such passionate hope, should never again make so total a commitment of faith.”

The desire for “hope and illusion” plays a role even in mental illness. According to his biographer, Nobel Prize winner and paranoid schizophrenic John Nash often preferred his fantasy world—where he was a “Messianic godlike figure”—to harsh reality:

“For Nash, the recovery of everyday thought processes produced a sense of diminution and loss…. He refers to his remissions not as joyful returns to a healthy state, but as ‘interludes, as it were, of enforced rationality.'”

One criticism here might go as follows. Yes, Caplan has done a fine job of showing that people find some beliefs attractive and others unattractive, that some beliefs make them happy and some unhappy. But one can argue, like C.S. Lewis, that this does not imply that the attraction is why they hold those beliefs. It is likely enough that they have some real reasons as well, and this means that their preferences are irrelevant.

One basis for this objection is probably the idea that sitting down and deliberately choosing to believe something seems psychologically implausible. But the choice does not have to happen so explicitly, even if explicit choice is more possible than people tend to think. The fact that such preferences can be felt as “temptations,” as Caplan puts it in describing his own experience, indicates that it is entirely possible to give in to the temptation or to resist it, and thus that we choose our beliefs in effect, even when this is not an explicit thought.

We could compare such situations to that of someone addicted to smoking or drinking. Suppose he is trying to get over the addiction but constantly falls back into the behavior. It may be psychologically implausible to assert, “He says he wants to get over it, but he is just faking. He actually prefers to remain addicted.” But this does not change the fact that every time he goes to the store to buy cigarettes, every time he takes one out to light it, every time he steps outside for a smoke, he exercises his power of choice. In the same way, we determine our beliefs by concrete choices, even though in many cases the idea that the person could have simply decided to choose the opposite belief may be implausible. I have discussed this kind of thing earlier, as for example here. When we are engaged in an argument with someone and they seem to be getting the better of it, it is one choice to say, “You’re probably right,” and another to say, “You’re just wrong, but you’re clearly incapable of understanding the truth of the matter…” In either case it is certainly a choice, even if it does not feel like one, just as the smoker or the alcoholic may not feel that he has a choice about smoking and drinking.

Caplan has a last consideration:

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

It is unlikely that Bryan Caplan means to say that your friend here is wicked rather than insane. Clearly someone living in the present who believes that he is Napoleon is insane, in the sense that his mind is not working normally. But Caplan’s point is that you cannot simply say, “His mind is not working normally, and therefore he holds an arbitrary belief with no relationship to reality.” Rather, he holds a belief which contains something that many people would like to think, namely, “I am a famous and important person,” but which most ordinary people do not in fact think, because it is obviously false (in most cases). So one way in which this person’s mind works differently is that reality has less power to prevent him from holding attractive beliefs than it does for normal people, much like the case of John Nash as described by Caplan. But the fact that some beliefs are attractive is not a way in which he differs. It is a way in which he is like all of us.

The point about trade is that everyone who buys something at a store believes that he is making himself better off by his purchase, and knows that he makes the store better off as well. So someone who says that trade is zero-sum is contradicting this obvious fact; his claim cannot be due to a lack of evidence regarding the mutual utility of trade.

Love of Truth and Love of Self

Love of self is natural and can extend to almost any aspect of ourselves, including our beliefs. In other words, we tend to love our beliefs because they are ours. This is a kind of “sweetness”. As suggested in the linked post, since we believe that our beliefs are true, it is not easy to distinguish between loving our beliefs for the sake of truth, and loving them because they are ours. But these are two different things: the first is the love of truth, and the second is an aspect of love of self.

Just as we love ourselves, we love the wholes of which we are parts: our family, our country, our religious communities, and so on. These loves are better than pure love of self, but they too can represent a kind of sweetness: if we love our beliefs because they are the beliefs of our family, of our friends, of our religious and political communities, or because they are part of our worldview, none of these things is the love of truth, whether or not the beliefs are actually true.

This raises two questions: first, how do we know whether we are acting out of the love of truth, or out of some other love? And second, if there is a way to answer the first question, what can we do about it?

These questions are closely related to a frequent theme of this blog, namely voluntary beliefs, and the motives for these beliefs. Bryan Caplan, in his book The Myth of the Rational Voter, discusses these things under the name of “preferences over beliefs”:

The desire for truth can clash with other motives. Material self-interest is the leading suspect. We distrust salesmen because they make more money if they shade the truth. In markets for ideas, similarly, people often accuse their opponents of being “bought,” their judgment corrupted by a flow of income that would dry up if they changed their minds. Dasgupta and Stiglitz deride the free-market critique of antitrust policy as “well-funded” but “not well-founded.” Some accept funding from interested parties, then bluntly speak their minds anyway. The temptation, however, is to balance being right and being rich.

Social pressure for conformity is another force that conflicts with truth-seeking. Espousing unpopular views often transforms you into an unpopular person. Few want to be pariahs, so they self-censor. If pariahs are less likely to be hired, conformity blends into conflict of interest. However, even bereft of financial consequences, who wants to be hated? The temptation is to balance being right and being liked.

But greed and conformism are not the only forces at war with truth. Human beings also have mixed cognitive motives. One of our goals is to reach correct answers in order to take appropriate action, but that is not the only goal of our thought. On many topics, one position is more comforting, flattering, or exciting, raising the danger that our judgment will be corrupted not by money or social approval, but by our own passions.

Even on a desert isle, some beliefs make us feel better about ourselves. Gustave Le Bon refers to “that portion of hope and illusion without which [men] cannot live.” Religion is the most obvious example. Since it is often considered rude to call attention to the fact, let Gaetano Mosca make the point for me:

“The Christian must be enabled to think with complacency that everybody not of the Christian faith will be damned. The Brahman must be given grounds for rejoicing that he alone is descended from the head of Brahma and has the exalted honor of reading the sacred books. The Buddhist must be taught highly to prize the privilege he has of attaining Nirvana soonest. The Mohammedan must recall with satisfaction that he alone is a true believer, and that all others are infidel dogs in this life and tormented dogs in the next. The radical socialist must be convinced that all who do not think as he does are either selfish, money-spoiled bourgeois or ignorant and servile simpletons. These are all examples of arguments that provide for one’s need of esteeming one’s self and one’s own religion or convictions and at the same time for the need of despising and hating others.”

Worldviews are more a mental security blanket than a serious effort to understand the world: “Illusions endure because illusion is a need for almost all men, a need they feel no less strongly than their material needs.” Modern empirical work suggests that Mosca was on to something: The religious consistently enjoy greater life satisfaction. No wonder human beings shield their beliefs from criticism, and cling to them if counterevidence seeps through their defenses.

Most people find the existence of mixed cognitive motives so obvious that “proof” is superfluous. Jost and his coauthors casually remark in the Psychological Bulletin that “Nearly everyone is aware of the possibility that people are capable of believing what they want to believe, at least within certain limits.” But my fellow economists are unlikely to sign off so easily. If one economist tells another, “Your economics is just a religion,” the allegedly religious economist normally takes the distinction between “emotional ideologue” and “dispassionate scholar” for granted, and paints himself as the latter. But when I assert the generic existence of preferences over beliefs, many economists challenge the whole category. How do I know preferences over beliefs exist? Some eminent economists imply that this is impossible to know because preferences are unobservable.

This is very similar to points that I have made from time to time on this blog. Like Caplan, I consider the fact that beliefs have a voluntary character, at least up to a certain point, to be virtually obvious. Likewise, Caplan points out that in the midst of a discussion an economist may take for granted the idea of the “emotional ideologue,” namely someone whose beliefs are motivated by emotions, but frequently he will not concede the point in generic terms. In a similar way, people in general constantly recognize the influence of motives on beliefs in particular cases, especially in regard to other people, but they frequently fight against the concept in general. C.S. Lewis is one example, although he does concede the point to some extent.

In the next post I will look at Caplan’s response to the economists, and at some point after that bring the discussion back to the question about the love of truth.

Sweet Wine

Aristotle says in the Topics,

For the ‘desire of X’ may mean the desire of it as an end (e.g. the desire of health) or as a means to an end (e.g. the desire of being doctored), or as a thing desired accidentally, as, in the case of wine, the sweet-toothed person desires it not because it is wine but because it is sweet. For essentially he desires the sweet, and only accidentally the wine: for if it be dry, he no longer desires it. His desire for it is therefore accidental.

The person who is interested in sweet wine may not be fully aware of this distinction, especially if he believes that all wine is sweet. With this belief, he may well suppose that he desires wine in itself. But he is mistaken about his own desire: his desire is for the sweet, not for wine, except accidentally.

We can make the same distinction between someone who loves truth and someone who loves an opinion for some other reason, that is, someone who loves “sweet” opinions.

As said above, if all wine were sweet, it would be easy to confuse the love of sweetness with the love of wine. A problem very close to this arises with truth and opinion: not all of a person’s beliefs are true, but as long as he believes them, he thinks that they are true. So if someone loves his beliefs, it appears to him that he loves a set of true beliefs, whether or not this is actually the case. Consequently it may appear to him that he loves the truth.

But perhaps he does, and perhaps he doesn’t. He may be mistaken about his own love, just as a person can be mistaken about his desire for wine. And he may be mistaken in this way whether or not his beliefs are actually true. He may in fact love his opinions because they are “sweet”, not because they are true, and this is possible even if the beliefs are in fact true.

New Year’s Resolutions

Several arguments can be made against making such resolutions. In the first place, they almost always fail. Second, the division of time by years is somewhat arbitrary anyway: the fact that it is 2016 now rather than 2015 is determined by convention, not by any particular objective distinction between the two years. If we wished, we could choose some other day as the start of the year.

One can respond to the first argument in two ways: first, by saying that even a low success rate can make such resolutions worthwhile; and second, by saying that there is no need for such a black-and-white division between success and failure. If you make a resolution and keep it for five days, that is five days of success, even if you fail to keep it for the remaining 360 days of the year.

In any case, a good reason for such resolutions is that human beings are not consistently guided by reason, but often by emotion and habit. Consequently we have certain goals that we suppose we are seeking according to reason, but our concrete actions frequently fail to be proportioned to those goals. A resolution can be seen as a renewal of someone’s intention to act according to reason. According to the second objection above, there is no objective necessity for this to happen on January 1st rather than on some other day. And in fact, there are good reasons to renew your intentions on a much more regular basis, such as every month, every week, and every day. If we accepted the second objection and decided that there was no reason to renew our intentions on any regular basis at all, it would be quite likely that we would fall into a kind of laziness and never renew them, not even on an irregular basis. So there are good reasons to choose the divisions of time which are determined by convention, even if in principle we could choose other times.