Rational Irrationality

After giving reasons for thinking that people have preferences over beliefs, Bryan Caplan presents his model of rational irrationality, which describes the factors that determine whether people give in to such preferences or resist them.

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impressions, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

The cost of error varies with the belief and the believer’s situation. For some people, the belief that the American Civil War came before the American Revolution would be a costly mistake. A history student might fail his exam, a history professor ruin his professional reputation, a Civil War reenactor lose his friends’ respect, a public figure face damaging ridicule.

Normally, however, a firewall stands between this mistake and “real life.” Historical errors are rarely an obstacle to wealth, happiness, descendants, or any standard metric of success. The same goes for philosophy, religion, astronomy, geology, and other “impractical” subjects. The point is not that there is no objectively true answer in these fields. The Revolution really did precede the Civil War. But your optimal course of action if the Revolution came first is identical to your optimal course if the Revolution came second.

To take another example: Think about your average day. What would you do differently if you believed that the earth began in 4004 B.C., as Bishop Ussher infamously maintained? You would still get out of bed, drive to work, eat lunch, go home, have dinner, watch TV, and go to sleep. Ussher’s mistake is cheap.

Virtually the only way that mistakes on these questions injure you is via their social consequences. A lone man on a desert island could maintain practically any historical view with perfect safety. When another person washes up, however, there is a small chance that odd historical views will reduce his respect for his fellow islander, impeding cooperation. Notice, however, that the danger is deviance, not error. If everyone else has sensible historical views, and you do not, your status may fall. But the same holds if everyone else has bizarre historical views and they catch you scoffing.

To use economic jargon, the private cost of an action can be negligible, though its social cost is high. Air pollution is the textbook example. When you drive, you make the air you breathe worse. But the effect is barely perceptible. Your willingness to pay to eliminate your own emissions might be a tenth of a cent. That is the private cost of your pollution. But suppose that you had the same impact on the air of 999,999 strangers. Each disvalues your emissions by a tenth of a cent too. The social cost of your activity—the harm to everyone including yourself—is $1,000, a million times the private cost.
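To make the arithmetic of the example explicit, here is a minimal sketch using Caplan's own illustrative figures (a tenth of a cent per person, a million people affected); nothing in it goes beyond the quoted passage:

```python
# Caplan's pollution example: private versus social cost of your emissions.
private_cost = 0.001          # your own willingness to pay to avoid your emissions, in dollars
people_affected = 1_000_000   # you plus 999,999 strangers, each harmed by the same amount
social_cost = private_cost * people_affected

print(f"private cost: ${private_cost:.3f}")
print(f"social cost:  ${social_cost:,.2f}")  # $1,000.00, a million times the private cost
```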

Caplan thus makes the general points that our beliefs on many topics cannot hurt us directly, and that they frequently can hurt us only by means of their social consequences. He adds the final point that the private cost of an action—or in this case a belief—may be very different from the total cost.

Finally, Caplan presents his economic model of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
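The resulting "demand curve for irrationality" can be sketched in a few lines of code. The linear functional form and all the numbers below are my own invention, purely for illustration; the only feature that matters is the one Caplan names, that consumption of comforting error falls as its material price rises:

```python
# A toy "demand curve for irrationality": the quantity of comforting error an
# agent consumes falls as the material price of being wrong rises.
# The linear form and all numbers are invented for illustration.

def irrationality_demanded(price_of_error, max_comfort=10.0, sensitivity=2.0):
    """Units of irrational belief 'consumed' at a given material price of error."""
    return max(0.0, max_comfort - sensitivity * price_of_error)

for price in [0.0, 1.0, 3.0, 5.0]:
    quantity = irrationality_demanded(price)
    print(f"price of error = {price:4.1f}  ->  irrationality consumed = {quantity:4.1f}")
```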

As I said in the last post, one reason why people argue against such a view is that it can seem psychologically implausible. Caplan takes note of the same fact:

Arguably the main reason why economists have not long since adopted an approach like mine is that it seems psychologically implausible. Rational irrationality appears to map an odd route to delusion:

Step 1: Figure out the truth to the best of your ability.

Step 2: Weigh the psychological benefits of rejecting the truth against its material costs.

Step 3: If the psychological benefits outweigh the material costs, purge the truth from your mind and embrace error.

The psychological plausibility of this stilted story is underrated.

Of course, the process is not nearly so conscious and explicit in reality, which is why the description above seems so implausible. Caplan presents the more realistic version:

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.
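Since the five steps read almost like a decision procedure, a rough paraphrase of them in code may help. The inputs and the cost threshold here are invented for illustration, and step 5's balancing act is collapsed into a simple comparison; in Caplan's account the whole process is tacit rather than consciously executed:

```python
# A paraphrase of Caplan's five steps as an explicit decision procedure.
# The inputs and the cost threshold are invented for illustration; in Caplan's
# account the process is tacit, with rationality kept on "standby".

def form_belief(emotionally_attached, material_cost_of_error,
                comforting_belief, dispassionate_belief, cost_threshold=1.0):
    # Step 1: no emotional stake in the answer -> just be rational.
    if not emotionally_attached:
        return dispassionate_belief
    # Steps 2-3: emotional stake, but error is materially cheap -> go with the flow.
    if material_cost_of_error < cost_threshold:
        return comforting_belief
    # Steps 4-5: error is expensive -> accept the emotional cost of objectivity.
    return dispassionate_belief

# Caplan's drunken surgeon, for whom error is anything but cheap:
print(form_belief(True, material_cost_of_error=1_000_000.0,
                  comforting_belief="I operate just fine after a few drinks",
                  dispassionate_belief="I must operate sober"))
```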

Caplan offers various examples of this process happening in practice. I will include here only the last example:

Want to bet? We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.
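Why does a ten-to-one bet concentrate the mind? A quick expected-value check (with an arbitrary one-dollar stake of my own choosing; only the ratio matters) shows how steep the material price of error suddenly becomes:

```python
# Expected value of giving ten-to-one odds on your own claim.
# The one-dollar stake is arbitrary; only the ratio matters.

def expected_value(confidence, stake=1.0, odds=10):
    """EV for someone giving odds-to-one: win `stake` if right, lose `odds * stake` if wrong."""
    return confidence * stake - (1 - confidence) * odds * stake

for confidence in [0.95, 0.91, 0.70]:
    print(f"confidence {confidence:.2f}: expected value = {expected_value(confidence):+.2f} dollars")

# Break-even confidence is 10/11 (about 0.91): the bet only pays if you are
# really that sure, which is why the offer tends to moderate people's views.
```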

Bryan Caplan’s account is very closely related to what I have argued elsewhere, namely that people are more influenced by non-truth-related motives in areas remote from the senses. Caplan’s account explains that a large part of the reason for this is simply that being mistaken is less harmful in these areas (at least in a material sense), and consequently that people care less about whether their views in these areas are true and care more about other factors. This also explains why the person offered a bet in the example changes his mind: what matters is not simply whether the truth of the matter can be determined by sensible experience, but whether a mistaken opinion in this particular case is likely to cause harm.

Nonetheless, even if you do care about truth because error can harm you, this too is a love of sweetness, not of truth.
