Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that they are inevitable, and thus without realizing to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
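To state the underlying point as a formula (this is my own gloss on the conjunction fallacy, not something Reitan discusses): for any claims A and B, the probability of their conjunction can never exceed the probability of either claim taken alone:

P(A ∧ B) = P(A) · P(B | A) ≤ P(A)

Taking A as “my life has a positive meaning” and B as any particular explanation of why this is so, such as “a transcendent good is at work redeeming evil,” the bare claim A is always at least as probable as A together with B.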

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are distinct goods, and one who desires both will likely engage in some trade. Indeed, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
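The selection effect Alexander describes can be made vivid with a toy calculation (the numbers here are mine and purely illustrative). Suppose a pool contains 1,000 true stories and only 10 fabricated ones, and suppose a true story happens to fit the narrative perfectly only 1% of the time, while a fabricated story fits it perfectly by design:

expected perfectly-fitting true stories = 1,000 × 0.01 = 10
expected perfectly-fitting lies = 10 × 1 = 10

Although the lies make up about 1% of the pool, they make up about half of the stories that fit the narrative perfectly, and it is from that perfectly-fitting subset that the most shareable examples are drawn.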

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet service. In the same way, if you hold Yudkowsky’s opinion and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.

The More Known and the Conjunction Fallacy

St. Thomas explains in what sense we know the universal before the particular, and in what sense the particular before the universal:

In our knowledge there are two things to be considered.

First, that intellectual knowledge in some degree arises from sensible knowledge: and, because sense has singular and individual things for its object, and intellect has the universal for its object, it follows that our knowledge of the former comes before our knowledge of the latter.

Secondly, we must consider that our intellect proceeds from a state of potentiality to a state of actuality; and every power thus proceeding from potentiality to actuality comes first to an incomplete act, which is the medium between potentiality and actuality, before accomplishing the perfect act. The perfect act of the intellect is complete knowledge, when the object is distinctly and determinately known; whereas the incomplete act is imperfect knowledge, when the object is known indistinctly, and as it were confusedly. A thing thus imperfectly known, is known partly in act and partly in potentiality, and hence the Philosopher says (Phys. i, 1), that “what is manifest and certain is known to us at first confusedly; afterwards we know it by distinguishing its principles and elements.” Now it is evident that to know an object that comprises many things, without proper knowledge of each thing contained in it, is to know that thing confusedly. In this way we can have knowledge not only of the universal whole, which contains parts potentially, but also of the integral whole; for each whole can be known confusedly, without its parts being known. But to know distinctly what is contained in the universal whole is to know the less common, as to know “animal” indistinctly is to know it as “animal”; whereas to know “animal” distinctly is to know it as “rational” or “irrational animal,” that is, to know a man or a lion: therefore our intellect knows “animal” before it knows man; and the same reason holds in comparing any more universal idea with the less universal.

Moreover, as sense, like the intellect, proceeds from potentiality to act, the same order of knowledge appears in the senses. For by sense we judge of the more common before the less common, in reference both to place and time; in reference to place, when a thing is seen afar off it is seen to be a body before it is seen to be an animal; and to be an animal before it is seen to be a man, and to be a man before it is seen to be Socrates or Plato; and the same is true as regards time, for a child can distinguish man from not man before he distinguishes this man from that, and therefore “children at first call men fathers, and later on distinguish each one from the others” (Phys. i, 1). The reason of this is clear: because he who knows a thing indistinctly is in a state of potentiality as regards its principle of distinction; as he who knows “genus” is in a state of potentiality as regards “difference.” Thus it is evident that indistinct knowledge is midway between potentiality and act.

We must therefore conclude that knowledge of the singular and individual is prior, as regards us, to the knowledge of the universal; as sensible knowledge is prior to intellectual knowledge. But in both sense and intellect the knowledge of the more common precedes the knowledge of the less common.

The universal is known from the particular in the sense that we learn the nature of the universal from the experience of particulars. But both in regard to the universal and in regard to the particular, our knowledge is first vague and confused, and becomes more distinct as it is perfected. In St. Thomas’s example, one can see that something is a body before noticing that it is an animal, and an animal before noticing that it is a man. The thing that might be confusing here is that the more certain knowledge is also the less perfect knowledge: looking at the thing in the distance, it is more certain that it is some kind of body, but it is more perfect to know that it is a man.

Insofar as probability theory is a formalization of degrees of belief, the same thing is found, and the same confusion can occur. Objectively, the more general claim is always at least as probable as the more specific one; but the more specific claim, representing what would be more perfect knowledge, can seem more explanatory, and therefore can appear more likely. This false appearance is known as the conjunction fallacy. Thus, for example, as I continue to add to a blog post, the post might become more convincing. But in fact the chance that I am making a serious error in the post can only increase, not decrease, with every additional sentence.
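To put a rough number on that last claim (the error rate is an assumption chosen only for illustration): if each added sentence independently carries even a small chance ε of introducing a serious error, then the probability that a post of n sentences remains free of serious error is

P(no serious error) = (1 − ε)^n

which can only shrink as n grows. With ε = 0.01, for instance, a fifty-sentence post is error-free with probability about 0.99^50 ≈ 0.60. The independence assumption is of course a simplification, but dropping it does not change the direction: each added claim is one more opportunity to be wrong.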