Kavka’s Toxin

Gregory Kavka discusses a thought experiment:

You are feeling extremely lucky. You have just been approached by an eccentric billionaire who has offered you the following deal. He places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. (Your spouse, a crack biochemist, confirms the properties of the toxin.) The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. (This is confirmed by your daughter, a lawyer, after she examines the legal and financial documents that the billionaire has signed.) All you have to do is sign the agreement and then intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin. (The presence or absence of the intention is to be determined by the latest ‘mind-reading’ brain scanner and computing device designed by the great Doctor X. As a cognitive scientist, materialist, and former student of Doctor X, you have no doubt that the machine will correctly detect the presence or absence of the relevant intention.)

Confronted with this offer, you gleefully sign the contract, thinking ‘what an easy way to become a millionaire’. Not long afterwards, however, you begin to worry. You had been thinking that you could avoid drinking the toxin and just pocket the million. But you realize that if you are thinking in those terms when midnight rolls around, you will not be intending to drink the toxin tomorrow. So maybe you will actually have to drink the stuff to collect the money. It will not be pleasant, but it is sure worth a day of suffering to become a millionaire.

However, as occurs to you immediately, it cannot really be necessary to drink the toxin to pocket the money. That money will either be or not be in your bank account by 10 a.m. tomorrow, you will know then whether it is there or not, and your drinking or not drinking the toxin hours later cannot affect the completed financial transaction. So instead of planning to drink the toxin, you decide to intend today to drink it and then change your mind after midnight. But if that is your plan, then it is obvious that you do not intend to drink the toxin. (At most you intend to intend to drink it.) For having such an intention is incompatible with planning to change your mind tomorrow morning.

The discussion goes on from here for some time, but the resolution of the puzzle is easier than Kavka realizes. There is only a problem because it is implicitly assumed that the belief that you will or will not drink the toxin is something different from the intention to drink it. But in the case of voluntary actions, these are one and the same. The reason you cannot intend to drink the toxin without thinking that you will end up drinking it is simply that the intention to drink the toxin is the belief that you will end up drinking it. If the brain scanner works correctly, it registers that you intend to drink the toxin if you in fact think you will end up drinking it, and it registers that you do not intend this if you in fact think you will not drink it.

Is there a problem on the practical level? That is, is it possible for someone to get the million, or is it impossible, because everyone in such a situation would expect to reconsider tomorrow morning, and therefore would not believe that they will end up drinking it?

Possibly, for some people. In some situations, beliefs about what you will in fact do, apparently based simply on the facts, can entirely prevent certain decisions and intentions. Thus if someone has tried dozens of times in the past to give up smoking, and has consistently failed, it will become more and more difficult to intend to give up smoking, and may very well become impossible.

However, Kavka gives a theoretical argument that this should be impossible in the case of his thought experiment:

Thus, we can explain your difficulty in earning a fortune: you cannot intend to act as you have no reason to act, at least when you have substantial reason not to act. And you have (or will have when the time comes) no reason to drink the toxin, and a very good reason not to, for it will make you quite sick for a day.

Again, it may well be that this reasoning would cause an individual to fail to obtain the million. But it is not necessary for this to happen. For the person does have a reason to intend to drink the toxin in the first place: namely, in order to obtain the million. And tomorrow morning their decision, i.e. their belief that they will drink the toxin, will be an efficient cause of them actually drinking the toxin, unless they reconsider. Thus if a person expects to reconsider, they may well fail to obtain the million. But someone wanting to obtain the million will also therefore plan not to reconsider. And tomorrow morning their belief that they will not reconsider will be an efficient cause of them not reconsidering, unless they reconsider their plan not to reconsider. And so on.

Thus, someone can only obtain the million if they plan to drink the toxin, plan not to reconsider this plan, and so on. And someone with this plan can obtain the million. Maybe they will end up drinking the toxin and maybe they won’t; but the evening before, they believe that they will in fact drink it. Without that belief, they fail to obtain the million. And they may well in fact drink it, simply by carrying out the original plan: going about their day without thinking about it, and simply drinking that afternoon, without any additional consideration of reasons to drink or not to drink.
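
The structure of this regress can be put in a small sketch. The finite tower of plans, and the representation itself, are simplifications of my own; the point is only that the scanner has a single thing to check, the belief that one will drink, and that this belief survives just in case no level of the plan includes a reconsideration:

```python
def believes_he_will_drink(plans_to_reconsider: list[bool]) -> bool:
    """plans_to_reconsider[k] is True if, at level k, the agent plans to
    reconsider the plan one level down (level 0 is the plan to drink).

    On the account above, the intention to drink just is the belief that
    one will drink, and that belief is stable only if no level of the
    plan includes a reconsideration."""
    return not any(plans_to_reconsider)

# The scanner at midnight simply reads off this belief:
print(believes_he_will_drink([False, False, False]))  # True: wins the million
print(believes_he_will_drink([True, False, False]))   # plans to back out: no million
print(believes_he_will_drink([False, True, False]))   # plans to reconsider the plan: no million
```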

There is also a way to obtain the million and avoid drinking, but it cannot be done on purpose: it can only happen by luck. You plan on every level not to reconsider, and expect not to reconsider, but you luckily turn out to be mistaken: you do reconsider, despite having expected not to. In this case you both obtain the million and avoid the drink.

 

Blaming the Prophet

Consider the fifth argument in the last post. Should we blame a person for holding a true belief? At this point it should not be too difficult to see that the truth of the belief is not the point. Elsewhere we have discussed a situation in which one cannot possibly hold a true belief, because whatever belief one holds on the matter will cause itself to be false. In a similar way, although with a different sort of causality, the problem with the person’s belief that he will kill someone tomorrow is not that it is true, but that it causes itself to be true. If the person did not expect to kill someone tomorrow, he would not take a knife with him to the meeting, and so on, and thus would not kill anyone. So just as in the other situation it was not a question of holding a true belief or a false belief, but of which false belief one would hold, here it is not a question of holding a true belief or a false belief, but of which true belief one will hold: one that includes someone getting killed, or one that excludes it. Truth will be there either way, and is not the reason for praise or blame: the person is blamed for the desire to kill someone, and praised (or at least not blamed) for wishing to avoid this. This simply shows the need for the qualifications added in the previous post: if the person’s belief is voluntary, and held for the sake of coming true, it is very evident why blame is appropriate.

We have not specifically addressed the fourth argument, but this is perhaps unnecessary given the above response to the fifth. This blog in general has advocated the idea of voluntary beliefs, and in principle these can be praised or blamed. To the degree that we are less willing to do so, however, this may be a question of emphasis. When we talk about a belief, we are more concerned about whether it is true or not, and evidence in favor of it or against it. Praise or blame will mainly come in insofar as other motives are involved, insofar as they strengthen or weaken a person’s wish to hold the belief, or insofar as they potentially distort the person’s evaluation of the evidence.

Nonetheless, the factual question “is this true?” is a different question from the moral question, “should I believe this?” We can see the struggle between these questions, for example, in a difficulty that people sometimes have with willpower. Suppose that a smoker decides to give up smoking, and suppose that they believe they will not smoke for the next six months. Three days later, let us suppose, they smoke a cigarette after all. At that point, the person’s resolution is likely to collapse entirely, so that they return to smoking regularly. One might ask why this happens. Since the person did not smoke for three days, it should be perfectly possible, at least, for them to smoke only once every three days, instead of going back to their former practice. The problem is that the person has received evidence directly indicating the falsity of “I will not smoke for the next six months.” They still might have some desire for that result, but they do not believe that their belief has the power to bring this about, and in fact it does not. The belief would not be self-fulfilling, and in fact it would be false, so they cease to hold it. It is as if someone attempts to open a door and finds it locked; once they know it is locked, they can no longer choose to open the door, because they cannot choose something that does not appear to be within their power.

Mark Forster, in Chapter 1 of his book Do It Tomorrow, previously discussed here, talks about similar issues:

However, life is never as simple as that. What we decide to do and what we actually do are two different things. If you think of the decisions you have made over the past year, how many of them have been satisfactorily carried to a conclusion or are progressing properly to that end? If you are like most people, you will have acted on some of your decisions, I’m sure. But I’m also sure that a large proportion will have fallen by the wayside.

So a simple decision such as to take time to eat properly is in fact very difficult to carry out. Our new rule may work for a few days or a few weeks, but it won’t be long before the pressures of work force us to make an exception to it. Before many days are up the exception will have become the rule and we are right back where we started. However much we rationalise the reasons why our decision didn’t get carried out, we know deep in the heart of us that it was not really the circumstances that were to blame. We secretly acknowledge that there is something missing from our ability to carry out a decision once we have made it.

In fact if we are honest it sometimes feels as if it is easier to get other people to do what we want them to do than it is to get ourselves to do what we want to do. We like to think of ourselves as a sort of separate entity sitting in our body controlling it, but when we look at the way we behave most of the time that is not really the case. The body controls itself most of the time. We have a delusion of control. That’s what it is – a delusion.

If we want to see how little control we have over ourselves, all most of us have to do is to look in the mirror. You might like to do that now. Ask yourself as you look at your image:

  • Is my health the way I want it to be?
  • Is my fitness the way I want it to be?
  • Is my weight the way I want it to be?
  • Is the way I am dressed the way I want it to be?

I am not asking you here to assess what sort of body you were born with, but what you have made of it and how good a state of repair you are keeping it in.

It may be that you are healthy, fit, slim and well-dressed. In which case have a look round at the state of your office or workplace:

  • Is it as well organised as you want it to be?
  • Is it as tidy as you want it to be?
  • Do all your office systems (filing, invoicing, correspondence, etc.) work the way you want them to work?

If so, then you probably don’t need to be reading this book.

I’ve just asked you to look at two aspects of your life that are under your direct control and are very little influenced by outside factors. If these things which are solely affected by you are not the way you want them to be, then in what sense can you be said to be in control at all?

A lot of this difficulty is due to the way our brains are organised. We have the illusion that we are a single person who acts in a ‘unified’ way. But it takes only a little reflection (and examination of our actions, as above) to realise that this is not the case at all. Our brains are made up of numerous different parts which deal with different things and often have different agendas.

Occasionally we attempt to deal with the difference between the facts and our plans by saying something like, “We will approximately do such and such. Of course we know that it isn’t going to be exactly like this, but at least this plan will be an approximate guide.” But this does not really avoid the difficulty. Even “this plan will be an approximate guide” is a statement about the facts that might turn out to be false; and even if it does not turn out to be false, the fact that we have set it down as approximate will likely make it guide our actions more weakly than it would have if we had said, “this is what we will do.” In other words, we are likely to achieve our goal less perfectly, precisely because we tried to make our statement more accurate. This is the reverse of the situation discussed in a previous post, where one gives up some accuracy, albeit vaguely, for the sake of another goal such as fitting in with associates or for literary enjoyment.

All of this seems to indicate that the general proposal about decisions was at least roughly correct. It is not possible simply to say that decisions are one thing and beliefs entirely another. If they were two entirely separate things, there would be no conflict at all, at least of this kind, between accuracy and one’s other goals; but things do not turn out this way.

Self-Fulfilling Prophecy

We can formulate a number of objections to the thesis argued in the previous post.

First, if a belief that one is going to do something is the same as the decision to do it, another person’s belief that I am going to do something should mean that the other person is making a decision for me. But this is absurd.

Second, suppose that I know that I am going to be hit on the head and suffer from amnesia, thus forgetting all about these considerations. I may believe that I will eat breakfast tomorrow, but this is surely not a decision to do so.

Third, suppose someone wants to give up smoking. He may firmly hold the opinion that whatever he does, he will sometimes smoke within the next six months, not because he wants to do so, but because he does not believe it possible that he do otherwise. We would not want to say that he decided not to give up smoking.

Fourth, decisions are appropriate objects of praise and blame. We seem at least somewhat more reluctant to praise and blame beliefs, even if it is sometimes done.

Fifth, suppose someone believes, “I will kill Peter tomorrow at 4:30 PM.” We will wish to blame him for deciding to kill Peter. But if he does kill Peter tomorrow at 4:30, he held a true belief. Even if beliefs can be praised or blamed, it seems implausible that a true belief should be blamed.

The objections are helpful. With their aid we can see that there is indeed a flaw in the original proposal, but that it is nonetheless somewhat on the right track. A more accurate proposal would be this: a decision is a voluntary self-fulfilling prophecy as understood by the decision maker. I will explain as we consider the above arguments in more detail.

In the first argument, in the case of one person making a decision for another, the problem is that a mere belief that someone else is going to do something is not self-fulfilling. If I hold a belief that I myself will do something, the belief will tend to cause its own truth, just as suggested in the previous post. But believing that someone else will do something will not in general cause that person to do anything. Consider the following situation: a father says to his children as he departs for the day, “I am quite sure that the house will be clean when I get home.” If the children clean the house during his absence, suddenly it is much less obvious that we should deny that this was the father’s decision. In fact, the only reason this is not truly the father’s decision, without any qualification at all, is that it does not sufficiently possess the characteristics of a self-fulfilling prophecy. First, in the example it does not seem to matter whether the father believes what he says, but only whether he says it. Second, since it is in the power of the children to fail to clean the house in any case, there seems to be a lack of sufficient causal connection between the statement and the cleaning of the house. Suppose belief did matter: suppose that the children will know whether he believes what he says or not, and suppose additionally that his belief had an infallible power to make them clean the house. In that case it would be quite reasonable to say, without any qualification, “He decided that his children would clean the house during his absence.” Likewise, even if the father falsely believes that he has such an infallible power, in a sense we could rightly describe him as trying to make that decision, just as I might say, “I decided to open the door,” even if my belief that the door could be opened turns out to be false when I try it; the door may be locked. This is why I included the clause “as understood by the decision maker” in the above proposal. This is a typical character of moral analysis: human action must be understood from the perspective of the one who acts.

In the amnesia case, there is a similar problem: due to the amnesia, the person’s current beliefs do not have a causal connection with his later actions. In addition, if we consider such things as “eating breakfast,” there might be a certain lack of causal connection in any case; the person would likely eat breakfast whether or not he formulates any opinion about what he will do. And to this degree we might feel it implausible to say that his belief that he will eat breakfast is a decision, even without the amnesia. It is not understood by the subject as a self-fulfilling prophecy.

In the case of giving up smoking, there are several problems. Here the subject does not believe that there is any causal connection between his beliefs and his actions: whatever he believes, he thinks, he is in fact going to smoke. Thus, in his opinion, if he believes that he will stop smoking completely, he will simply hold a false belief without getting any benefit from it; he will still smoke. So since the belief is false, and without benefit, at least as he understands it, there is no reason for him to hold it. Consequently, he holds the opposite belief. But this is not a decision, since he does not understand it as causing his smoking, which he expects to happen whether or not he believes it will.

In such cases in real life, we are in fact sometimes tempted to say that the person is choosing not to give up smoking. And we are tempted to say this to the extent that it seems to us that his belief should have the causal power that he denies it has: his denial seems to stem from the desire to smoke. If he really wanted to give up smoking, we think, he could simply believe that he would give it up, and in such a way that the belief would come true. He does not, we think, because he wants to smoke, and so does not want to give up smoking. In reality this is a question of degree, and this analysis can have some truth. Consider the following from St. Augustine’s Confessions (Book VIII, Ch. 7-8):

Finally, in the very fever of my indecision, I made many motions with my body; like men do when they will to act but cannot, either because they do not have the limbs or because their limbs are bound or weakened by disease, or incapacitated in some other way. Thus if I tore my hair, struck my forehead, or, entwining my fingers, clasped my knee, these I did because I willed it. But I might have willed it and still not have done it, if the nerves had not obeyed my will. Many things then I did, in which the will and power to do were not the same. Yet I did not do that one thing which seemed to me infinitely more desirable, which before long I should have power to will because shortly when I willed, I would will with a single will. For in this, the power of willing is the power of doing; and as yet I could not do it. Thus my body more readily obeyed the slightest wish of the soul in moving its limbs at the order of my mind than my soul obeyed itself to accomplish in the will alone its great resolve.

How can there be such a strange anomaly? And why is it? Let thy mercy shine on me, that I may inquire and find an answer, amid the dark labyrinth of human punishment and in the darkest contritions of the sons of Adam. Whence such an anomaly? And why should it be? The mind commands the body, and the body obeys. The mind commands itself and is resisted. The mind commands the hand to be moved and there is such readiness that the command is scarcely distinguished from the obedience in act. Yet the mind is mind, and the hand is body. The mind commands the mind to will, and yet though it be itself it does not obey itself. Whence this strange anomaly and why should it be? I repeat: The will commands itself to will, and could not give the command unless it wills; yet what is commanded is not done. But actually the will does not will entirely; therefore it does not command entirely. For as far as it wills, it commands. And as far as it does not will, the thing commanded is not done. For the will commands that there be an act of will–not another, but itself. But it does not command entirely. Therefore, what is commanded does not happen; for if the will were whole and entire, it would not even command it to be, because it would already be. It is, therefore, no strange anomaly partly to will and partly to be unwilling. This is actually an infirmity of mind, which cannot wholly rise, while pressed down by habit, even though it is supported by the truth. And so there are two wills, because one of them is not whole, and what is present in this one is lacking in the other.

St. Augustine analyzes this in the sense that he did not “will entirely” or “command entirely.” If we analyze it in our terms, he does not expect in fact to carry out his intention, because he does not want to, and he knows that people do not do things they do not want to do. In a similar way, in some cases the smoker does not fully want to give up smoking, and therefore believes himself incapable of simply deciding to give up smoking, because if he made that decision, it would happen, and he would not want it to happen.

In the previous post, I mentioned an “obvious objection” at several points. This was that the account as presented there leaves out the role of desire. Suppose someone believes that he will in fact go to Vienna, but does not wish to go there. Then when the time comes to buy a ticket, it is very plausible that he will not buy one. Yes, this will mean that he will stop believing that he will go to Vienna. But this is different from the case where a person has “decided” to go and then changes his mind. The person who does not want to go is not changing his mind at all, except about the factual question. It seems absurd (and it is) to characterize a decision without any reference to what the person wants.

This is why we have characterized a decision here as “voluntary”, “self-fulfilling,” and “as understood by the decision maker.” It is indeed the case that the person holds a belief, but he holds it because he wants to, and because he expects it to cause its own fulfillment, and he desires that fulfillment.

Consider the analysis in the previous post of the road to point C. Why is it reasonable for anyone, whether the subject or a third party, to conclude that the person will take road A? This is because we know that the subject wishes to get to point C. It is his desire to get to point C that will cause him to take road A, once he understands that A is the only way to get there.

Someone might respond that in this case we could characterize the decision as just a desire: the desire to get to point C. The problem is that the example is overly simplified compared to real life. Ordinarily there is not simply a single way to reach our goals. And the desire to reach the goal may not determine which particular way we take, so something else must determine it. This is precisely why we need to make decisions at all. We could in fact avoid almost anything that feels like a decision, waiting until something else determined the matter, but if we did, we would live very badly indeed.

When we make a complicated plan, there are two interrelated factors explaining why we believe it to be factually true that we will carry out the plan. We know that we desire the goal, and we expect this desire for the goal to move us along the path towards the goal. But since we also have other desires, and there are various paths towards the goal, some better than others, there are many ways that we could go astray before reaching the goal, either by taking a path to some other goal, or by taking a path less suited to the goal. So we also expect the details of our plan to keep us on the particular course that we have planned, which we suppose to be the best, or at least the best path considering our situation as a whole. If we did not keep those details in mind, we would not be likely to remain on this precise path. As an example, I might plan to stop at a grocery store on my way home from work, out of the desire to possess a sufficient stock of groceries; but if I do not keep the plan in mind, my desire to get home may cause me to go past the store without stopping. Again, this is why our explanation of decision is that it is a self-fulfilling prophecy, one explicitly understood by the subject as such: by saying “I will use A, B, and C, to get to goal Z,” we expect that keeping these details in mind, together with our desire for Z, will move us along this precise path, and we wish to follow this path for the sake of Z.
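
A toy model can illustrate the grocery-store example. The “strengths” of the competing desires below are invented numbers; the only claim is structural: the remembered detail of the plan adds weight at the moment of action, and without it a standing desire wins out:

```python
def at_the_store(plan_in_mind: bool) -> str:
    """What happens at the store on the way home, on this toy model."""
    desire_home = 6            # invented strength of the desire to get home
    desire_groceries = 5       # invented strength of the desire for groceries
    if plan_in_mind:
        desire_groceries += 3  # keeping the plan in mind adds salience
    if desire_groceries > desire_home:
        return "stop for groceries, then drive home"
    return "drive straight past the store"

print(at_the_store(plan_in_mind=True))   # stop for groceries, then drive home
print(at_the_store(plan_in_mind=False))  # drive straight past the store
```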

There is a lot more that could be said about this. For example, it is not difficult to see here an explanation for the fact that such complicated plans rarely work out precisely in practice, even in the absence of external impediments. We expect our desire for the goal to keep us on track, but in fact we have other desires, and there are an indefinite number of possibilities for those other desires to make something else happen. Likewise, even if the plan was the best we could work out in advance, there will be numberless details in which there were better options that we did not notice while planning, and we will notice some of these as we proceed along the path. So both the desire for the goal, and the desire for other things, will likely derail the plan. And, of course, most plans will be derailed by external things as well.

A combination of the above factors has the result that I will leave the consideration of the fourth and fifth arguments to another post, even though this was not my original intention, and was not my belief about what would happen.

Decisions as Predictions

Among acts of will, St. Thomas distinguishes intention and choice:

The movement of the will to the end and to the means can be considered in two ways. First, according as the will is moved to each of the aforesaid absolutely and in itself. And thus there are really two movements of the will to them. Secondly, it may be considered accordingly as the will is moved to the means for the sake of the end: and thus the movement of the will to the end and its movement to the means are one and the same thing. For when I say: “I wish to take medicine for the sake of health,” I signify no more than one movement of my will. And this is because the end is the reason for willing the means. Now the object, and that by reason of which it is an object, come under the same act; thus it is the same act of sight that perceives color and light, as stated above. And the same applies to the intellect; for if it consider principle and conclusion absolutely, it considers each by a distinct act; but when it assents to the conclusion on account of the principles, there is but one act of the intellect.

Choice is about the means, such as taking medicine in his example, while intention is about the end, as health in his example. This makes sense in terms of how we commonly use the terms. When we do speak of choosing an end, we are normally considering which of several alternative intermediate ends are better means towards an ultimate end. And thus we are “choosing,” not insofar as the thing is an end, but insofar as it is a means towards a greater end that we intend.

Discussing the human mind, we noted earlier that a thing often seems fairly simple when it is considered in general, but turns out to have a highly complex structure when considered in detail. The same thing will turn out to be the case if we attempt to consider the nature of these acts of will in detail.

Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end. I will admit immediately that this hypothesis will not turn out to be entirely right, but as we shall see, the consideration will turn out to be useful.

First we will bring forward a number of considerations in favor of the hypothesis, and then, in another post, some criticisms of it.

First, in favor of the hypothesis, we should consider the fact that believing that one will take a certain course of action is virtually inseparable from deciding to take that course of action; the two are not clearly distinguishable at all. Suppose someone says, “I intend to take my vacation in Paris, but I believe that I will take it in Vienna instead.” On the face of it, this is nonsense. We might make sense of it by saying that the person really meant that he first decided to go to Paris, but then obstacles came up and he realized that it would not be possible. But in that case, he has also changed his decision: he now intends to go to Vienna. It is completely impossible that he currently intends to go to Paris, but fully believes that he will not go, and that he will go to Vienna instead.

Likewise, suppose someone says, “I haven’t yet decided where to take my vacation. But I am quite convinced that I am going to take it in Vienna.” Again, this is almost nonsensical: if he is convinced that he will go to Vienna, we would normally say that he has already made up his mind: it is not true that he has not decided yet. As in the previous case, we might be able to come up with circumstances where someone might say this or something like it. For example, if someone else is attempting to convince him to come to Paris, he might say that he has not yet decided, meaning that he is willing to think about it for a bit, but that he fully expects to end up going to Vienna. But in this case, it is more natural to say that his decision and his certainty that he will go to Vienna are proportional: the only sense in which he has not decided yet is the degree to which he thinks there is some chance that he will change his mind and go to Paris. Thus if there is no chance at all of that, then he is completely decided, while if he is somewhat unsure, his decision is not yet perfect but partial.

Both of the above cases would fit with the claim that a decision is simply a belief about what one is going to do, although they would not necessarily exclude the possibility that it is a separate thing, even if inseparably connected to the belief.

We can also consider beliefs and decisions as something known from their effects. I noted elsewhere that we recognize the nature of desire from its effect, namely from the fact that when we have a desire, we tend to bring about the thing we desire. Insofar as a decision is a rational desire, the same thing applies to decisions as to other kinds of desires. We would not know decisions as decisions, if we never did the things we have decided to do. Likewise, belief is a fairly abstract object, and it is at least plausible that we would come to know it from its more concrete effects.

Now consider the effects of the decision to go to Vienna, compared to the effects of the belief that you will go to Vienna. Both of them will result in you saying, “I am going to go to Vienna.” And if we look at belief as I suggested in the discussion to this post, namely more or less as treating something as a fact, then belief will have other consequences, such as buying a ticket for Vienna. For if you are treating it as a fact that you are going to go there, either you will buy a ticket, or you will give up the belief. In a similar way, if you have decided to go, either you will buy a ticket, or you will change your decision. So the effects of the belief and the effects of the decision seem to be entirely the same. If we know the thing from its effects, then, it seems we should consider the belief and the decision to be entirely the same.

There is an obvious objection here, but as I said the consideration of objections will come later.

Again, consider a situation where there are two roads, road A and road B, to your destination C. There is a fallen bridge along road B, so road B would not be a good route, while road A is a good route. It is reasonable for a third party who knows that you want to get to C and that you have considered the state of the roads, to conclude that you will take road A. But if this is reasonable for someone else, then it is reasonable for you: you know that you want to get to C, and you know that you have considered the state of the roads. So it is reasonable for you to conclude that you will take road A. Note that this is purely about belief: there was no need for an extra “decision” factor. The conclusion that you will factually take road A is a logical conclusion from the known situation. But now that you are convinced that you will take road A, there is no need for you to consider whether to take road A or road B; there is nothing to decide anymore. Everything is already decided as soon as you come to that conclusion, which is a matter of forming a belief. Once again, it seems as though your belief that you will take road A just is your decision, and there is nothing more to it.
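
The claim that nothing is left over for a separate “decision” factor can be put mechanically. In this sketch (the representation of the situation is a toy assumption of mine), the belief about which road one will take is simply deduced from the desire together with the known facts:

```python
# Toy model of the known situation: the roads and their conditions.
roads = {
    "A": {"leads_to_C": True, "passable": True},
    "B": {"leads_to_C": True, "passable": False},  # the fallen bridge
}

def expected_route(wants_C: bool):
    """What anyone, the traveler included, can infer about the route.

    Nothing here is a decision over and above a belief: the conclusion
    follows logically from the desire and the state of the roads."""
    if not wants_C:
        return None  # no conclusion follows about the roads
    viable = [name for name, info in roads.items()
              if info["leads_to_C"] and info["passable"]]
    # With exactly one viable road the belief is settled, and there
    # is nothing left to decide.
    return viable[0] if len(viable) == 1 else None

print(expected_route(True))  # -> A
```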

Once again, there is an obvious objection, but it will have to wait until the next post.

Alien Implant: Newcomb’s Smoking Lesion

In an alternate universe, on an alternate earth, all smokers, and only smokers, get brain cancer. Everyone enjoys smoking, but many resist the temptation to smoke, in order to avoid getting cancer. For a long time, however, there was no known cause of the link between smoking and cancer.

Twenty years ago, autopsies revealed tiny black boxes implanted in the brains of dead persons, connected to their brains by means of intricate wiring. The source and function of the boxes and of the wiring, however, remain unknown. There is a dial on the outside of each box, pointing to one of two positions.

Scientists now know that these black boxes are universal: every human being has one. And in those humans who smoke and get cancer, in every case, the dial turns out to be pointing to the first position. Likewise, in those humans who do not smoke or get cancer, in every case, the dial turns out to be pointing to the second position.

It turns out that when the dial points to the first position, the black box releases dangerous chemicals into the brain which cause brain cancer.

Scientists first formed the reasonable hypothesis that smoking causes the dial to be set to the first position. Ten years ago, however, this hypothesis was definitively disproved. It is now known with certainty that the box is present, and the dial pointing to its position, well before a person ever makes a decision about smoking. Attempts to read the state of the dial during a person’s lifetime, however, result most unfortunately in an explosion of the equipment involved, and the gruesome death of the person.

Some believe that the black box must be reading information from the brain, and predicting a person’s choice. “This is Newcomb’s Problem,” they say. These persons choose not to smoke, and they do not get cancer. Their dials turn out to be set to the second position.

Others believe that such a prediction ability is unlikely. The black box is writing information into the brain, they believe, and causing a person’s choice. “This is literally the Smoking Lesion,” they say. Accepting Andy Egan’s conclusion that one should smoke in such cases, these persons choose to smoke, and they die of cancer. Their dials turn out to be set to the first position.

Still others, more perceptive, note that the argument about prediction or causality is utterly irrelevant for all practical purposes. “The ritual of cognition is irrelevant,” they say. “What matters is winning.” Like the first group, these choose not to smoke, and they do not get cancer. Their dials, naturally, turn out to be set to the second position.
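
A toy simulation shows why the dispute makes no practical difference: whether the box reads the choice (prediction) or writes it (causation), the two hypotheses generate exactly the same observable statistics, and in either case the non-smokers are the ones whose dials point to the second position. The population fractions below are invented for illustration:

```python
import random

random.seed(0)
N = 100_000

def prediction_world():
    """The box reads the brain and predicts the person's choice."""
    smokes = random.random() < 0.4  # assumed fraction of smokers
    dial = 1 if smokes else 2       # perfect prediction
    cancer = dial == 1              # position 1 releases the chemicals
    return smokes, dial, cancer

def causation_world():
    """The box writes to the brain and causes the person's choice."""
    dial = 1 if random.random() < 0.4 else 2  # assumed mix of dial settings
    smokes = dial == 1              # the dial determines the choice
    cancer = dial == 1
    return smokes, dial, cancer

for world in (prediction_world, causation_world):
    people = [world() for _ in range(N)]
    # In both worlds: all and only smokers get cancer, and all and only
    # smokers have dials pointing to the first position.
    assert all(cancer == smokes for smokes, dial, cancer in people)
    assert all((dial == 1) == smokes for smokes, dial, cancer in people)
    print(world.__name__, "reproduces the observed correlations")
```

Since no observation can separate the two hypotheses even in principle, the only practical question is which choice goes with the better outcome.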

 

Smoking Lesion

Andy Egan argues:

The Smoking Lesion

Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke? It seems clear that she should. (Set aside your theoretical commitments and put yourself in Susan’s situation. Would you smoke? Would you take yourself to be irrational for doing so?)

Causal decision theory distinguishes itself from evidential decision theory by delivering the right result for The Smoking Lesion, where its competition – evidential decision theory – does not. The difference between the two theories is in how they compute the relative value of actions. Roughly: evidential decision theory says to do the thing you’d be happiest to learn that you’d done, and causal decision theory tells you to do the thing most likely to bring about good results. Evidential decision theory tells Susan not to smoke, roughly because it treats the fact that her smoking is evidence that she has the lesion, and therefore is evidence that she is likely to get cancer, as a reason not to smoke. Causal decision theory tells her to smoke, roughly because it does not treat this sort of common-cause based evidential connection between an action and a bad outcome as a reason not to perform the action.

Egan’s argument is basically that either she has the lesion or she does not, and she can make no difference to this, and so apparently she can make no difference to whether or not she gets cancer. So if she feels like smoking, she should smoke. If she gets cancer, she was going to get it anyway.
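
To make the contrast concrete, here is a minimal sketch of how the two theories value the options. All of the numbers, the utilities, the prior, and the strength of the correlation, are illustrative assumptions of mine, not anything from Egan’s paper:

```python
U_SMOKE = 1       # assumed enjoyment of smoking
U_CANCER = -100   # assumed disutility of cancer

P_LESION = 0.5                # assumed prior probability of the lesion
P_LESION_GIVEN_SMOKE = 0.9    # assumed correlation: most smokers have it
P_LESION_GIVEN_ABSTAIN = 0.1  # most abstainers lack it
P_CANCER_GIVEN_LESION = 1.0   # in the story, the lesion alone causes cancer

def edt_value(smoke: bool) -> float:
    """Evidential: treat the action as evidence about the lesion."""
    p_lesion = P_LESION_GIVEN_SMOKE if smoke else P_LESION_GIVEN_ABSTAIN
    return (U_SMOKE if smoke else 0) + p_lesion * P_CANCER_GIVEN_LESION * U_CANCER

def cdt_value(smoke: bool) -> float:
    """Causal: the action cannot change the lesion, so use the prior."""
    return (U_SMOKE if smoke else 0) + P_LESION * P_CANCER_GIVEN_LESION * U_CANCER

for name, value in (("EDT", edt_value), ("CDT", cdt_value)):
    choice = "smoke" if value(True) > value(False) else "abstain"
    print(f"{name}: smoke={value(True):.0f}, abstain={value(False):.0f} -> {choice}")
# EDT: smoke=-89, abstain=-10 -> abstain
# CDT: smoke=-49, abstain=-50 -> smoke
```

On numbers like these the two theories disagree in exactly the way Egan describes: for CDT, smoking dominates, since the action cannot change the lesion; for EDT, smoking is bad news about the lesion, and so is to be avoided.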

Answering Egan’s question: if there were a strong correlation like this in reality, I would think that smoking was a bad idea, and would choose not to do it.

We can change the problem somewhat, without altering anything essential, so that every reasonable person will agree on the answer.

Suppose that every person is infallibly predestined to heaven or to hell. This predestination has a 100% correlation with actually going there, and it has effects in the physical world: in some unknown place, there is a physical book with a list of the names of those who are predestined to heaven, and those who are predestined to hell.

But it has nothing to do with the life you live on earth. Instead, when you die, you find yourself in a room with two doors. One is a green door with a label, “Heaven.” The other is a red door with a label, “Hell.” The doors do not actually lead to those places but to the same place, so they have no special causal effect; you arrive at your final destination only later. Predestination to heaven, of course, causes you to choose the green door, while predestination to hell causes you to choose the red door.

You find yourself in this situation. You like red a bit more than green, and so you prefer going through red doors rather than green ones, other things being equal. Do you go through the green door or the red door?

It is clear enough that this situation is equivalent in all essential respects to Egan’s thought experiment. We can rephrase his version:

“Susan is debating whether or not to go through the red door. She knows that going through the red door is perfectly correlated with going to hell, but only because there is a common cause – a condition that tends to cause both going through the red door and going to hell. Once we fix the presence or absence of this condition, there is no additional correlation between going through the red door and going to hell. Susan prefers going through the red door without going to hell to not going through the red door without going to hell, and prefers going through the red door with going to hell to not going through the red door with going to hell. Should Susan go through the red door? It seems clear that she should. (Set aside your theoretical commitments and put yourself in Susan’s situation. Would you go through the red door? Would you take yourself to be irrational for doing so?)”

It should be clear that Egan is wrong. Don’t go through the red door, and don’t smoke.