Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what a mind in general requires in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end a sense of free will probably could not arise in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world to learn about itself and its own effects on the world. This sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
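The contrast between these two ways of self-prediction can be sketched in a toy example (my own illustration; the options and the “tastiness” numbers are made-up assumptions, not anything from the text beyond the chocolate/vanilla case):

```python
# Two toy self-predictors for the choice between chocolate and vanilla.
# The history and the tastiness values are illustrative assumptions.

history = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]

def predict_by_habit(history):
    """Efficient-cause style: induce the most frequent past choice."""
    return max(set(history), key=history.count)

def predict_by_goal(tastiness):
    """Final-cause style: given a goal inferred from past choices
    (here, pleasant taste), predict whichever option best serves it."""
    return max(tastiness, key=tastiness.get)

# Suppose reflection on past choices suggests pleasant taste is the goal:
tastiness = {"chocolate": 0.6, "vanilla": 0.8}

print(predict_by_habit(history))   # habit-based prediction
print(predict_by_goal(tastiness))  # goal-based prediction
```

Here both predictors agree on vanilla; they would come apart precisely where habit and goal conflict, which is where the second way of thinking starts to look like genuine goal-seeking rather than mere repetition.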

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Blaming the Prophet

Consider the fifth argument in the last post. Should we blame a person for holding a true belief? At this point it should not be too difficult to see that the truth of the belief is not the point. Elsewhere we have discussed a situation in which one cannot possibly hold a true belief, because whatever belief one holds on the matter, it will cause itself to be false. In a similar way, although with a different sort of causality, the problem with the person’s belief that he will kill someone tomorrow is not that it is true, but that it causes itself to be true. If the person did not expect to kill someone tomorrow, he would not take a knife with him to the meeting, etc., and thus would not kill anyone. So just as in the other situation it is not a question of holding a true belief or a false belief, but of which false belief one will hold, here it is not a question of holding a true belief or a false belief, but of which true belief one will hold: one that includes someone getting killed, or one that excludes it. Truth will be there either way, and is not the reason for praise or blame: the person is blamed for the desire to kill someone, and praised (or at least not blamed) for wishing to avoid this. This simply shows the need for the qualifications added in the previous post: if the person’s belief is voluntary, and held for the sake of coming true, it is very evident why blame is needed.

We have not specifically addressed the fourth argument, but this is perhaps unnecessary given the above response to the fifth. This blog in general has advocated the idea of voluntary beliefs, and in principle these can be praised or blamed. To the degree that we are less willing to do so, however, this may be a question of emphasis. When we talk about a belief, we are more concerned about whether it is true or not, and evidence in favor of it or against it. Praise or blame will mainly come in insofar as other motives are involved, insofar as they strengthen or weaken a person’s wish to hold the belief, or insofar as they potentially distort the person’s evaluation of the evidence.

Nonetheless, the factual question “is this true?” is a different question from the moral question, “should I believe this?” We can see the struggle between these questions, for example, in a difficulty that people sometimes have with willpower. Suppose that a smoker decides to give up smoking, and suppose that they believe they will not smoke for the next six months. Three days later, let us suppose, they smoke a cigarette after all. At that point, the person’s resolution is likely to collapse entirely, so that they return to smoking regularly. One might ask why this happens. Since the person did not smoke for three days, it should be perfectly possible, at least, for them to smoke only once every three days, instead of going back to their former practice. The problem is that the person has received evidence directly indicating the falsity of “I will not smoke for the next six months.” They still might have some desire for that result, but they do not believe that their belief has the power to bring this about, and in fact it does not. The belief would not be self-fulfilling, and in fact it would be false, so they cease to hold it. It is as if someone attempts to open a door and finds it locked; once they know it is locked, they can no longer choose to open the door, because they cannot choose something that does not appear to be within their power.

Mark Forster, in Chapter 1 of his book Do It Tomorrow, previously discussed here, talks about similar issues:

However, life is never as simple as that. What we decide to do and what we actually do are two different things. If you think of the decisions you have made over the past year, how many of them have been satisfactorily carried to a conclusion or are progressing properly to that end? If you are like most people, you will have acted on some of your decisions, I’m sure. But I’m also sure that a large proportion will have fallen by the wayside.

So a simple decision such as to take time to eat properly is in fact very difficult to carry out. Our new rule may work for a few days or a few weeks, but it won’t be long before the pressures of work force us to make an exception to it. Before many days are up the exception will have become the rule and we are right back where we started. However much we rationalise the reasons why our decision didn’t get carried out, we know deep in the heart of us that it was not really the circumstances that were to blame. We secretly acknowledge that there is something missing from our ability to carry out a decision once we have made it.

In fact if we are honest it sometimes feels as if it is easier to get other people to do what we want them to do than it is to get ourselves to do what we want to do. We like to think of ourselves as a sort of separate entity sitting in our body controlling it, but when we look at the way we behave most of the time that is not really the case. The body controls itself most of the time. We have a delusion of control. That’s what it is – a delusion.

If we want to see how little control we have over ourselves, all most of us have to do is to look in the mirror. You might like to do that now. Ask yourself as you look at your image:

  • Is my health the way I want it to be?
  • Is my fitness the way I want it to be?
  • Is my weight the way I want it to be?
  • Is the way I am dressed the way I want it to be?

I am not asking you here to assess what sort of body you were born with, but what you have made of it and how good a state of repair you are keeping it in.

It may be that you are healthy, fit, slim and well-dressed. In which case have a look round at the state of your office or workplace:

  • Is it as well organised as you want it to be?
  • Is it as tidy as you want it to be?
  • Do all your office systems (filing, invoicing, correspondence, etc.) work the way you want them to work?

If so, then you probably don’t need to be reading this book.

I’ve just asked you to look at two aspects of your life that are under your direct control and are very little influenced by outside factors. If these things which are solely affected by you are not the way you want them to be, then in what sense can you be said to be in control at all?

A lot of this difficulty is due to the way our brains are organised. We have the illusion that we are a single person who acts in a ‘unified’ way. But it takes only a little reflection (and examination of our actions, as above) to realise that this is not the case at all. Our brains are made up of numerous different parts which deal with different things and often have different agendas.

Occasionally we attempt to deal with the difference between the facts and our plans by saying something like, “We will approximately do such and such. Of course we know that it isn’t going to be exactly like this, but at least this plan will be an approximate guide.” But this does not really avoid the difficulty. Even “this plan will be an approximate guide” is a statement about the facts that might turn out to be false; and even if it does not turn out to be false, the fact that we have set it down as approximate will likely make it guide our actions more weakly than it would have if we had said, “this is what we will do.” In other words, we are likely to achieve our goal less perfectly, precisely because we tried to make our statement more accurate. This is the reverse of the situation discussed in a previous post, where one gives up some accuracy, albeit vaguely, for the sake of another goal such as fitting in with associates or for literary enjoyment.

All of this seems to indicate that the general proposal about decisions was at least roughly correct. It is not possible simply to say that decisions are one thing and beliefs entirely another. If these were two entirely separate things, there would be no conflict of this kind between accuracy and one’s other goals, and things do not turn out this way.

Self-Fulfilling Prophecy

We can formulate a number of objections to the thesis argued in the previous post.

First, if a belief that one is going to do something is the same as the decision to do it, another person’s belief that I am going to do something should mean that the other person is making a decision for me. But this is absurd.

Second, suppose that I know that I am going to be hit on the head and suffer from amnesia, thus forgetting all about these considerations. I may believe that I will eat breakfast tomorrow, but this is surely not a decision to do so.

Third, suppose someone wants to give up smoking. He may firmly hold the opinion that whatever he does, he will sometimes smoke within the next six months, not because he wants to do so, but because he does not believe it possible that he do otherwise. We would not want to say that he decided not to give up smoking.

Fourth, decisions are appropriate objects of praise and blame. We seem at least somewhat more reluctant to praise and blame beliefs, even if it is sometimes done.

Fifth, suppose someone believes, “I will kill Peter tomorrow at 4:30 PM.” We will wish to blame him for deciding to kill Peter. But if he does kill Peter tomorrow at 4:30, he held a true belief. Even if beliefs can be praised or blamed, it seems implausible that a true belief should be blamed.

The objections are helpful. With their aid we can see that there is indeed a flaw in the original proposal, but that it is nonetheless somewhat on the right track. A more accurate proposal would be this: a decision is a voluntary self-fulfilling prophecy as understood by the decision maker. I will explain as we consider the above arguments in more detail.

In the first argument, in the case of one person making a decision for another, the problem is that a mere belief that someone else is going to do something is not self-fulfilling. If I hold a belief that I myself will do something, the belief will tend to cause its own truth, just as suggested in the previous post. But believing that someone else will do something will not in general cause that person to do anything. Consider the following situation: a father says to his children as he departs for the day, “I am quite sure that the house will be clean when I get home.” If the children clean the house during his absence, suddenly it is much less obvious that we should deny that this was the father’s decision. In fact, the only reason this is not truly the father’s decision, without any qualification at all, is that it does not sufficiently possess the characteristics of a self-fulfilling prophecy. First, in the example it does not seem to matter whether the father believes what he says, but only whether he says it. Second, since it is in the power of the children to fail to clean the house in any case, there seems to be a lack of sufficient causal connection between the statement and the cleaning of the house. Suppose belief did matter: namely, that the children would know whether or not he believes what he says. And suppose additionally that his belief had an infallible power to make his children clean the house. In that case it would be quite reasonable to say, without any qualification, “He decided that his children would clean the house during his absence.” Likewise, even if the father falsely believes that he has such an infallible power, in a sense we could rightly describe him as trying to make that decision, just as we might say, “I decided to open the door,” even if my belief that the door could be opened turns out to be false when I try it; the door may be locked.
This is why I included the clause “as understood by the decision maker” in the above proposal. This is a typical character of moral analysis; human action must be understood from the perspective of the one who acts.

In the amnesia case, there is a similar problem: due to the amnesia, the person’s current beliefs do not have a causal connection with his later actions. In addition, if we consider such things as “eating breakfast,” there might be a certain lack of causal connection in any case; the person would likely eat breakfast whether or not he formulates any opinion about what he will do. And to this degree we might feel it implausible to say that his belief that he will eat breakfast is a decision, even without the amnesia. It is not understood by the subject as a self-fulfilling prophecy.

In the case of giving up smoking, there are several problems. In this case, the subject does not believe that there is any causal connection between his beliefs and his actions. Regardless of what he believes, he thinks, he is going to smoke in fact. Thus, in his opinion, if he believes that he will stop smoking completely, he will simply hold a false belief without getting any benefit from it; he will still smoke, and his belief will just be false. So since the belief is false, and without benefit, at least as he understands it, there is no reason for him to hold this belief. Consequently, he holds the opposite belief. But this is not a decision, since he does not understand it as causing his smoking, which is something that is expected to happen whether or not he believes it will.

In such cases in real life, we are in fact sometimes tempted to say that the person is choosing not to give up smoking. And we are tempted to say this to the extent that it seems to us that his belief should have the causal power that he denies it has: his denial seems to stem from the desire to smoke. If he wanted to give up smoking, we think, he could simply accept that he is able to hold this belief, and to hold it in such a way that it would come true. He does not, we think, because he wants to smoke, and so does not want to give up smoking. In reality this is a question of degree, and this analysis can have some truth. Consider the following from St. Augustine’s Confessions (Book VIII, Ch. 7-8):

Finally, in the very fever of my indecision, I made many motions with my body; like men do when they will to act but cannot, either because they do not have the limbs or because their limbs are bound or weakened by disease, or incapacitated in some other way. Thus if I tore my hair, struck my forehead, or, entwining my fingers, clasped my knee, these I did because I willed it. But I might have willed it and still not have done it, if the nerves had not obeyed my will. Many things then I did, in which the will and power to do were not the same. Yet I did not do that one thing which seemed to me infinitely more desirable, which before long I should have power to will because shortly when I willed, I would will with a single will. For in this, the power of willing is the power of doing; and as yet I could not do it. Thus my body more readily obeyed the slightest wish of the soul in moving its limbs at the order of my mind than my soul obeyed itself to accomplish in the will alone its great resolve.

How can there be such a strange anomaly? And why is it? Let thy mercy shine on me, that I may inquire and find an answer, amid the dark labyrinth of human punishment and in the darkest contritions of the sons of Adam. Whence such an anomaly? And why should it be? The mind commands the body, and the body obeys. The mind commands itself and is resisted. The mind commands the hand to be moved and there is such readiness that the command is scarcely distinguished from the obedience in act. Yet the mind is mind, and the hand is body. The mind commands the mind to will, and yet though it be itself it does not obey itself. Whence this strange anomaly and why should it be? I repeat: The will commands itself to will, and could not give the command unless it wills; yet what is commanded is not done. But actually the will does not will entirely; therefore it does not command entirely. For as far as it wills, it commands. And as far as it does not will, the thing commanded is not done. For the will commands that there be an act of will–not another, but itself. But it does not command entirely. Therefore, what is commanded does not happen; for if the will were whole and entire, it would not even command it to be, because it would already be. It is, therefore, no strange anomaly partly to will and partly to be unwilling. This is actually an infirmity of mind, which cannot wholly rise, while pressed down by habit, even though it is supported by the truth. And so there are two wills, because one of them is not whole, and what is present in this one is lacking in the other.

St. Augustine analyzes this in the sense that he did not “will entirely” or “command entirely.” If we analyze it in our terms, he does not expect in fact to carry out his intention, because he does not want to, and he knows that people do not do things they do not want to do. In a similar way, in some cases the smoker does not fully want to give up smoking, and therefore believes himself incapable of simply deciding to give up smoking, because if he made that decision, it would happen, and he would not want it to happen.

In the previous post, I mentioned an “obvious objection” at several points. This was that the account as presented there leaves out the role of desire. Suppose someone believes that he will in fact go to Vienna, but does not wish to go there. Then when the time comes to buy a ticket, it is very plausible that he will not buy one. Yes, this will mean that he will stop believing that he will go to Vienna. But this is different from the case where a person has “decided” to go and then changes his mind. The person who does not want to go is not changing his mind at all, except about the factual question. It seems absurd (and it is) to characterize a decision without any reference to what the person wants.

This is why we have characterized a decision here as “voluntary”, “self-fulfilling,” and “as understood by the decision maker.” It is indeed the case that the person holds a belief, but he holds it because he wants to, and because he expects it to cause its own fulfillment, and he desires that fulfillment.

Consider the analysis in the previous post of the road to point C. Why is it reasonable for anyone, whether the subject or a third party, to conclude that the person will take road A? This is because we know that the subject wishes to get to point C. It is his desire to get to point C that will cause him to take road A, once he understands that A is the only way to get there.

Someone might respond that in this case we could characterize the decision as just a desire: the desire to get to point C. The problem is that the example is overly simplified compared to real life. Ordinarily there is not simply a single way to reach our goals. And the desire to reach the goal may not determine which particular way we take, so something else must determine it. This is precisely why we need to make decisions at all. We could in fact avoid almost anything that feels like a decision, waiting until something else determined the matter, but if we did, we would live very badly indeed.

When we make a complicated plan, there are two interrelated factors explaining why we believe it to be factually true that we will carry out the plan. We know that we desire the goal, and we expect this desire for the goal to move us along the path towards the goal. But since we also have other desires, and there are various paths towards the goal, some better than others, there are many ways that we could go astray before reaching the goal, either by taking a path to some other goal, or by taking a path less suited to the goal. So we also expect the details of our plan to keep us on the particular course that we have planned, which we suppose to be the best, or at least the best considering our situation as a whole. If we did not keep those details in mind, we would not be likely to remain on this precise path. As an example, I might plan to stop at a grocery store on my way home from work, out of the desire to possess a sufficient stock of groceries, but if I do not keep the plan in mind, my desire to get home may cause me to go past the store without stopping. Again, this is why we have explained decision as a self-fulfilling prophecy, one explicitly understood by the subject as such; by saying “I will use A, B, and C to get to goal Z,” we expect that keeping these details in mind, together with our desire for Z, will move us along this precise path, and we wish to follow this path, for the sake of Z.

There is a lot more that could be said about this. For example, it is not difficult to see here an explanation for the fact that such complicated plans rarely work out precisely in practice, even in the absence of external impediments. We expect our desire for the goal to keep us on track, but in fact we have other desires, and there are an indefinite number of possibilities for those other desires to make something else happen. Likewise, even if the plan was the best we could work out in advance, there will be numberless details in which there were better options that we did not notice while planning, and we will notice some of these as we proceed along the path. So both the desire for the goal, and the desire for other things, will likely derail the plan. And, of course, most plans will be derailed by external things as well.

A combination of the above factors has the result that I will leave the consideration of the fourth and fifth arguments to another post, even though this was not my original intention, and was not my belief about what would happen.

Chastek on Determinism

On a number of occasions, James Chastek has referred to the impossibility of a detailed prediction of the future as an argument for libertarian free will. This is a misunderstanding. It is impossible to predict the future in detail for the reasons given in the linked post, and this has nothing to do with libertarian free will or even any kind of free will at all.

The most recent discussions of this issue at Chastek’s blog are found here and here. The latter post:

Hypothesis: A Laplacian demon, i.e. a being who can correctly predict all future actions, contradicts our actual experience of following instructions with some failure rate.

Set up: You are in a room with two buttons, A and B. This is the same set-up as Soon’s free-will experiment, but the instructions are different.

Instructions: You are told that you will have to push a button every 30 seconds, and that you will have fifty trials. The clock will start when a sheet of paper comes out of a slit in the wall that says A or B. Your instructions are to push the opposite of whatever letter comes out.

The Apparatus: the first set of fifty trials is with a random letter generator. The second set of trials is with letters generated by a Laplacian demon who knows the wave function of the universe and so knows in advance what button will be pushed and so prints out the letter.

The Results: In the first set of trials, which we can confirm with actual experience, the success rate is close to 100%, but, the world being what it is, there is a 2% mistake rate in the responses. In the second set of trials the success rate is necessarily 0%. In the first set of trials, subjects report feelings of boredom, mild indifference, continual daydreaming, etc. The feelings expressed in the second set of trials might be any or all of the following: some say they suddenly developed a pathological desire to subvert the commands of the experiment, others express feelings of being alienated from their bodies, trying to press one button and having their hand fly in the other direction, others insist that they did follow instructions and consider you completely crazy for suggesting otherwise, even though you can point to video evidence of them failing to follow the rules of the experiment, etc.

The Third Trial: Run the trial a third time, this time giving the randomly generated letter to the subject and giving the Laplacian letter to the experimenter. Observe all the trials where the two generate the same letter, and iterate the experiment until one has fifty trials. Our actual experience tells us that the subject will have a 98% success rate, but our theoretical Laplacian demon tells us that the success rate should be necessarily 0%. Since asserting that the random-number generator and the demon will never have the same response would make the error-rate necessarily disappear and cannot explain our actual experience of failures, the theoretical postulation of a Laplacian demon contradicts our actual experience. Q.E.D.

The post is phrased as a proof that Laplacian demons cannot exist, but in fact Chastek intends it to establish the existence of libertarian free will, which is a quite separate thesis; no one would be surprised if Laplacian demons cannot exist in the real world, but many people would be surprised if people turn out to have libertarian free will.

I explain in the comments there the problem with this argument:

Here is what happens when you set up the experiment. You approach the Laplacian demon and ask him to write the letter that the person is going to choose for the second set of 50 trials.

The demon will respond, “That is impossible. I know the wave function of the universe, and I know that there is no possible set of As and Bs such that, if that is the set written, it will be the set chosen by the person. Of course, I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match.”

In other words, you are right that the experiment is impossible, but this is no reason to believe that Laplacian demons are impossible; it is a reason to believe that it is impossible for anything to write what the person is going to do.

E.g. if your argument works, it proves either that God does not exist, or that he does not know the future. Nor can one object that God’s knowledge is eternal rather than of the future, since it is enough if God can write down what is going to happen, as he is thought to have done e.g. in the text, “A virgin will conceive etc.”

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

As another reality check here, according to St. Thomas a dog is “determinate to one” such that in the same circumstances it will do the same thing. But we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

And still another: a relatively simple robot, programmed in the same way. We don’t need a Laplacian demon, since we can predict ourselves in every circumstance what it will do. But we cannot write that down, since then we would predict the opposite of what we wrote. And it is absolutely irrelevant that the robot is an “instrument,” since the argument does not have any premise saying that human beings are not instruments.

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake, and saying, “why did he consistently make a mistake in these cases?” There is no reason; you simply selected those cases.

Chastek responds to this comment in a fairly detailed way. Rather than responding directly to the comment there, I ask him to comment on several scenarios. The first scenario:

If I drop a ball on a table, and I ask you to predict where it is going to first hit the table, and say, “Please predict where it is going to first hit the table, and let me know your prediction by covering the spot with your hand and keeping it there until the trial is over,” is it clear to you that:

a) it will be impossible for you to predict where it is going to first hit in this way, since if you cover a spot it cannot hit there

and

b) this has nothing whatsoever to do with determinism or indeterminism of anything.

The second scenario:

Let’s make up a deterministic universe. It has no human beings, no rocks, nothing but numbers. The wave function of the universe is this: f(x)=x+1, where x is the initial condition and x+1 is the second condition.

We are personally Laplacian demons compared to this universe. We know what the second condition will be for any original condition.

Now give us the option of setting the original condition, and say:

Predict the second condition, and set that as the initial condition. This should lead to a result like (1,1) or (2,2), which contradicts our experience that the result is always higher than the original condition. So the hypothesis that we know the output given the input must be false.

The answer: No. It is not false that we know the output given the input. We know that these do not and cannot match, not because of anything indeterminate, but because the universe is based on the completely deterministic rule that f(x)=x+1, not f(x)=x.

Is it clear:

a) why a Laplacian demon cannot set the original condition to the resulting condition
b) this has nothing to do with anything being indeterminate
c) there is no absurdity in a Laplacian demon for a universe like this
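The toy universe makes this easy to check mechanically. The following sketch (Python, purely as an illustration of the argument, not anything from Chastek's post) shows that we can predict every outcome of the f(x) = x + 1 universe with certainty, while the task "set the initial condition equal to the outcome" remains unsatisfiable, simply because f has no fixed point:

```python
# Toy deterministic universe: the "wave function" is f(x) = x + 1,
# mapping the initial condition x to the second condition x + 1.
def f(x):
    return x + 1

# We are Laplacian demons for this universe: given any initial
# condition, we know the outcome with complete certainty.
for x in range(100):
    assert f(x) == x + 1  # perfect prediction, fully deterministic

# Yet the challenge "make the initial condition equal the outcome"
# is unsatisfiable, because f has no fixed point.
fixed_points = [x for x in range(-1000, 1000) if f(x) == x]
print(fixed_points)  # []
```

The empty list is the whole point: omniscience about the universe's rule coexists with the impossibility of the matching task, and no indeterminism is anywhere in sight.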

The reason why I presented these questions instead of responding directly to his comments is that his comments are confused, and an understanding of these situations would clear up that confusion. For unclear reasons, Chastek failed to respond to these questions. Nonetheless, I will respond to his detailed comments in the light of the above explanations. Chastek begins:

Here are my responses:

That is impossible… I know what will actually be written, and I know what the person will do. But I also know that those do not and cannot match

But “what will actually be written” is, together with a snapshot of the rest of the universe, an initial condition and “what the person will do” is an outcome. Saying these “can never match” means the demon is saying “the laws of nature do not suffice to go from this initial condition to one of its outcomes” which is to deny Laplacian demons altogether.

The demon is not saying that the laws of nature do not suffice to go from an initial condition to an outcome. It is saying that “what will actually be written” is part of the initial conditions, and that it is an initial condition that is a determining factor that prevents itself from matching the outcome. In the case of the dropping ball above, covering the spot with your hand is an initial condition, and it absolutely prevents the outcome being that the ball first hits there. In the case of f(x), x is an initial condition, and it prevents the outcome from being x, since it will always be x+1. In the same way, in Chastek’s experiment, what is written is an initial condition which prevents the outcome from being that thing which was written.

If you answer, as you should, that God cannot write what the person will do, but he can know it, the same applies to the Laplacian demon.

When God announces what will happen he can be speaking about what he intends to do, while a LD cannot. I’m also very impressed by John of St. Thomas’s arguments that the world is not only notionally present to God but even physically present within him, which makes for a dimension of his speaking of the future that could never be said of an LD. This is in keeping with the Biblical idea that God not only looks at the world but responds and interacts with it. The character of prophecy is also very different from the thought experiment we’re trying to do with an LD: LD’s are all about what we can predict in advance, but Biblical prophecies do not seem to be overly concerned with what can be predicted in advance, as should be shown from the long history of failed attempts to turn the NT into a predictive tool.

If God says, “the outcome will be A,” and then consistently causes the person to choose A even when the person has hostile intentions, this will be contrary to our experience in the same way that the Laplacian demon would violate our experience if it always got the outcome right. You can respond, “ok, but that’s fine, because we’re admitting that God is a cause, but the Laplacian demon is not supposed to be affecting the outcome.” The problem with the response is that God is supposed to be the cause all of the time, not merely some of the time; so why should he not also say what is going to happen, since he is causing it anyway?

I agree that prophecy in the real world in fact never tells us much detail about the future, and this is verified in all biblical prophecies and in all historical cases such as the statements about the future made by the Fatima visionaries. I also say that even in principle God could not consistently predict a person’s actions in advance, and show him those predictions, without violating his experience of choice; but I say that this is for the reasons given here.

But the point of my objection was not about how prophecy works in the real world. The point was that Catholic doctrine seems to imply that God could, if he wanted, announce what the daily weather is going to be for the next year. It would not bother me personally if this turns out to be completely impossible; but is Chastek prepared to say the same? The real issues with the Laplacian demon are the same: knowing exactly what is going to happen, and to what degree it can announce what it knows.

we can easily train a dog in such a way that no one can possibly write down the levers it will choose, since it will be trained to choose the opposite ones.

Such an animal would follow instructions with some errors, and so would be a fine test subject for my experiment. This is exactly what my subject does in trial #1. I say the same for your robot example.

(ADDED LATER) I’m thankful for this point, and developed it for reasons given above on the thread.

This seems to indicate the source of the confusion, relative to my examples of covering the place where the ball hits, and the case of the function f(x) = x+1. There is no error rate in these situations: the ball never hits the spot you cover, and f(x) never equals x.

But this is really quite irrelevant. The reason the Laplacian demon says that the experiment is impossible has nothing to do with the error rate, but with the anti-correlation between what is written and the outcome. Consider: suppose in fact you never make a mistake. There is no error rate. Nonetheless, the demon still cannot say what you are going to do, because you always do the opposite of what it says. Likewise, even if the dog never fails to do what it was trained to do, it is impossible for the Laplacian demon to say what it is going to do, since it always does the opposite. The same is true for the robot. In other words, my examples show the reason why the experiment is impossible, without implying that a Laplacian demon is impossible.
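The anti-correlation point can be made concrete. In this sketch (an illustration under assumed names, not anyone's actual experiment), the subject is a deterministic function that always does the opposite of whatever letter is shown to it. Prediction is perfect, yet no announcement can match the outcome:

```python
# A fully deterministic "subject" with no error rate at all: it always
# chooses the opposite of whatever letter is announced to it.
def subject(shown):
    return 'B' if shown == 'A' else 'A'

# A silent demon predicts perfectly: knowing what will be shown, it
# knows exactly what the subject will choose.
for shown in ('A', 'B'):
    predicted = 'B' if shown == 'A' else 'A'
    assert subject(shown) == predicted

# But no announcement can match the outcome: whatever is written
# becomes an initial condition that causes the opposite choice.
for announced in ('A', 'B'):
    assert subject(announced) != announced
```

Nothing here is indeterminate; the demon knows everything, and precisely because it knows everything, it knows its announcement cannot match the choice.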

We can easily reconstruct my examples to contain an error rate, and nonetheless prediction will be impossible for the same reasons, without implying that anything is indeterminate. For example:

Suppose that the world is such that every tenth time you try to cover a spot, your hand slips off and stops blocking it. I specify every tenth time to show that determinism has nothing to do with this: the setup is completely determinate. In this situation, you are able to indicate the spot where the ball will hit every tenth time, but no more often than that.

Likewise suppose we have f(x) = x+1, with one exception such that f(5) = 5. If we then ask the Laplacian demon (namely ourselves) to provide five values of x such that the output equals the input, we will be able to supply only one such case, not five. Since this universe (the function universe) is utterly deterministic, the fact that we cannot present five such cases does not indicate anything indeterminate. It just indicates a determinate fact about how the function universe works.
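Again this can be checked directly. Assuming the modified rule f(5) = 5 (the "deterministic error rate" just described; the code is only an illustration), a quick search confirms there is exactly one case where the output equals the input:

```python
# The same toy universe, now with a single deterministic "exception":
# f(5) = 5, and f(x) = x + 1 everywhere else.
def f(x):
    return 5 if x == 5 else x + 1

# Exactly one fixed point exists, so the demon (us) can exhibit one
# matching case and no more -- a determinate fact, not indeterminism.
fixed = [x for x in range(-1000, 1000) if f(x) == x]
print(fixed)  # [5]
```

Asked for five matching cases, we can produce only this one, and the shortfall follows from the rule itself, not from anything undetermined.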

As for the third set, if I understood it correctly you are indeed cherry picking — you are simply selecting the trials where the human made a mistake,

LD’s can’t be mistaken. If they foresee outcome O from initial conditions C, then no mistake can fail to make O come about. But this isn’t my main point, which is simply to repeat what I said to David: cherry picking requires disregarding evidence that goes against your conclusion, but the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.

I said “if I understood it correctly” because the situation was not clearly laid out. I understood the setup to be this: the Laplacian demon writes out fifty letters, A or B, being the letters it sees that I am going to write. It does not show me this series of letters. Instead, a random process outputs a series of letters, A or B, and each time I try to select the opposite letter.

Given this setup, what the Laplacian demon writes always matches what I select. And most of the time, both are the opposite of what was output by the random process. But occasionally I make a mistake, that is, I fail to select the opposite letter, and choose the same letter that the random process chose. In these cases, since the Laplacian demon still knew what was going to happen, the demon’s letter also matches the random process letter, and my letter.

Now, Chastek says, consider only the cases where the demon’s letter is the same as the random process letter. It will turn out that over those cases, I have a 100% failure rate: that is, in every such case I selected the same letter as the random process. According to him, we should consider this surprising, since we would not normally have a 100% failure rate. This is not cherry picking, he says, because “the times when the random number generator and the LD disagree provide no evidence whether LD’s are consistent with our experience of following instructions with some errors.”

The problem with this should be obvious. Let us consider demon #2: he looks at what the person writes, and then writes down the same thing. Is this demon possible? There will be some cases where demon #2 writes down the opposite of what the random process output: those will be the cases where the person did not make a mistake. But there will be other cases where the person makes a mistake. In those cases, what the person writes, and what demon #2 writes, will match the output of the random process. Consider only those cases. The person has a 100% failure rate in those cases. The cases where the random process and demon #2 disagree provide no evidence whether demon #2 is consistent with our experience, so this is not cherry picking. Now it is contrary to our experience to have a 100% failure rate. So demon #2 is impossible.

This result is of course absurd: demon #2 is obviously entirely possible, since otherwise making copies of things would be impossible. This is sufficient to establish that Chastek’s response is mistaken. He is indeed cherry picking: he simply selected the cases where the human made a mistake, and noted that there was a 100% failure rate in those cases.
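The selection effect behind demon #2's spurious "100% failure rate" is easy to reproduce. In this simulation (a sketch: the 2% error rate comes from the experiment's description, while the trial count and random seed are illustrative assumptions), conditioning on the trials where the copying demon agrees with the random process automatically selects exactly the subject's mistakes:

```python
import random

random.seed(0)
ERROR_RATE = 0.02  # the experiment's stated 2% mistake rate

trials = []
for _ in range(10_000):
    r = random.choice('AB')              # output of the random process
    intended = 'B' if r == 'A' else 'A'  # subject tries the opposite
    # With probability ERROR_RATE the subject slips and matches r.
    chosen = r if random.random() < ERROR_RATE else intended
    demon2 = chosen                      # demon #2 simply copies the choice
    trials.append((r, chosen, demon2))

# Condition on the trials where demon #2 agrees with the random process.
agree = [(r, c, d) for (r, c, d) in trials if d == r]

# In every such trial the subject "failed" (chose the same letter as r):
# not because anything forced failure, but because agreement with the
# random process just IS the subject's failure, by construction.
assert all(c == r for (r, c, d) in agree)
assert 0 < len(agree) < len(trials) * 0.1  # roughly the 2% of mistakes
```

The 100% failure rate within the selected subset is guaranteed by how the subset was selected, which is the definition of cherry picking; it tells us nothing about whether demon #2, or the Laplacian demon, is possible.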

In other words, we do not need a formal answer to Chastek’s objection to see that there is something very wrong with it; but the formal answer is that the cases where the demon disagrees with the random process do indeed provide some evidence. The question is whether the existence of the demon is consistent with “our experience of following instructions with some errors.” But we cannot have this experience without sometimes following the instructions correctly; being right is part of this experience, just like being wrong. And the cases where the demon disagrees with the random process are cases where we follow the instructions correctly, and such cases provide evidence that the demon is possible.

Chastek provides an additional comment about the case of the dog:

Just a note, one point I am thankful to EU for is the idea that a trained dog might be a good test subject too. If this is right, then the recursive loop might not be from intelligence as such but the intrinsic indeterminism of nature, which we find in one way through (what Aristotle called) matter being present in the initial conditions and the working of the laws and in another through intelligence. But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology.

I was pointing to St. Thomas in my response with the hope that St. Thomas’s position would at least be seen as reasonable; and there is no question that St. Thomas believes that there is no indeterminism whatsoever in the behavior of a dog. If a dog is in the same situation, he believes, it will do exactly the same thing. In any case, Chastek does not address this, so I will not try at this time to establish the fact of St. Thomas’s position.

The main point is that, as we have already shown, the reason it is impossible to predict what the dog will do has nothing to do with indeterminism, since such prediction is impossible even if the dog is infallible, and remains impossible even if the dog has a deterministic error rate.

The comment, “But space is opened for one with the allowing of the other, since on either account nature has to allow for teleology,” may indicate why Chastek is so insistent in his error: in his opinion, if nature is deterministic, teleology is impossible. This is a mistake much like Robin Hanson’s mistake explained in the previous post. But again I will leave this for later consideration.

I will address one last comment:

I agree the physical determinist’s equation can’t be satisfied for all values, and that what makes it possible is the presence of a sort of recursion. But in the context of the experiment this means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition, but I see no reason why this would be the case. Even if I granted their claim that there was some recursive contradiction, it does not arise merely because the letter is given in advance, since the LD could print out the letter in advance just fine if the initial conditions were, say, a test particle flying through empty space toward button A with enough force to push it.

It is true that the contradiction does not arise just because the Laplacian demon writes down the letter. There is no contradiction even in the human case, if the demon does not show it to the human. Nor does anything contrary to our experience happen in such a case. The case which is contrary to our experience is when the demon shows the letter to the person; and this is indeed impossible on account of a recursive contradiction, not because the demon is impossible.

Consider the case of the test particle flying towards button A: it is not a problem for the demon to write down the outcome precisely because what is written has no particular influence, in this case, on the outcome.

But if “writing the letter” means covering the button, as in our example of covering the spot where the ball will hit, then the demon will not be able to write the outcome in advance. And obviously this will not mean there is any indeterminism.

The contradiction comes about because covering the button prevents the button from being pushed. And the contradiction comes about in the human case in exactly the same way: writing a letter causes, via the human’s intention to follow the instructions, the opposite outcome. Again indeterminism has nothing to do with this: the same thing will happen if the human is infallible, or if the human has an error rate which has deterministic causes.

“This means that the letter on a sheet of paper together with a snapshot of the rest of the universe can never be an initial condition.” No, it means that in some of the cases, namely those where the human will be successful in following instructions, the letter with the rest of the universe cannot be an initial condition where the outcome is the same as what is written. While there should be no need to repeat the reasons for this at this point, the reason is that “what is written” is a cause of the opposite outcome, and whether that causality is deterministic or indeterministic has nothing to do with the impossibility. The letter can indeed be an initial condition: but it is an initial condition where the outcome is the opposite of the letter, and the demon knows all this.