Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what a mind in general needs in order to have this sense.
Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”
The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.
Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably mistaken: a chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, for the sense of free will to develop, the agent needs enough access to the world to learn about itself and its own effects on that world. The sense cannot develop in a situation of limited access to reality, such as access restricted to a game board, regardless of how good the agent is at the game.
In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.
Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”
The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.
Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.
First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.
Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
This explains why people feel a need for meaning, that is, for an understanding of their purpose in life, and why they prefer to think of their lives according to a narrative. These two things are distinct but related: both are ways of making our own actions more intelligible, and both thereby make the mind’s task easier, since we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, contains some truth.
The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.
3 thoughts on “Predictive Processing and Free Will”
I find this whole discussion rather absurd. That which is more clear is not better explained by reference to that which is less clear. I find that I perceive my own free will perfectly clearly and distinctly in sui generis fashion. The predictive processing model of cognition, not so much.
Anyway, doesn’t this theory just replace appeal to our intuitive, mentalized understanding of volition with appeal to our intuitive, mentalized understanding of guessing? The reduction of willing to prediction is still a reduction of one inextricably mental phenomenon to another inextricably mental phenomenon. But in any case, you still haven’t even eliminated volition there, because in order to guess, we must *will* to guess. Any *act* of a mind, as opposed to purely passive receptivity, involves and requires volition. Will always sneaks in the back door again, insofar as we want the mind to be an agent at all, to do anything of itself at all.
” you still haven’t even eliminated volition there”
Stop right there. This post was explaining free will, not eliminating it.
“in order to guess, we must *will* to guess.”
Incorrect. If I throw a rock at your head (and aim well) you will guess it will hit you unless you move, without willing to do so. This is why, in fact, you will also move your head, without willing to do so.
It is possible to will to guess, but not all guesses have to be willed. This does not put any obstacle in the way of explaining willing in terms of guessing, because you can also will to will, yet not all willing has to be willed, in exactly the same way. Some willing is willed, and some willing is just willing, without having been willed first. Likewise, some guessing is willed, and some guessing comes first, without any willing to guess.
Yes, not all guessing has to be willed, but in that case it’s not really an expression of agency. “I” don’t really do anything, at least not in the typical case, when I see a rock flying at my head and jerk out of the way. It’s just a reflex. If you want a mind to be an agent, then I think that there has to be some volition there – I don’t know how you could call something an agent to which things simply happen, without its ever willing anything. But I don’t see how free will plays any role of itself in your account: it just emerges as an epiphenomenon of intertemporal inconsistency among preferences, given a certain way of solving the problem of how to guess about what one will do.
So what is left over, on your account, of free will as such? You say “Our model of the mind as an embodied predictive engine explains why people have a sense of free will,” but “having a sense of X” and “having X” are not the same thing. Explaining why we have a “sense” of something is perfectly compatible with eliminating any explanatory role for or ontological commitment to the thing itself; cf. the Churchlands on folk theories of consciousness.