Rao’s Divergentism

The main point of this post is to encourage the reader who has not yet done so to read Venkatesh Rao’s essay Can You Hear Me Now. I will not say too much about it; the purpose is partly for future reference, and partly to point out a connection with some current topics here.

Rao begins:

The fundamental question of life, the universe and everything is the one popularized by the Verizon guy in the ad: Can you hear me now?

This conclusion grew out of a conversation I had about a year ago, with some friends, in which I proposed a modest-little philosophy I dubbed divergentism. Here is a picture.

https://206hwf3fj4w52u3br03fi242-wpengine.netdna-ssl.com/wp-content/uploads/2015/12/divergentism.jpg

Divergentism is the idea that as individuals grow out into the universe, they diverge from each other in thought-space. This, I argued, is true even if in absolute terms, the sum of shared beliefs is steadily increasing. Because the sum of beliefs that are not shared increases even faster on average. Unfortunately, you are unique, just like everybody else.

If you are a divergentist, you believe that as you age, the average answer to the fundamental Verizon question slowly drifts from yes, to no, to silence. If you’re unlucky, you’re a hedgehog and get unhappier and unhappier about this as you age. If you are lucky, you’re a fox and you increasingly make your peace with this condition. If you’re really lucky, you die too early to notice the slowly descending silence, before it even becomes necessary to Google the phrase existential horror.

To me, this seemed like a completely obvious idea. Much to my delight, most people I ran it by immediately hated it.

The entire essay is worth reading.

I would question whether this is really the “fundamental question of life, the universe, and everything,” but Rao has a point. People do tend to think of their life as meaningful on account of social connections, and if those social connections grow increasingly weaker, they will tend to worry that their life is becoming less meaningful.

The point about the intellectual life of an individual is largely true. This is connected to what I said about the philosophical progress of an individual some days ago. There is also a connection with Kuhn’s idea of how the progress of the sciences causes a gulf to arise between them, in such a way that it becomes more and more difficult for scientists in different fields to communicate with one another. If we look at the overall intellectual life of an individual as a sort of science advancing within that individual, the “sciences” of different individuals will, generally speaking, tend to diverge from one another, allowing less and less communication. This is not about people making mistakes, although obviously making mistakes will contribute to this process. As Rao says, it may be that “the sum of shared beliefs is steadily increasing,” but this will not prevent their intellectual lives overall from diverging, just as the divergence of the sciences does not result from falsity, but from increasingly detailed focus on different truths.

Words, Meaning, and Formal Copies

There is a quick way to respond to the implicit questions at the end of the last post. I noted in an earlier discussion of form that form is not only copied into the mind; it is also copied into language itself. Any time you describe something in words, you are to some degree copying its form into your description.

This implies that Aristotle’s objection that a mind using an organ would not be able to know all things could equally be made against the possibility of describing all things in words. There simply are not enough combinations of words to relate them to all possible combinations of things; thus, just as a black and white image cannot imitate every aspect of a colored scene, so words cannot possibly describe every aspect of reality.

Two things are evident from this comparison:

First, the objection fails overall. There is nothing that cannot be described in words, because words are flexible. If we don’t have a word for something, then we can make up a name. Similarly, the meaning of a single word depends on context. The word “this” can refer to pretty much anything, depending on the context in which it is used. Likewise, meaning can be affected by the particular situation of the person using the word, or by broader cultural contexts, and so on.

Second, there is some truth in the objection. It is indeed impossible to describe every aspect of reality at the same time and in complete detail, and the objection gives a very good reason for this: there are simply not enough linguistic combinations to represent all possible combinations of things. The fact that language is not prime matter does mean that language cannot express every detail of reality at once: the determination that is already there does exclude this possibility. But the flexibility of language prevents there from being any particular aspect of things that cannot be described.
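One way to make this counting point precise is the following sketch, which is my own and which assumes, beyond anything stated in the objection itself, that reality has at least countably infinitely many distinguishable aspects. Descriptions are finite strings built from a finite vocabulary, while the combinations of aspects to be described outrun them:

```latex
% Descriptions: finite strings over a finite vocabulary V.
\[
  |V^{*}| \;=\; \Big|\,\bigcup_{n \ge 0} V^{n}\Big| \;=\; \aleph_0
\]
% Combinations of things: subsets of even a countably infinite set A of aspects.
\[
  |\mathcal{P}(A)| \;=\; 2^{\aleph_0} \;>\; \aleph_0
\]
```

Since there are strictly more combinations than descriptions, no assignment of descriptions to combinations can cover them all at once; but nothing in the counting prevents any particular aspect from being singled out and named.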

My claim about the mind is the same. There is nothing that cannot be understood by the mind, despite the fact that the mind uses the brain, because the relationship between the brain, mind, and world is a flexible one. Just as the word “this” can refer to pretty much anything, so also the corresponding thought. But on the other hand, the limitations of the brain do mean that a perfectly detailed knowledge of everything is excluded.

Our Interlocutor Insists

In a sense, the above account is sufficient to respond to the objection. There does not seem to be a reason to hold Aristotle’s account of the immateriality of the mind, unless there is also a reason to hold that language cannot be used to describe some things, and this does not seem like a reasonable position. Nonetheless, this response will give rise to a new and more detailed objection.

A black and white scene, it will be said, really and truly copies some aspects of a colored scene, and fails to copy others. Thus right angles in the black and white scene may be identical to right angles in the colored scene. The angles are really copied, while the colors are not. But language seems different: since it is conventional, it does not really copy anything. We just pretend, as it were, that we are copying the thing. “Let the word ‘cat’ stand for a cat,” we say, but there is nothing catlike about the word in reality. The form of the cat is not really copied into the word, or so it will be argued. And since we are not really copying anything, this is why language has the flexibility to be able to describe all things. The meaning of thoughts, however, is presumably not conventional. So it seems that we need to copy things in a real way into the mind, the way we copy aspects of a colored scene into a black and white image. And thus, meaning in the mind should not be flexible in this way, and a particular material medium (such as the brain) would still impede knowing all things, the way the black and white image excludes color.

Formal Copies

The above objection is similar to Hilary Lawson’s argument that words cannot really refer to things. In the post linked above on form and reality, we quoted his argument that cause and effect do not have anything in common. I will reproduce that argument here; for the purpose of the following discussion it might be useful to the reader to refer to the remainder of that post.

For a system of closure to provide a means of intervention in openness and thus to function as a closure machine, it requires a means of converting the flux of openness into an array of particularities. This initial layer of closure will be identified as ‘preliminary closure’. As with closure generally, preliminary closure consists in the realisation of particularity as a consequence of holding that which is different as the same. This is achieved through the realisation of material in response to openness. The most minimal example of a system of closure consists of a single preliminary closure. Such a system requires two discrete states, or at least states that can be held as if they were discrete. It is not difficult to provide mechanical examples of such systems which allow for a single preliminary closure. A mousetrap for example, can be regarded as having two discrete states: it is either set, it is ready, or it has sprung, it has gone off. Many different causes may have led to it being in one state or another: it may have been sprung by a mouse, but it could also have been knocked by someone or something, or someone could have deliberately set it off. In the context of the mechanism all of these variations are of no consequence, it is either set or it has sprung. The diversity of the immediate environment is thereby reduced to single state and its absence: it is either set or it is not set. Any mechanical arrangement that enables a system to alternate between two or more discrete states is thereby capable of providing the basis for preliminary closure. For example, a bell or a gate could function as the basis for preliminary closure. The bell can either ring or not ring, the gate can be closed or not closed. The bell may ring as the result of the wind, or a person or animal shaking it, but the cause of the response is in the context of system of no consequence. The bell either rings or it doesn’t. Similarly, the gate may be in one state or another because it has been deliberately moved, or because something or someone has dislodged it accidentally, but these variations are not relevant in the context of the state of system, which in this case is the position of the gate. In either case the cause of the bell ringing or the gate closing is infinitely varied, but in the context of the system the variety of inputs is not accessible to the system and thus of no consequence.

Lawson’s basic argument is that any particular effect could result from any of an infinite number of different causes, and the cause and effect might be entirely different: the effect might be the ringing of a bell, but the cause was not bell-like at all, and did not have a ringing sound. So the effect, he says, tells you nothing at all about the cause. In a similar way, he claims, our thoughts cause our words, but our words and our thoughts have nothing in common, and thus our words tell us nothing about our thoughts; and in that sense they do not refer to anything, not even to our thoughts. Likewise, he says, the world causes our thoughts, but since the cause and effect have nothing in common, our thoughts tell us nothing about the world, and do not even refer to it.

As I responded at the time, this account is mistaken from the very first step. Cause and effect always have something in common, namely the cause-effect relationship, although they each have different ends of that relationship. They will also have other things in common depending on the particular nature of the cause and effect in question. Similarly, the causes that are supposedly utterly diverse, in Lawson’s account, have something in common themselves: every situation that rings the bell has “aptness to ring the bell” in common. And when the bell is rung, it “refers” to these situations by the implication that we are in a situation that has aptness to ring the bell, rather than in one of the other situations.
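To put the same response in a toy form (an illustration of my own, not anything taken from Lawson), even granting that the bell’s state is a many-to-one reduction of its possible causes, the ringing still carries real information about the cause, namely that it belongs to the class of situations apt to ring the bell:

```python
# Toy illustration (my own, not Lawson's): a "preliminary closure" as a
# many-to-one map from diverse causes to a binary state, and what that
# binary state nonetheless tells us about the cause.

causes = ["mouse", "gust of wind", "child shaking it", "cat brushing past", "nothing at all"]

def rings_bell(cause: str) -> bool:
    # Many different causes collapse into one and the same effect.
    return cause != "nothing at all"

# What the effect "refers" to: the class of causes apt to produce it.
apt_to_ring = [c for c in causes if rings_bell(c)]

# Hearing the bell does not tell us which of these causes occurred, but it
# does tell us that the actual situation is one of them rather than one of
# the situations that leave the bell silent.
print(apt_to_ring)
print("nothing at all" in apt_to_ring)  # False: the ringing excludes this case
```

The point of the sketch is only that the many-to-one character of the closure does not abolish the relationship between effect and cause; it merely makes that relationship one-to-many.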

It is not accidental here that “refer” and “relate” are taken from forms of the same verb. Lawson’s claim that words do not “refer” to things is basically the same as the claim that they are not really related to things. And the real problem is that he is looking at matter (in this case the bell) without considering form (in this case the bell’s relationship with the world).

In a similar way, to say that the word “cat” is not catlike is to look at the sound or at the text as matter, without considering its form, namely the relationship it has with the surrounding context which causes that word to be used. But that relationship is real; the fact that the word is conventional does not prevent it from being true that human experience of cats is the cause of thoughts of cats, and that thoughts of cats are concretely the cause of the usage of the word “cat,” even if they could in some other situation have caused some other word to be used.

I argued in the post on the nature of form (following the one with the discussion of Lawson) that form is a network of relationships apt to make something one. Insofar as an effect really receives form from a cause in the above way, words really receive meaning from the context that gives rise to their use. And in this way, it is not true that form in language is unlike form in a black and white scene, such that one could say that form in the scene is “real” and form in language is not. Both are real.

Thus the objection fails. Nonetheless, it is true that it is easier to see why it is possible to describe anything in words than it is to see why anything can be known. And this is simply because “anything is describable in words” is true precisely because “anything can be known” is true. So the fact that anything can be known is the more remote cause, and thus harder to know.

 

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the followup posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what is responsible for the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. After they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means would be good ones to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today?”, but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point,

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.

Consider this real life example from a relationship between an employer and employee:

 

Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?

 

In a similar way, this sort of scenario is common in our model:

 

Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.

 

In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either they would have objected to being fat, or they would have objected to not eating.
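A minimal sketch of this proposal-and-veto structure, with names and rules that are made up for the illustration rather than drawn from the predictive processing literature, may make the source of the frustration easier to see: no plan the Employee can formulate escapes every objection of the Vice President.

```python
# Toy sketch (my own illustration) of the proposal/veto structure described
# above. The rules are hypothetical; the point is only that no consistent
# plan satisfies the Vice President's inconsistent goals.

def vice_president_objects(situation: str) -> bool:
    # The VP's goals do not form a coherent set: it objects both to being
    # fat and to the means (skipping meals) required by any plan to stop
    # being fat.
    return situation in {"we are fat", "we are skipping a meal"}

def ceo_accepts(plan: list[str]) -> bool:
    # The CEO goes along with a plan only so long as the VP raises no
    # objection while the plan is being carried out.
    return not any(vice_president_objects(step) for step in plan)

diet_plan = ["we are skipping a meal", "we are losing weight"]
no_diet_plan = ["we are eating as usual", "we are fat"]

print(ceo_accepts(diet_plan))     # False: the VP objects to skipping a meal
print(ceo_accepts(no_diet_plan))  # False: the VP objects to being fat
# There is no plan the Employee could propose that avoids both objections.
```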

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.
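As a toy illustration of this point, with numbers made up for the example, “pretty tall” can be treated as an implicit range of expected heights, and the later verdict as a function of whether the observed height surprises me on the low side:

```python
# Toy illustration (my own, with made-up numbers): "pretty tall" taken as an
# implicit expectation about height, and the later verdict determined by
# whether the observed height falls surprisingly short of that expectation.

EXPECTED_MINIMUM_CM = 180   # roughly the least height "pretty tall" led me to expect
SURPRISE_MARGIN_CM = 5      # how far below that the man must fall for me to be surprised

def verdict(observed_cm: float) -> str:
    if observed_cm < EXPECTED_MINIMUM_CM - SURPRISE_MARGIN_CM:
        return "I was wrong"   # surprised at how short he is
    if observed_cm >= EXPECTED_MINIMUM_CM:
        return "I was right"   # not surprised, or surprised at how tall he is
    return "hard to say"       # borderline case

print(verdict(170))  # I was wrong
print(verdict(178))  # hard to say
print(verdict(195))  # I was right
```

Nothing hangs on the particular numbers; the sketch only records the asymmetry that surprise in the downward direction falsifies the statement, while the absence of surprise, or surprise in the upward direction, confirms it.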

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example, may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example access only to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
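A small sketch of the two ways of self-prediction, again a toy illustration of my own rather than a claim about how the mind actually computes anything: the first predicts by past frequency, the second by an inferred goal, but both are answers to the question of what I am going to do.

```python
# Toy illustration (my own) of the two ways a mind might predict its own choice.

from collections import Counter

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]

# Made-up "pleasantness" scores, standing in for an inferred final cause.
pleasantness = {"vanilla": 0.8, "chocolate": 0.6}

def predict_by_efficient_causes(history: list[str]) -> str:
    # Induction from past behavior: I almost always chose vanilla,
    # so I am likely to choose vanilla this time too.
    return Counter(history).most_common(1)[0][0]

def predict_by_final_causes(options: list[str]) -> str:
    # Inferring a goal from past behavior (seeking the pleasant taste)
    # and predicting the option that best serves it.
    return max(options, key=lambda option: pleasantness[option])

print(predict_by_efficient_causes(past_choices))          # vanilla
print(predict_by_final_causes(["vanilla", "chocolate"]))  # vanilla
# In the second case, predicting what I will do coincides with seeking the
# pleasant taste: the guess and the goal have become the same thing.
```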

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible, and thus of making the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.