Employer and Employee Model: Truth

In the remote past, I suggested that I would someday follow up on this post. In the current post, I begin to keep that promise.

We can ask about the relationship of the various members of our company with the search for truth.

The CEO, as the predictive engine, has a fairly strong interest in truth, but only insofar as truth is frequently necessary in order to get predictive accuracy. Consequently our CEO will usually insist on the truth when it affects our expectations regarding daily life, but it will care less when we consider things remote from the senses. Additionally, the CEO is highly interested in predicting the behavior of the Employee, and it is not uncommon for falsehood to be better than truth for this purpose.

To put this in another way, the CEO’s interest in truth is instrumental: it is sometimes useful for the CEO’s true goal, predictive accuracy, but not always, and in some cases it can even be detrimental.

As I said here, the Employee is, roughly speaking, the human person as we usually think of one, and consequently the Employee has the same interest in truth that we do. I personally consider truth to be an ultimate end, and this is probably the opinion of most people, to a greater or lesser degree. In other words, most people consider truth a good thing, even apart from instrumental considerations. Nonetheless, all of us care about various things besides truth, and therefore we also occasionally trade truth for other things.

The Vice President has perhaps the least interest in truth. We could say that it too has some instrumental concern about truth. Thus for example the VP desires food, and this instrumentally requires true ideas about where food is to be found. Nonetheless, as I said in the original post, the VP is the least rational and coherent, and may easily fail to notice such a need. Thus the VP might desire the status resulting from winning an argument, so to speak, but also desire the similar status that results from ridiculing the person holding an opposing view. The frequent result is that a person believes the falsehood that ridiculing an opponent generally increases the chance that the opponent will change their mind (e.g., see John Loftus’s attempt to justify ridicule).

Given this account, we can raise several disturbing questions.

First, although we have said the Employee values truth in itself, can this really be true, rather than simply a mistaken belief on the part of the Employee? As I suggested in the original account, the Employee is in some way a consequence of the CEO and the VP. Consequently, if neither of these places intrinsic value on truth, how is it possible that the Employee does?

Second, even if the Employee sincerely places an intrinsic value on truth, how is this not a misplaced value? Again, if the Employee is something like a result of the others, what is good for the Employee should be what is good for the others, and thus if truth is not intrinsically good for the others, it should not be intrinsically good for the Employee.

In response to the first question, the Employee can indeed believe in the intrinsic value of truth, and of many other things to which the CEO and VP do not assign intrinsic value. This happens because, in the model as we are considering it, there is a real division of labor, even if the Employee arises historically in a secondary manner. As I said in the other post, the Employee’s beliefs are our beliefs, and the Employee can believe anything that we believe. Furthermore, the Employee can really act on such beliefs about the goodness of truth or other things, even when the CEO and VP do not have the same values. The reason for this is the same as the reason that the CEO will often go along with the desires of the VP, even though the CEO places intrinsic value only on predictive accuracy. The linked post explains, in effect, why the CEO goes along with sex, even though only the VP really wants it. In a similar way, if the Employee believes that sex outside of marriage is immoral, the CEO often goes along with avoiding such sex, even though the CEO cares about predictive accuracy, not about sex or its avoidance.

Of course, in this particular case, there is a good chance of conflict between the Employee and VP, and the CEO dislikes conflict, since it makes it harder to predict what the person overall will end up doing. And since the VP very rarely changes its mind in this case, the CEO will often end up encouraging the Employee to change their mind about the morality of such sex: thus one of the most frequent reasons why people abandon their religion is that it says that sex in some situations is wrong, but they still desire sex in those situations.

In response to the second question, the Employee is not wrong to suppose that truth is intrinsically valuable. The argument against this would be that the human good is based on human flourishing, and (it is claimed) we do not need truth for such flourishing, since the CEO and VP do not care about truth in itself. The problem with this is that such flourishing requires that the Employee care about truth, and even the CEO needs the Employee to care in this way, for the sake of its own goal of predictive accuracy. Consider a real-life company: the employer does not necessarily care about whether the employee is being paid, considered in itself, but only insofar as it is instrumentally useful for convincing the employee to work for the employer. But the employer does care about whether the employee cares about being paid: if the employee does not care about being paid, they will not work for the employer.

Concern for truth in itself, apart from predictive accuracy, affects us when we consider things that cannot possibly affect our future experience: thus I have previously discussed the likelihood that there are stars and planets outside the boundaries of the visible universe. This is probably true; but if I did not care about truth in itself, I might as well say that the universe is surrounded by purple elephants. I do not expect any experience to verify or falsify the claim, so why not make it? But now notice the problem for the CEO: the CEO needs to predict what the Employee is going to do, including what they will say and believe. This will instantly become extremely difficult if the Employee decides that they can say and believe whatever they like, without regard for truth, whenever the claim will not affect their experiences. So for its own goal of predictive accuracy, the CEO needs the Employee to value truth in itself, just as an ordinary employer needs their employee to value their salary.

In real life this situation can cause problems. The employer needs their employee to care about being paid, but if they care too much, they may constantly be asking for raises, or they may quit and go work for someone who will pay more. The employer does not necessarily like these situations. In a similar way, the CEO in our company may worry if the Employee insists too much on absolute truth, because, as discussed elsewhere, such insistence can lead to unpredictable behavior from the Employee, or to situations where there is a great deal of uncertainty about how society will respond to the Employee’s behavior.

Overall, this post perhaps does not say much in substance that we have not said elsewhere, but it may provide an additional perspective on these matters.

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the follow-up posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “dark room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all three of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And why do the CEO and the Vice President stand in this relationship to each other? Let us start with this second point, before we discuss the Employee.

We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. Once they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real-life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today?”, but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point,

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.
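As an aside, the point about consistency can be made precise: a set of strict preferences can be represented by a utility function only if it contains no cycle. Here is a minimal illustration in Python (my own sketch, not part of the model; the function name is made up) that checks whether a list of strict preferences admits a consistent ranking, and shows that a cycle of preferring A to B, B to C, and C to A does not:

```python
def has_consistent_ranking(prefs):
    """True iff the strict preferences (better, worse) contain no cycle,
    so that some utility u with u(better) > u(worse) can exist."""
    options = {x for pair in prefs for x in pair}
    remaining = set(options)
    while remaining:
        # Find an option that no remaining option is preferred to.
        top = next((o for o in remaining
                    if not any(b in remaining and w == o for b, w in prefs)),
                   None)
        if top is None:        # every option is dominated: a preference cycle
            return False
        remaining.remove(top)  # rank it above the rest and continue
    return True

print(has_consistent_ranking([("A", "B"), ("B", "C")]))              # True
print(has_consistent_ranking([("A", "B"), ("B", "C"), ("C", "A")]))  # False
```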

Consider this real-life example from a relationship between an employer and an employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real-life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either the VP would have objected to being fat, or it would have objected to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

How Sex Minimizes Uncertainty

This is in response to an issue raised by Scott Alexander on his Tumblr.

I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.

Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.

Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.

Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time (90% + 75% × 10% = 97.5%), and in the remaining 2.5% of the time it will be blown higher in the air.

Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases (10% + 75% × 90% = 77.5%), and in 22.5% of cases it will fall downwards.

97.5% accuracy is less uncertain than 77.5% accuracy, so the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
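For those who want to check the arithmetic, here is a short Python sketch (mine, not part of the original argument; the names are made up) that computes the exact accuracies and confirms them by simulation:

```python
import random

P_FALL_DEAD = 0.9   # the dead particle falls 90% of the time
P_OVERRIDE = 0.75   # a prediction opposing the dead motion wins 75% of the time

def exact_accuracy(predict_fall):
    """Exact probability that the living particle's prediction comes true."""
    p_dead_agrees = P_FALL_DEAD if predict_fall else 1 - P_FALL_DEAD
    # If the dead motion already agrees with the prediction, the prediction
    # always comes true; otherwise it comes true only when the override succeeds.
    return p_dead_agrees + (1 - p_dead_agrees) * P_OVERRIDE

def simulated_accuracy(predict_fall, trials=100_000):
    """Monte Carlo check of the same quantity."""
    hits = 0
    for _ in range(trials):
        dead_falls = random.random() < P_FALL_DEAD
        if dead_falls == predict_fall:
            hits += 1                          # dead motion matches prediction
        elif random.random() < P_OVERRIDE:
            hits += 1                          # prediction overrides dead motion
    return hits / trials

print(exact_accuracy(True))   # 0.975 -> predicting "fall" is right 97.5% of the time
print(exact_accuracy(False))  # 0.775 -> predicting "rise" is right 77.5% of the time
```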

The application to sex and hunger and so on should be evident.