How to Build an Artificial Human

I was going to use “Artificial Intelligence” in the title here but realized after thinking about it that the idea is really more specific than that.

I came up with the idea here while thinking more about the problem I raised in an earlier post about a serious obstacle to creating an AI. As I said there:

Current AI systems are not universal, and clearly have no ability whatsoever to become universal, without first undergoing deep changes in those systems, changes that would have to be initiated by human beings. What is missing?

The problem is the training data. The process of evolution produced the general ability to learn by using the world itself as the training data. In contrast, our AI systems take a very small subset of the world (like a large set of Go games or a large set of internet text), and train a learning system on that subset. Why take a subset? Because the world is too large to fit into a computer, especially if that computer is a small part of the world.

This suggests that going from the current situation to “artificial but real” intelligence is not merely a question of making things better and better little by little. There is a more fundamental problem that would have to be overcome, and it won’t be overcome simply by larger training sets, by faster computing, and things of this kind. This does not mean that the problem is impossible, but it may turn out to be much more difficult than people expected. For example, if there is no direct solution, people might try to create Robin Hanson’s “ems”, where one would more or less copy the learning achieved by natural selection. Or even if that is not done directly, a better understanding of what it means to “know how to learn,” might lead to a solution, although probably one that would not depend on training a model on massive amounts of data.

Proposed Predictive Model

Perhaps I was mistaken in saying that “larger training sets” would not be enough, at any rate enough to get past this basic obstacle. Perhaps it is enough to choose the subset correctly… namely by choosing the subset of the world that we know to contain general intelligence. Instead of training our predictive model on millions of Go games or millions of words, we will train it on millions of human lives.

This project will be extremely expensive. We might need to hire 10 million people to rigorously lifelog for the next 10 years. This has to be done with as much detail as possible; in particular we would want them recording constant audio and visual streams, along with as much else as possible. If we pay our crew an annual salary of $75,000, this comes to $7.5 trillion over the decade; there will be some small additions for equipment and maintenance, but these will be very small compared to the salary costs.
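The salary arithmetic is easy to verify. A minimal sketch, using the headcount, salary, and duration figures assumed above (they are this proposal's own round numbers, not established estimates):

```python
# Back-of-the-envelope check of the lifelogging salary cost.
# All three inputs are the assumptions stated above, not real figures.
workers = 10_000_000   # people hired to lifelog
salary = 75_000        # annual salary, in dollars
years = 10             # duration of the recording project

total = workers * salary * years
print(f"${total / 1e12:.1f} trillion")  # → $7.5 trillion
```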

Presumably in order to actually build such a large model, various scaling issues would come up and need to be solved. And in principle nothing prevents these from being very hard to solve, or even impossible in practice. But since we do not know that this would happen, let us skip over this and pretend that we have succeeded in building the model. Once this is done, our model should be able to fairly easily take a point in a person’s life and give a fairly sensible continuation over at least a short period of time, just as GPT-3 can give fairly sensible continuations to portions of text.

It may be that this is enough to get past the obstacle described above, and once this is done, it might be enough to build a general intelligence using other known principles, perhaps with some research and refinement that could be done during the years in which our crew would be building their records.

Required Elements

Live learning. In the post discussing the obstacle, I noted that there are two kinds of learning, the type that comes from evolution, and the type that happens during life. Our model represents the type that comes from evolution; unlike GPT-3, which cannot learn anything new, we need our AI to remember what has actually happened during its life and to be able to use this to acquire knowledge about its particular situation. This is not difficult in theory but you would need to think carefully about how this should interact with the general model; you do not want to simply add its particular experiences as another individual example (not that such an addition to an already trained model is simple anyway.)

Causal model. Our AI needs not just a general predictive model of the world, but specifically a causal one; not just the general idea that “when you see A, you will soon see B,” but the idea that “when there is an A — which may or may not be seen — it will make a B, which you may or may not see.” This is needed for many reasons, but in particular, without such a causal model, long-term prediction or planning will be impossible. If you take a model like GPT-3 and force it to continue producing text indefinitely, it will either repeat itself or eventually go completely off topic. The same thing would happen to our human life model — if we simply used the model without any causal structure, and forced it to guess what would happen indefinitely far into the future, it would eventually produce senseless predictions.
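The repetition failure can be reproduced in miniature. The following sketch is a toy stand-in of my own (with an invented training text), not GPT-3 itself: it trains a bigram model on surface statistics and then greedily asks it to continue forever, at which point it locks into a short cycle almost immediately:

```python
from collections import Counter, defaultdict

# Toy stand-in for a purely statistical predictor: count bigram
# successors in a tiny invented text, then always continue with the
# most frequent successor (greedy decoding).
text = "the cat sat on the mat and then the cat sat on the rug".split()

successors = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    successors[a][b] += 1

def continue_greedily(word, steps):
    out = [word]
    for _ in range(steps):
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return out

# Forced to predict indefinitely, the model repeats a four-word cycle.
print(" ".join(continue_greedily("the", 11)))
# → the cat sat on the cat sat on the cat sat on
```

A causal model, by contrast, would represent the hidden state that generates the observations, rather than the surface statistics alone, and so would not be condemned to cycling through previously seen surface patterns.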

In the paper Making Sense of Raw Input, published by Google DeepMind, there is a discussion of an implementation of this sort of model, although trained on an extremely easy environment (compared to our task, which would be to train it on human lives).

The Apperception Engine attempts to discern the nomological structure that underlies the raw sensory input. In our experiments, we found the induced theory to be very accurate as a predictive model, no matter how many time steps into the future we predict. For example, in Seek Whence (Section 5.1), the theory induced in Fig. 5a allows us to predict all future time steps of the series, and the accuracy of the predictions does not decay with time.

In Sokoban (Section 5.2), the learned dynamics are not just 100% correct on all test trajectories, but they are provably 100% correct. These laws apply to all Sokoban worlds, no matter how large, and no matter how many objects. Our system is, to the best of our knowledge, the first that is able to go from raw video of non-trivial games to an explicit first-order nomological model that is provably correct.

In the noisy sequences experiments (Section 5.3), the induced theory is an accurate predictive model. In Fig. 19, for example, the induced theory allows us to predict all future time steps of the series, and does not degenerate as we go further into the future.

(6.1.2 Accuracy)

Note that this does not have the problem of quick divergence from reality as you predict into the distant future. It will also improve our AI’s live learning:

A system that can learn an accurate dynamics model from a handful of examples is extremely useful for model-based reinforcement learning. Standard model-free algorithms require millions of episodes before they can reach human performance on a range of tasks [31]. Algorithms that learn an implicit model are able to solve the same tasks in thousands of episodes [82]. But a system that learns an accurate dynamics model from a handful of examples should be able to apply that model to plan, anticipating problems in imagination rather than experiencing them in reality [83], thus opening the door to extremely sample efficient model-based reinforcement learning. We anticipate a system that can learn the dynamics of an ATARI game from a handful of trajectories, and then apply that model to plan, thus playing at reasonable human level on its very first attempt.

(6.1.3. Data efficiency)

“We anticipate”: that is, Google has not yet built such a thing, but they expect to be able to build it.

Scaling a causal model to work on our human life dataset will probably require some of the most difficult new research of this entire proposal.

Body. In order to engage in live learning, our AI needs to exist in the world in some way. And for the predictive model to do it any good, the world that it exists in needs to be a roughly human world. So there are two possibilities: either we simulate a human world in which it will possess a simulated human body, or we give it a robotic human-like body that will exist physically in the human world.

In relation to our proposal, these are not very different, but the former is probably more difficult, since we would have to simulate pretty much the entire world, and the more distant our simulation is from the actual world, the less helpful its predictive model would turn out to be.

Sensation. Our AI will need to receive input from the world through something like “senses.” These will need to correspond reasonably well with the data as provided in the model; e.g. since we expect to have audio and visual recording, our AI will need sight and hearing.

Predictive Processing. Our AI will need to function this way in order to acquire self-knowledge and free will, without which we would not consider it to possess general intelligence, however good it might be at particular tasks. In particular, at every point in time it will have predictions, based on the general human-life predictive model and on its causal model of the world, about what will happen in the near future. These predictions need to function in such a way that when it makes a relevant prediction, e.g. when it predicts that it will raise its arm, it will actually raise its arm.

(We might not want this to happen 100% of the time — if such a prediction is very far from the predictive model, we might want the predictive model to take precedence over this power over itself, much as happens with human beings.)

Thought and Internal Sensation. Our AI needs to be able to notice that when it predicts it will raise its arm, it succeeds, and it needs to learn that in these cases its prediction is the cause of raising the arm. Only in this way will its live learning produce a causal model of the world which actually has self knowledge: “When I decide to raise my arm, it happens.” This will also tell it the distinction between itself and the rest of the world; if it predicts the sun will change direction, this does not happen. In order for all this to happen, the AI needs to be able to see its own predictions, not just what happens; the predictions themselves have to become a kind of input, similar to sight and hearing.

What was this again?

If we don’t run into any new fundamental obstacle along the way (I mentioned a few points where this might happen), the above procedure might be able to actually build an artificial general intelligence at a rough cost of $10 trillion (rounded up to account for hardware, research, and so on) and a time period of 10-20 years. But I would call your attention to a couple of things:

First, this is basically an artificial human, even to the extent that the easiest implementation likely requires giving it a robotic human body. It is not more general than that, and there is little reason to believe that our AI would be much more intelligent than a normal human, or that we could easily make it more intelligent. It would be fairly easy to give it quick mental access to other things, like mathematical calculation or internet searches, but this would not be much faster than a human being with a calculator or internet access. Like with GPT-N, one factor that would tend to limit its intelligence is that its predictive model is based on the level of intelligence found in human beings; there is no reason it would predict it would behave more intelligently, and so no reason why it would.

Second, it is extremely unlikely that anyone will implement this research program anytime soon. Why? Because you don’t get anything out of it except an artificial human. We have easier and less expensive ways to make humans, and $10 trillion is around the most any country has ever spent on anything, and never deliberately on one single project. Nonetheless, if no better way to make an AI is found, one can expect that eventually something like this will be implemented; perhaps by China in the 22nd century.

Third, note that “values” did not come up in this discussion. I mentioned this in one of the earlier posts on predictive processing:

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

There was no need for an explicit discussion of values because they are an indirect consequence. What would our AI care about? It would care roughly speaking about the same things we care about, because it would predict (and act on the prediction) that it would live a life similar to a human life. There is definitely no specific reason to think it would be interested in taking over the world, although this cannot be excluded absolutely, since this is an interest that humans sometimes have. Note also that Nick Bostrom was wrong: I have just made a proposal that might actually succeed in making a human-like AI, but there is no similar proposal that would make an intelligent paperclip maximizer.

This is not to say that we should not expect any bad behavior at all from such a being; the behavior of the AI in the film Ex Machina is a plausible fictional representation of what could go wrong. Since what it is “trying” to do is to get predictive accuracy, and its predictions are based on actual human lives, it will “feel bad” about the lack of accuracy that results from the fact that it is not actually human, and it may act on those feelings.

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the followup posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what is responsible for the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. Once they are good at the game, they may die only rarely, perhaps once in many days or weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today,” but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point:

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.

Consider this real life example from a relationship between an employer and employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either they would have objected to being fat, or they would have objected to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

How Sex Minimizes Uncertainty

This is in response to an issue raised by Scott Alexander on his Tumblr.

I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.

Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.

Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.

Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time, and in the remaining 2.5% of the time it will be blown higher in the air.

Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases, and in 22.5% of cases it will fall downwards.

97.5% accuracy is less uncertain than 77.5% accuracy, so the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
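The two accuracy figures follow directly from the setup. A short sketch, using only the invented parameters of the model above (the 90/10 dead-particle split and the 75% override rate):

```python
# Reproduce the dust-particle arithmetic from the model above.
p_down_dead = 0.90   # the dead particle falls this often
p_up_dead = 0.10     # the dead particle is blown upward this often
p_override = 0.75    # a prediction opposing the dead motion prevails this often

# Predicting "down": right whenever the particle was falling anyway,
# plus the cases where the prediction overrides an upward gust.
p_correct_down = p_down_dead + p_up_dead * p_override

# Predicting "up": the mirror image.
p_correct_up = p_up_dead + p_down_dead * p_override

print(round(p_correct_down, 3), round(p_correct_up, 3))  # → 0.975 0.775
```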

The application to sex and hunger and so on should be evident.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
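The two predictive strategies just described can be put in a toy computational form. This is only an illustrative sketch of my own, not something from the post; the food options and “tastiness” numbers are invented. One predictor works by induction over past choices (the efficient-cause style), the other by assuming the choices serve an inferred goal (the final-cause style):

```python
from collections import Counter

# Toy sketch (invented illustration): two ways a predictive mind
# might guess its own next choice, as described in the text.

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla"]
tastiness = {"vanilla": 0.9, "chocolate": 0.7}  # hypothetical inferred goal

def predict_by_habit(history):
    """Efficient-cause style: induction from past frequencies."""
    return Counter(history).most_common(1)[0][0]

def predict_by_goal(options, utility):
    """Final-cause style: assume the choice serves the inferred goal."""
    return max(options, key=lambda o: utility[o])

print(predict_by_habit(past_choices))         # → vanilla
print(predict_by_goal(tastiness, tastiness))  # → vanilla
```

The two predictors agree here, but they can diverge: the habit-based guess tracks what was done before, while the goal-based guess tracks what would best serve the inferred end, which is why the second reads as goal-seeking behavior.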

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Zombies and Ignorance of the Formal Cause

Let’s look again at Robin Hanson’s account of the human mind, considered previously here.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

What would someone mean by making the original statement that “I know that physical parts interacting just aren’t the kinds of things that can feel by themselves”? If we give this a charitable interpretation, the meaning is that “a collection of physical parts” is something many, and so is not a suitable subject for predicates like “sees” and “understands.” Something that sees is something one, and something that understands is something one.

This however is not Robin’s interpretation. Instead, he understands it to mean that besides the physical parts, there has to be one additional part, namely one which is a part in the same sense of “part”, but which is not physical. And indeed, some tend to think this way. But this of course is not helpful, because the reason a collection of parts is not a suitable subject for seeing or understanding is not because those parts are physical, but because the subject is not something one. And this would remain true even if you added a non-physical part or parts. Instead, to be such a subject, a thing must be something one: a living being with the sense of sight, in order to see, or one with the power of reason, in order to understand.

What do you need in order to get one such subject from “a collection of parts”? Any additional part, physical or otherwise, will just make the collection bigger; it will not make the subject something one. It is rather the formal cause of a whole that makes the parts one, and this formal cause is not a part in the same sense. It is not yet another part, even a non-physical one.

Reading Robin’s discussion in this light, it is clear that he never even considers formal causes. He does not even ask whether there is such a thing. Rather, he speaks only of material and efficient causes, and appears to be entirely oblivious even to the idea of a formal cause. Thus when asking whether there is anything in addition to the “collection of parts,” he is asking whether there is any additional material cause. And naturally, nothing will have material causes other than the things it is made out of, since “what a thing is made out of” is the very meaning of a material cause.

Likewise, when he says, “Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?”, he shows in two ways his ignorance of formal causes. First, by talking about “feeling stuff,” which implies a kind of material cause. Second, when he says, “actual cause of humans making statements” he is evidently speaking about the efficient cause of people producing sounds or written words.

In both cases, formal causality is the relevant causality. There is no “feeling stuff” at all; rather, there are activities like seeing and understanding, which are unified actions, and they are unified by their forms. Likewise, we can consider the “humans making statements” in two ways: if we simply consider the efficient causes of the sounds, one by one, we might indeed explain them as “simple parts interacting simply.” But they are not actually mere sounds; they are meaningful, and express the intention of a subject. And they have meaning by reason of the forms of the action and of the subject.

In other words, the idea of the philosophical zombie is that the zombie is indeed producing mere sounds. It is not only that the zombie is not conscious, but rather that it really is just interacting parts, and the sounds it produces are just a collection of sounds. We don’t need, then, some complicated method to determine that we are not such zombies. We are by definition not zombies if we say, think, or understand at all.

The same ignorance of the formal cause is seen in the rest of Robin’s comments:

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Again, he is asking whether there is some additional part which has some additional efficient causality, and suggesting that this is unlikely. It is indeed unlikely, but irrelevant, because consciousness is not an additional part, but a formal way of being that a thing has. He continues:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

First, there is no “extra feeling stuff.” There is only a way of being, namely in this case being alive and conscious. Second, there is no coincidence. Robin’s supposed coincidence is that “I am conscious” is thought to mean, “I have feeling stuff,” but the feeling stuff is not the efficient cause of my saying that I have it; instead, the efficient cause is said to be simple parts interacting simply.

Again, the mistake here is simply to completely overlook the formal cause. “I am conscious” does not mean that I have any feeling stuff; it says that I am something that perceives. Of course we can modify Robin’s question: what is the efficient cause of my saying that I am conscious? Is it the fact that I actually perceive things, or is it simple parts interacting simply? But if we think of this in relation to form, it is like asking whether the properties of a square follow from squareness, or from the properties of the parts of a square. And it is perfectly obvious that the properties of a square follow both from squareness, and from the properties of the parts of a square, without any coincidence, and without interfering with one another. In the same way, the fact that I perceive things is the efficient cause of my saying that I perceive things. But the only difference between this actual situation and a philosophical zombie is one of form, not of matter; in a corresponding zombie, “simple parts interacting simply” are the cause of its producing sounds, but it neither perceives anything nor asserts that it is conscious, since its words are meaningless.

The same basic issue, namely Robin’s lack of the concept of a formal cause, is responsible for his statements about philosophical zombies:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

The state of “feeling” is not presumed to have zero causal influence on behavior. It is thought to have precisely a formal influence on behavior. That is, being conscious is why the activity of the conscious person is “saying that they feel” instead of “producing random meaningless sounds that others mistakenly interpret as meaning that they feel.”

Robin is right, however, that philosophical zombies are impossible, although not for the reasons that he supposes. The actual reason is that it is impossible for disposed matter to lack its corresponding form, and the idea of a zombie is precisely the idea of humanly disposed matter lacking human form.

Regarding his point about “info,” the possession of any information at all is already a proof that one is not a zombie. Since the zombie lacks form, any correlation between one part and another in it is essentially a random material correlation, not one that contains any information. If the correlation is noticed as having any info, then the thing noticing the information, and the information itself, are things which possess form. This argument, as far as it goes, is consistent with Robin’s claim that zombies do not make sense; they do not, but not for the reasons that he posits.

Zeal for Form, But Not According to Knowledge

Some time ago I discussed the question of whether the behavior of a whole should be predictable from the behavior of the parts, without fully resolving it. I promised at the time to revisit the question later, and this is the purpose of the present post.

In the discussion of Robin Hanson’s book Age of Em, we looked briefly at his account of the human mind. Let us look at a more extended portion of his argument about the mind:

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move in space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual in the sense that the state on all sides is influenced by states of the other sides.

For example, ordinary field theories have a limited number of fields at each point in space-time, with each field having a limited number of degrees of freedom. Each field has a few simple interactions with other fields, and with its own space-time derivatives. With limited energy, this latter effect limits how fast a field changes in space and time.

As a second example, ordinary digital electronics is made mostly of simple logic units, each with only a few inputs, a few outputs, and a few bits of internal state. Typically: two inputs, one output, and zero or one bits of state. Interactions between logic units are via simple wires that force the voltage and current to be almost the same at matching ends.

As a third example, cellular automatons are often taken as a clear simple metaphor for typical physical systems. Each such automaton has a discrete array of cells, each of which has a few possible states. At discrete time steps, the state of each cell is a simple standard function of the states of that cell and its neighbors at the last time step. The famous “game of life” uses a two dimensional array with one bit per cell.

This basic physics fact, that everything is made of simple parts interacting simply, implies that anything complex, able to represent many different possibilities, is made of many parts. And anything able to manage complex interaction relations is spread across time, constructed via many simple interactions built up over time. So if you look at a disk of a complex movie, you’ll find lots of tiny structures encoding bits. If you look at an organism that survives in a complex environment, you’ll find lots of tiny parts with many non-regular interactions.

Physicists have learned that we only ever get empirical evidence about the state of things via their interactions with other things. When such interactions make the state of one thing correlated with the state of another, we can use that correlation, together with knowledge of one state, as evidence about the other state. If a feature or state doesn’t influence any interactions with familiar things, we could drop it from our model of the world and get all the same predictions. (Though we might include it anyway for simplicity, so that similar parts have similar features and states.)

Not only do we know that in general everything is made of simple parts interacting simply, for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth. For humans and their immediate environments on Earth, we know exactly what are all the parts, what states they hold, and all of their simple interactions. Thermodynamics assures us that there can’t be a lot of hidden states around holding many bits that interact with familiar states.

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what math implies. When we can figure out quantities that are easier to calculate, as long as the parts and interactions we think are going on are in fact the only things going on, then we usually see those quantities just as calculated.
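As an aside, the “game of life” cellular automaton mentioned in the quoted passage can be sketched in a few lines. This is my own minimal illustration of what “the state of each cell is a simple standard function of the states of that cell and its neighbors” means; it is not code from either author:

```python
from collections import Counter

def step(live_cells):
    """Advance the game of life one generation.

    live_cells is a set of (x, y) coordinates. Each cell's next state is
    a simple function of its own state and its eight neighbors' states
    ("simple parts interacting simply"): a cell is alive next step if it
    has exactly three live neighbors, or two while already alive.
    """
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

Despite the simplicity of the rule, predicting the long-run behavior of large patterns is computationally hard, which illustrates Hanson’s next point: outcomes can be implied by the math while still being hard to calculate from it.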

The point of Robin’s argument is to take a particular position in regard to the question we are revisiting in this post: everything that is done by wholes is predictable from the behavior of the parts. The argument is simply a more extended form of a point I made in the earlier post, namely that there is no known case where the behavior of a whole is known not to be predictable in such a way, and many known cases where it is certainly predictable in this way.

The title of the present post of course refers us to this earlier post. In that post I discussed the tendency to set first and second causes in opposition, and noted that the resulting false dichotomy leads to two opposite mistakes, namely the denial of a first cause on one hand, and to the assertion that the first cause does or should work without secondary causes on the other.

In the same way, I say it is a false dichotomy to set the work of form in opposition to the work of matter and disposition. Rather, they produce the same thing, both according to being and according to activity, but in different respects. If this is the case, it will be necessarily true from the nature of things that the behavior of a whole is predictable from the behavior of the parts, but this will happen in a particular way.

I mentioned an example of the same false dichotomy in the post on Robin’s book. Here again is his argument:

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

I am currently awake and conscious, hearing the sounds of my keyboard as I type and the music playing in the background. Robin’s argument is something like this: why did I type the previous sentence? Is it because I am in fact awake and conscious and actually heard these sounds? If in principle it is predictable that I would have typed that, based on the simple interactions of simple parts, that seems to be an entirely different explanation. So either one might be the case or the other, but not both.

We have seen this kind of argument before. C.S. Lewis made this kind of argument when he said that thought must have reasons only, and no causes. Similarly, there is the objection to the existence of God, “But it seems that everything we see in the world can be accounted for by other principles, supposing God did not exist.” Just as in those cases we have a false dichotomy between the first cause and secondary causes, and between the final cause and efficient causes, so here we have a false dichotomy between form and matter.

Let us consider this in a simpler case. We earlier discussed the squareness of a square. Suppose someone attempted to apply Robin’s argument to squares. The equivalent argument would say this: all conclusions about squares can be proved from premises about the four lines that make it up and their relationships. So what use is this extra squareness? We might as well assume it does not exist, since it cannot explain anything.

In order to understand this one should consider why we need several kinds of cause in the first place. To assign a cause is just to give the origin of a thing in a way that explains it, while explanation has various aspects. In the linked post, we divided causes into two, namely intrinsic and extrinsic, and then divided each of these into two. But consider what would happen if we did not make the second division. In this case, there would be two causes of a thing: matter subject to form, and agent intending an end. We can see from this how the false dichotomies arise: all the causality of the end must be included in some way in the agent, since the end causes by informing the agent, and all the causality of the form must be included in some way in the matter, since the form causes by informing the matter.

In the case of the square, even the linked post noted that there was an aspect of the square that could not be derived from the properties of its parts: namely, the fact that a square is one figure, rather than simply many lines. This is the precise effect of form in general: to make a thing be what it is.

Consider Alexander Pruss’s position on artifacts. He basically asserted that artifacts do not truly exist, on the grounds that they seem to be lacking a formal cause. In this way, he says, they are just a collection of parts, just as someone might suppose that a square is just a collection of lines, and that there is no such thing as squareness. My response there was the same as my response about the square: saying that this is just a collection cannot explain why a square is one figure, nor can the same account explain the fact that artifacts do have a unity of some kind. Just as the denial of squareness would mean the denial of the existence of a unified figure, so the denial of chairness would mean the denial of the existence of chairs. Unlike Sean Carroll, Pruss seems even to recognize that this denial follows from his position, even if he is ambivalent about it at times.

Hanson’s argument about the human mind is actually rather similar to Pruss’s argument about artifacts, and to Carroll’s argument about everything. The question of whether or not the fact that I am actually conscious influences whether I say that I am, is a reference to the idea of a philosophical zombie. Robin discusses this idea more directly in another post:

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. You would have been just as likely to say it if it were not true. What could possibly be the point of hypothesizing and forming beliefs about states about which one can never get any info?

We noted the unresolved tension in Sean Carroll’s position. The eliminativists are metaphysically correct, he says, but they are mistaken to draw the conclusion that the things of our common experience do not exist. The problem is that given that he accepts the eliminativist metaphysics, he can have no justification for rejecting their conclusions. We can see the same tension in Robin Hanson’s account of consciousness and philosophical zombies. For example, why does he say that they do not “make sense,” rather than asking whether or not they can exist and why or why not?

Let us think about this in more detail. And to see more clearly the issues involved, let us consider a simpler case. Take the four chairs in Pruss’s office. Is it possible that one of them is a zombie?

What would this even mean? In the post on the relationship of form and reality, we noted that asking whether something has a form is very close to the question of whether something is real. I really have two hands, Pruss says, if my hands have forms. And likewise chairs are real chairs if they have the form of a chair, and if they do not, they are not real in the first place, as Pruss argues is the case.

The zombie question about the chair would then be this: is it possible that one of the apparent chairs, physically identical to a real chair, is yet not a real chair, while the three others are real?

We should be able to understand why someone would want to say that the question “does not make sense” here. What would it even be like for one of the chairs not to be a real chair, especially if it is posited to be identical to all of the others? In reality, though, the question does make sense, even if we answer that the thing cannot happen. In this case it might actually be more possible than in other cases, since artifacts are in part informed by human intentions. But possible or not, the question surely makes sense.

Let us consider the case of natural things. Consider the zombie oak tree: it is physically identical to an oak tree, but it is not truly alive. It appears to grow, but this is just the motion of particles. There are three positions someone could hold: no oak trees are zombie oaks, since all are truly alive and grow; all oak trees are zombies, since all are mere collections of particles; and some are alive and grow, while others are zombies, being mere collections of particles.

Note that the question does indeed make sense. It is hard to see why anyone would accept the third position, but if the first and second positions make sense, then the third does as well. It has an intelligible content, even if it is one that we have no good arguments for accepting. The argument that it does not make sense is basically the claim that the first and second positions are not distinct positions: they do not say different things, but the same thing. Thus the third would “not make sense” insofar as it assumes that the first and second positions are distinct positions.

Why would someone suppose that the first and second positions are not distinct? This is basically Sean Carroll’s position, since he tries to say both that eliminativists are correct about what exists, but incorrect in denying the existence of common sense things like oak trees. It is useful to say, “oak trees are real,” he says, and therefore we will say it, but we do not mean to say something different about reality than the eliminativists who say that “oak trees are not real but mere collections of particles.”

But this is wrong. Carroll’s position is inconsistent in virtually the most direct possible way. Either oak trees are real or they are not; and if they are real, then they are not mere collections of particles. So both the first and second positions are meaningful, and consequently also the third.

The second and third positions are false, however, and the meaningfulness of this becomes especially clear when we speak of the human case. It obviously does make sense to ask whether other human beings are conscious, and this is simply to ask whether their apparent living activities, such as speaking and thinking, are real living activities, or merely apparent ones: perhaps the thing is making sounds, but it is not truly speaking or thinking.

Let us go back to the oak tree for a moment. The zombie oak would be one that is not truly living, but its activities, apparently full of life, are actually lifeless. In order to avoid this possibility, and out of a zeal for form which is not according to knowledge, some assert that the activities of an oak cannot be understood in terms of the activities of the parts. There is a hint of this, perhaps, in this remark by James Chastek:

Consciousness is just the latest field where we are protesting that something constitutes a specific difference from some larger genus, but if it goes the way the others have gone, in fifty years no one will even remember the controversy or bother to give the fig-leaf explanations of it being emergent or reductive. No one will remember that there is a difference to explain. Did anyone notice in tenth-grade biology that life was explained entirely in terms of non-living processes? No. There was nothing to explain since nothing was noticed.

Chastek does not assert that life cannot be “explained entirely in terms of non-living processes,” in the manner of tenth-grade biology, but he perhaps would prefer that it could not be so explained. And the reason for this would be the idea that if everything the living thing does can be explained in terms of the parts, then oak trees are zombies after all.

But this idea is mistaken. Look again at the square: the parts explain everything, except the fact that the figure is one figure, and a square. The form of a square is indeed needed, precisely in order that the thing will actually be a whole and a square.

Likewise with the oak. If an oak tree is made out of parts, then since activity follows being, it should be unsurprising that in some sense its activities themselves will be made out of parts, namely the activities of its parts. But the oak is real, and its activities are real. And just as oaks really exist, so they really live and grow; but just as the living oak has parts which are not alive in themselves, such as elements, so the activity of growth contains partial activities which are not living activities in themselves. What use is the form of an oak, then? It makes the tree really an oak and really alive; and it makes its activities living activities such as growth, rather than being merely a collection of non-living activities.

We can look at human beings in the same way, but I will leave the details of this for another post, since this one is long enough already.

Supreme Good

In Chapter 4 of The Divine Names, Dionysius says:

Now if the Good is above all things (as indeed It is) Its Formless Nature produces all-form; and in It alone Not-Being is an excess of Being, and Lifelessness an excess of Life and Its Mindless state is an excess of Wisdom, and all the Attributes of the Good we express in a transcendent manner by negative images.

Now this is not especially easy to understand. But Dionysius seems to be saying that God does not possess life or mind in a literal sense, but is rather above these things, much as Plotinus held. Possibly somewhat in contrast, he seems to believe that “Good” is an especially appropriate name for God.

According to the account we have given of being and the good, this is correct. If the good is that towards which things tend, then a necessary being must above all be good, because it has such a deep tendency to be that it cannot not be. Likewise, insofar as the good is understood as a final cause of other things, and thus as an ultimate explanation, and since the first cause can have nothing else explaining its existence, it must constitute the supreme good not only in relation to itself, but in relation to all other things as well.

As the Heavens are Higher than the Earth

Job accuses God:

It is all one; therefore I say,
    he destroys both the blameless and the wicked.
When disaster brings sudden death,
    he mocks at the calamity of the innocent.
The earth is given into the hand of the wicked;
    he covers the eyes of its judges—
    if it is not he, who then is it?

Ezekiel 18 seems to say something very opposed to this:

The word of the Lord came to me: What do you mean by repeating this proverb concerning the land of Israel, “The parents have eaten sour grapes, and the children’s teeth are set on edge”? As I live, says the Lord God, this proverb shall no more be used by you in Israel. Know that all lives are mine; the life of the parent as well as the life of the child is mine: it is only the person who sins that shall die.

If a man is righteous and does what is lawful and right— if he does not eat upon the mountains or lift up his eyes to the idols of the house of Israel, does not defile his neighbor’s wife or approach a woman during her menstrual period, does not oppress anyone, but restores to the debtor his pledge, commits no robbery, gives his bread to the hungry and covers the naked with a garment, does not take advance or accrued interest, withholds his hand from iniquity, executes true justice between contending parties, follows my statutes, and is careful to observe my ordinances, acting faithfully—such a one is righteous; he shall surely live, says the Lord God.

If he has a son who is violent, a shedder of blood, who does any of these things (though his father does none of them), who eats upon the mountains, defiles his neighbor’s wife, oppresses the poor and needy, commits robbery, does not restore the pledge, lifts up his eyes to the idols, commits abomination, takes advance or accrued interest; shall he then live? He shall not. He has done all these abominable things; he shall surely die; his blood shall be upon himself.

But if this man has a son who sees all the sins that his father has done, considers, and does not do likewise, who does not eat upon the mountains or lift up his eyes to the idols of the house of Israel, does not defile his neighbor’s wife, does not wrong anyone, exacts no pledge, commits no robbery, but gives his bread to the hungry and covers the naked with a garment, withholds his hand from iniquity, takes no advance or accrued interest, observes my ordinances, and follows my statutes; he shall not die for his father’s iniquity; he shall surely live. As for his father, because he practiced extortion, robbed his brother, and did what is not good among his people, he dies for his iniquity.

Yet you say, “Why should not the son suffer for the iniquity of the father?” When the son has done what is lawful and right, and has been careful to observe all my statutes, he shall surely live. The person who sins shall die. A child shall not suffer for the iniquity of a parent, nor a parent suffer for the iniquity of a child; the righteousness of the righteous shall be his own, and the wickedness of the wicked shall be his own.

But if the wicked turn away from all their sins that they have committed and keep all my statutes and do what is lawful and right, they shall surely live; they shall not die. None of the transgressions that they have committed shall be remembered against them; for the righteousness that they have done they shall live. Have I any pleasure in the death of the wicked, says the Lord God, and not rather that they should turn from their ways and live? But when the righteous turn away from their righteousness and commit iniquity and do the same abominable things that the wicked do, shall they live? None of the righteous deeds that they have done shall be remembered; for the treachery of which they are guilty and the sin they have committed, they shall die.

Yet you say, “The way of the Lord is unfair.” Hear now, O house of Israel: Is my way unfair? Is it not your ways that are unfair? When the righteous turn away from their righteousness and commit iniquity, they shall die for it; for the iniquity that they have committed they shall die. Again, when the wicked turn away from the wickedness they have committed and do what is lawful and right, they shall save their life. Because they considered and turned away from all the transgressions that they had committed, they shall surely live; they shall not die. Yet the house of Israel says, “The way of the Lord is unfair.” O house of Israel, are my ways unfair? Is it not your ways that are unfair?

Therefore I will judge you, O house of Israel, all of you according to your ways, says the Lord God. Repent and turn from all your transgressions; otherwise iniquity will be your ruin. Cast away from you all the transgressions that you have committed against me, and get yourselves a new heart and a new spirit! Why will you die, O house of Israel? For I have no pleasure in the death of anyone, says the Lord God. Turn, then, and live.

If life and death here refer to physical life, then the passage indeed would be opposed to Job’s claims, and Job might well respond:

How often is the lamp of the wicked put out?
    How often does calamity come upon them?
    How often does God distribute pains in his anger?
How often are they like straw before the wind,
    and like chaff that the storm carries away?
You say, ‘God stores up their iniquity for their children.’
    Let it be paid back to them, so that they may know it.
Let their own eyes see their destruction,
    and let them drink of the wrath of the Almighty.
For what do they care for their household after them,
    when the number of their months is cut off?
Will any teach God knowledge,
    seeing that he judges those that are on high?
One dies in full prosperity,
    being wholly at ease and secure,
his loins full of milk
    and the marrow of his bones moist.
Another dies in bitterness of soul,
    never having tasted of good.
They lie down alike in the dust,
    and the worms cover them.

Oh, I know your thoughts,
    and your schemes to wrong me.
For you say, ‘Where is the house of the prince?
    Where is the tent in which the wicked lived?’
Have you not asked those who travel the roads,
    and do you not accept their testimony,
that the wicked are spared in the day of calamity,
    and are rescued in the day of wrath?
Who declares their way to their face,
    and who repays them for what they have done?
When they are carried to the grave,
    a watch is kept over their tomb.
The clods of the valley are sweet to them;
    everyone will follow after,
    and those who went before are innumerable.
How then will you comfort me with empty nothings?
    There is nothing left of your answers but falsehood.

But if we understand Ezekiel to refer to happiness and misery, there is surely some truth in his claims, because happiness consists in activity according to virtue. So one who lives virtuously, at least to that degree, will be happy, even if he did not always live in that manner. At the same time, there is some qualification on this, both because human life is not merely an instant but a temporal whole, and also because even if virtue is the most formal element of happiness, it is not the only thing that is relevant to it.

Job and Ezekiel’s opponents seem to agree in an important way, even if they disagree about the facts. Both seem to be saying that God’s ways are bad: either God treats the good and the evil alike, and is thus indifferent to good and evil, or he gives better things to the evil, and is thus evil himself. Or, according to Ezekiel’s opponents, he unjustly spares the lifelong wicked on account of a moment of repentance.

In the passage from Ezekiel, God responds that it is not his ways that are unjust, but their ways. In the context of the particular dispute, the implication is that people fear this account because it implies that even if you have lived a good life for many years, a single evil deed may result in your condemnation. That is only bad, God responds, if you plan to do evil, in other words if your ways are evil, not his. Isaiah says, speaking of the same thing, namely the repentance of the wicked,

For my thoughts are not your thoughts,
    nor are your ways my ways, says the Lord.
For as the heavens are higher than the earth,
    so are my ways higher than your ways
    and my thoughts than your thoughts.

As I pointed out earlier, Jesus presents Job’s characterization of God as something to be imitated:

“You have heard that it was said, ‘You shall love your neighbor and hate your enemy.’ But I say to you, Love your enemies and pray for those who persecute you, so that you may be children of your Father in heaven; for he makes his sun rise on the evil and on the good, and sends rain on the righteous and on the unrighteous. For if you love those who love you, what reward do you have? Do not even the tax collectors do the same? And if you greet only your brothers and sisters, what more are you doing than others? Do not even the Gentiles do the same? Be perfect, therefore, as your heavenly Father is perfect.

God is perfect, Jesus says, and consequently his activity is perfect towards all. And that results in apparent indifference, because it means that God treats all alike. Jesus is quite explicit that this applies to the very kinds of situations that Job and his friends are concerned with:

Or those eighteen who were killed when the tower of Siloam fell on them—do you think that they were worse offenders than all the others living in Jerusalem? No, I tell you; but unless you repent, you will all perish just as they did.”

This would be inconsistent if it meant that “unless you repent, a tower will fall on you or some similar evil,” because Jesus is saying that the victims were no different from the others. It may be that nine of the eighteen were repentant people, and the other nine wicked. Or it could be broken down in any other way. The whole point is that the virtue of the people involved was not relevant to the physical disaster. The implication is that the physical disaster should be understood as a representation of the moral disaster that necessarily overtakes anyone who does evil. And that same disaster is avoided by anyone who does good.

More importantly, however, Jesus’s understanding is that God treats all alike because of his love towards all. And this implies that even the disaster of the tower resulted from love, just as the rain and sun do in the other examples.

How can this be? This will be the topic of a later post. Of course, a reasonable inductive inference, which may or may not be mistaken, would be that it might be not only later, but much later.

Composing Elements

Suppose we have two elements, as for example water and earth (not that these are really elements). How do we make something out of the elements? We can consider two different possible ways that this could happen.

Suppose that when we combine one part water and one part earth, we get mud, and when we combine one part water and two parts earth, we get clay. Thus clay and mud are two different composite bodies that can be made from our elements.

How do we expect clay and mud to behave? We saw earlier that the nature of the physical world more or less requires the existence of mathematical laws of nature. Now we could say, “Clay and mud are made of earth and water, and we know the laws governing earth and water. So we can figure out the behavior of clay and mud using the laws governing earth and water.”

But we could also say, “Although clay and mud are made of earth and water, they are also something new. Consequently we can work out the laws governing them by experience, but we cannot expect to work them out just from the laws governing earth and water.”

These two claims are basically opposed to one another, and we should not expect that both would be true, at least in any particular instance. It might be that one is true in some cases and the other is true in some cases, or it might be that one side is always true. But in any case one will be true and not the other, in each particular situation.

Someone might argue that the first claim must be always true in principle. If water behaves one way by itself, and another way when it is combined with earth, then you haven’t sufficiently specified the behavior of water without including how it behaves when it is beside earth, or mixed with earth, or combined in whatever way. So once you have completely specified the behavior of water, you have specified how it behaves when combined with other things.

But this way of thinking is artificial. If water follows an inverse square law of gravity by itself, but an entirely different mathematical law when it is combined with earth, rather than saying that the entirely different law is a special case governing water, we should just admit that the different law is a law governing clay and mud, but not water. On the other hand, it is not unreasonable to include various potential interactions in your laws governing water, rather than only considering how water behaves in perfect isolation. Thus for example one would want to say that water suffers gravitational effects from all other bodies, rather than simply saying that water attracts itself. Nonetheless, even if the distinction is somewhat rough, there is a meaningful distinction between situations where the laws governing the elements also govern the composites, and situations where we need new laws for the composites.
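The contrast here can be made concrete with a toy numerical sketch (all masses, distances, and the constant below are hypothetical, chosen only for illustration): when a law is additive in the parts, as an inverse-square attraction is in mass, the law governing the elements already fixes the behavior of the composite by simple superposition, and no new law for the composite is needed.

```python
# Toy illustration: elements obeying an inverse-square attraction.
# The force on a composite body follows from the forces on its parts,
# so the elements' law already governs the composite.

G = 1.0  # hypothetical gravitational constant for the toy model

def force(m1, m2, r):
    """Attractive force magnitude between two point masses at distance r."""
    return G * m1 * m2 / r**2

# A "clay" body: one part water and two parts earth, treated as located
# at the same point, attracted by some other body.
water, earth = 1.0, 2.0
other_mass, r = 5.0, 10.0

# Force computed part by part...
by_parts = force(water, other_mass, r) + force(earth, other_mass, r)

# ...equals the force on the composite treated as one body.
as_whole = force(water + earth, other_mass, r)

assert abs(by_parts - as_whole) < 1e-12
```

If, on the other hand, the composite followed "an entirely different mathematical law," as imagined above, no such part-by-part computation would succeed, and we would need a separate law for clay and mud.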

In one way, the second claim is always true. It is always the case that something is true of the composite which is not true of the elements in themselves, since the composite is a whole composed of elements, while the elements in themselves are not. This is true even in artificial compositions; the parts of a bicycle are not a bicycle, but the whole is. And I can ride a bicycle, but I cannot ride the individual pieces of metal that form it. Likewise, it is evidently true of living things, which are alive, and in some cases have conscious experience, even though the individual elements do not.

In a second way, the second claim is almost always true. If we consider our laws as practical methods for predicting the behavior of a physical system, in practice we will almost always need special laws to predict the behavior of a complex composite, if only because it would be too complex and time-consuming to predict the behavior of the composite using laws governing only the parts. Thus people who wish to predict the weather use generalizations based on the experience of weather, rather than trying to predict the weather simply by considering more general laws of physics, despite believing that the weather is in fact a consequence of such general laws.

In a third way, the first claim is true at least frequently, and possibly always. If we consider the behavior of a bicycle or a computer, not with respect to general questions such as “can I ride it?” or “can it calculate the square root of two?”, but with respect to the physical movement of the parts, there are good reasons to think that the behavior of the whole can be determined from the behavior of the parts of which it is composed. For these are human inventions, and although experience is involved in such inventions, people make guesses about new behavior largely from their understanding of how the parts they plan to put together will behave. So if the whole behaved in ways which are significantly unpredictable from the behavior of the parts, we would not expect such inventions to work. Likewise, as said above, there is little reason to doubt that the weather results from general principles of physics that apply to earth, air, water, and so on.

I say “possibly always” above, because there is no case where the second claim is known to be true in this sense, and many instances, as noted, where the first is known or reasonably believed to be the case. Additionally, one can give reasons in principle for expecting the first claim to be true in this way, although this is a matter for later consideration.

An important objection to this possibility is that the fact that the second claim is always true in the first way mentioned above, seems to imply that the first claim cannot be true even in the third way, at least in some cases. In particular, the conscious behavior of living things, and especially human free will, might seem inconsistent with the idea that the physical behavior of living things is in principle predictable from laws governing their elements.