How to Build an Artificial Human

I was going to use “Artificial Intelligence” in the title here but realized after thinking about it that the idea is really more specific than that.

I came up with the idea here while thinking more about the problem I raised in an earlier post about a serious obstacle to creating an AI. As I said there:

Current AI systems are not universal, and clearly have no ability whatsoever to become universal, without first undergoing deep changes in those systems, changes that would have to be initiated by human beings. What is missing?

The problem is the training data. The process of evolution produced the general ability to learn by using the world itself as the training data. In contrast, our AI systems take a very small subset of the world (like a large set of Go games or a large set of internet text), and train a learning system on that subset. Why take a subset? Because the world is too large to fit into a computer, especially if that computer is a small part of the world.

This suggests that going from the current situation to “artificial but real” intelligence is not merely a question of making things better and better little by little. There is a more fundamental problem that would have to be overcome, and it won’t be overcome simply by larger training sets, by faster computing, and things of this kind. This does not mean that the problem is impossible, but it may turn out to be much more difficult than people expected. For example, if there is no direct solution, people might try to create Robin Hanson’s “ems”, where one would more or less copy the learning achieved by natural selection. Or even if that is not done directly, a better understanding of what it means to “know how to learn” might lead to a solution, although probably one that would not depend on training a model on massive amounts of data.

Proposed Predictive Model

Perhaps I was mistaken in saying that “larger training sets” would not be enough, at any rate enough to get past this basic obstacle. Perhaps it is enough to choose the subset correctly… namely by choosing the subset of the world that we know to contain general intelligence. Instead of training our predictive model on millions of Go games or millions of words, we will train it on millions of human lives.

This project will be extremely expensive. We might need to hire 10 million people to rigorously lifelog for the next 10 years. This has to be done in as much detail as possible; in particular we would want them recording constant audio and visual streams, along with as much else as possible. If we pay our crew an annual salary of $75,000 for this, the salaries alone will come to $7.5 trillion; there will be some additional costs for equipment and maintenance, but these will be very small compared to the salaries.
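
The arithmetic is easy to check; here it is spelled out in a few lines of Python, with the equipment and maintenance overhead as an arbitrary placeholder rather than an actual estimate:

```python
# Back-of-the-envelope check of the cost estimate above.
participants = 10_000_000        # people hired to lifelog
annual_salary = 75_000           # dollars per person per year
years = 10

salary_cost = participants * annual_salary * years
print(f"Salaries: ${salary_cost / 1e12:.1f} trillion")      # $7.5 trillion

# Equipment and maintenance overhead: a placeholder assumption (2% of salaries).
overhead_rate = 0.02
total = salary_cost * (1 + overhead_rate)
print(f"Rough total: ${total / 1e12:.2f} trillion")          # $7.65 trillion
```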

Presumably, in order to actually build such a large model, various scaling issues would come up and need to be solved. In principle nothing prevents these from being very hard to solve, or even impossible in practice. But since we do not know that this would happen, let us skip over it and pretend that we have succeeded in building the model. Once this is done, our model should be able to take any point in a person’s life and give a fairly sensible continuation over at least a short period of time, just as GPT-3 can give fairly sensible continuations to portions of text.
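
To make the analogy with GPT-3 concrete, here is a minimal sketch of what such a continuation interface might look like. Everything here is hypothetical; the class and method names simply stand in for whatever the trained model would actually be:

```python
# Hypothetical interface: a life-model that, like GPT-3 with text,
# extends a sequence of recorded moments with plausible next moments.
from typing import List

class LifeModel:
    """Stand-in for the (hypothetical) model trained on millions of human lives."""
    def predict_next(self, history: List[dict]) -> dict:
        # In reality: run the trained predictive model on the recorded stream.
        raise NotImplementedError

def continue_life(model: LifeModel, history: List[dict], steps: int) -> List[dict]:
    """Autoregressive rollout: feed each predicted moment back in as context."""
    trajectory = list(history)
    for _ in range(steps):
        trajectory.append(model.predict_next(trajectory))
    return trajectory
```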

It may be that this is enough to get past the obstacle described above, and once this is done, it might be enough to build a general intelligence using other known principles, perhaps with some research and refinement that could be done during the years in which our crew would be building their records.

Required Elements

Live learning. In the post discussing the obstacle, I noted that there are two kinds of learning: the kind that comes from evolution, and the kind that happens during life. Our model represents the kind that comes from evolution. Unlike GPT-3, which cannot learn anything new once it is trained, our AI needs to remember what has actually happened during its life and to be able to use this to acquire knowledge about its particular situation. This is not difficult in theory, but you would need to think carefully about how it should interact with the general model; you do not want to simply add the AI’s particular experiences to the training data as one more example (not that adding anything to an already trained model is simple anyway).
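
One way to picture the separation, keeping the general model fixed while the individual’s own experience accumulates separately, is something like the following sketch. The names and structures here are hypothetical stand-ins rather than a worked-out design:

```python
# Minimal sketch of "live learning" layered on a frozen general model:
# the AI's own experiences are kept in an episodic store that conditions
# predictions, rather than being folded back in as one more training example.
from typing import List

class LiveLearner:
    def __init__(self, general_model):
        self.general_model = general_model       # trained on the lifelogging corpus
        self.episodic_memory: List[dict] = []    # this individual's own history

    def observe(self, moment: dict) -> None:
        self.episodic_memory.append(moment)

    def predict_next(self) -> dict:
        # The general model supplies the "evolutionary" knowledge; the episodic
        # memory supplies knowledge of this particular situation.
        return self.general_model.predict_next(self.episodic_memory)
```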

Causal model. Our AI needs not just a general predictive model of the world, but specifically a causal one; not just the general idea that “when you see A, you will soon see B,” but the idea that “when there is an A — which may or may not be seen — it will make a B, which you may or may not see.” This is needed for many reasons, but in particular, without such a causal model, long term predictions or planning will be impossible. If you take a model like GPT-3 and force it to continue producing text indefinitely, it will either repeat itself or eventually go completely off topic. The same thing would happen to our human life model — if we simply used the model without any causal structure, and forced it to guess what would happen indefinitely far into the future, it would eventually produce senseless predictions.
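
A toy illustration of the difference, not drawn from any particular system: in the little world below, an A (seen or not) makes a B one step later (seen or not). A merely correlational model has only the incomplete observations to go on; a causal model can track the hidden A itself:

```python
import random

def step_world(state):
    """Hidden dynamics: this step's A causes next step's B, whether or not anyone sees it."""
    return {"a": random.random() < 0.3,   # a new A appears (or not) at random
            "b": state["a"]}              # B is produced by the A of the previous step

def observe(state, p_seen=0.7):
    """Observations are incomplete: each fact is seen only with some probability."""
    return {k: (v if random.random() < p_seen else None) for k, v in state.items()}

# A merely correlational model keeps statistics over observations ("after seeing A,
# I usually see B"); a causal model keeps a belief about the hidden A itself, so it
# can predict B even when A was never observed, and can be rolled forward without
# its predictions degenerating into noise.
state = {"a": False, "b": False}
for _ in range(5):
    state = step_world(state)
    print(observe(state))
```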

In the paper Making Sense of Raw Input, from Google DeepMind, there is a discussion of an implementation of this sort of model, although one trained on extremely simple environments (compared to our task, which would be to train it on human lives).

The Apperception Engine attempts to discern the nomological structure that underlies the raw sensory input. In our experiments, we found the induced theory to be very accurate as a predictive model, no matter how many time steps into the future we predict. For example, in Seek Whence (Section 5.1), the theory induced in Fig. 5a allows us to predict all future time steps of the series, and the accuracy of the predictions does not decay with time.

In Sokoban (Section 5.2), the learned dynamics are not just 100% correct on all test trajectories, but they are provably 100% correct. These laws apply to all Sokoban worlds, no matter how large, and no matter how many objects. Our system is, to the best of our knowledge, the first that is able to go from raw video of non-trivial games to an explicit first-order nomological model that is provably correct.

In the noisy sequences experiments (Section 5.3), the induced theory is an accurate predictive model. In Fig. 19, for example, the induced theory allows us to predict all future time steps of the series, and does not degenerate as we go further into the future.

(6.1.2 Accuracy)

Note that this does not have the problem of quick divergence from reality as you predict into the distant future. It will also improve our AI’s live learning:

A system that can learn an accurate dynamics model from a handful of examples is extremely useful for model-based reinforcement learning. Standard model-free algorithms require millions of episodes before they can reach human performance on a range of tasks [31]. Algorithms that learn an implicit model are able to solve the same tasks in thousands of episodes [82]. But a system that learns an accurate dynamics model from a handful of examples should be able to apply that model to plan, anticipating problems in imagination rather than experiencing them in reality [83], thus opening the door to extremely sample efficient model-based reinforcement learning. We anticipate a system that can learn the dynamics of an ATARI game from a handful of trajectories, and then apply that model to plan, thus playing at reasonable human level on its very first attempt.

(6.1.3. Data efficiency)

“We anticipate,” as in, they have not yet built such a thing, but they expect to be able to build it.

Scaling a causal model to work on our human life dataset will probably require some of the most difficult new research of this entire proposal.

Body. In order to engage in live learning, our AI needs to exist in the world in some way. And for the predictive model to do it any good, the world that it exists in needs to be a roughly human world. So there are two possibilities: either we simulate a human world in which it will possess a simulated human body, or we give it a robotic human-like body that will exist physically in the human world.

In relation to our proposal, these are not very different, but the former is probably more difficult, since we would have to simulate pretty much the entire world, and the more distant our simulation is from the actual world, the less helpful its predictive model would turn out to be.

Sensation. Our AI will need to receive input from the world through something like “senses.” These will need to correspond reasonably well with the data as provided in the model; e.g. since we expect to have audio and visual recording, our AI will need sight and hearing.

Predictive Processing. Our AI will need to function this way in order to acquire self-knowledge and free will, without which we would not consider it to possess general intelligence, however good it might be at particular tasks. In particular, at every point in time it will have predictions, based on the general human-life predictive model and on its causal model of the world, about what will happen in the near future. These predictions need to function in such a way that when it makes a relevant prediction, e.g. when it predicts that it will raise its arm, it will actually raise its arm.

(We might not want this to happen 100% of the time — if such a prediction is very far from the predictive model, we might want the predictive model to take precedence over this power over itself, much as happens with human beings.)
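
Put as a sketch, the loop might look something like this, with the plausibility check playing the role of the qualification just mentioned. The model, body, and threshold are hypothetical placeholders, not a real API:

```python
# Sketch of the predictive-processing control loop described above.
def control_step(model, body, memory, plausibility_threshold=0.05):
    prediction = model.predict_next(memory)          # e.g. "I will raise my arm"
    if prediction.get("is_own_action"):
        # The general life-model can veto self-predictions that are wildly
        # implausible, as in the qualification above.
        if model.plausibility(memory, prediction) >= plausibility_threshold:
            body.execute(prediction)
    memory.append(prediction)                        # predictions are recorded too
    return prediction
```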

Thought and Internal Sensation. Our AI needs to be able to notice that when it predicts it will raise its arm, it succeeds, and it needs to learn that in these cases its prediction is the cause of raising the arm. Only in this way will its live learning produce a causal model of the world which actually includes self-knowledge: “When I decide to raise my arm, it happens.” This will also teach it the distinction between itself and the rest of the world; if it predicts that the sun will change direction, this does not happen. In order for all this to happen, the AI needs to be able to see its own predictions, not just what happens; the predictions themselves have to become a kind of input, similar to sight and hearing.
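
A crude way to picture how recorded predictions could ground this self/world distinction: predictions about one’s own actions come true almost regardless of what is predicted, while predictions that the sun will change direction do not. This is only an illustrative sketch, not a serious proposal for how the learning would work:

```python
# Sketch: predictions are themselves recorded as input, so the live learner can
# check, for each kind of prediction, how often predicting it made it so.
from collections import defaultdict

def self_attribution_rates(records):
    """records: iterable of (kind, predicted_value, observed_value) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for kind, predicted, observed in records:
        totals[kind] += 1
        hits[kind] += (predicted == observed)
    return {kind: hits[kind] / totals[kind] for kind in totals}

# e.g. {"raise_arm": 0.98, "sun_direction": 0.0}
# -> "raise_arm" is reliably self-caused; "sun_direction" belongs to the world.
```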

What was this again?

If we don’t run into any new fundamental obstacle along the way (I mentioned a few points where this might happen), the above procedure might actually be able to build an artificial general intelligence at a rough cost of $10 trillion (rounded up to account for hardware, research, and so on) and over a time period of 10-20 years. But I would call your attention to a few things:

First, this is basically an artificial human, even to the extent that the easiest implementation likely requires giving it a robotic human body. It is not more general than that, and there is little reason to believe that our AI would be much more intelligent than a normal human, or that we could easily make it more intelligent. It would be fairly easy to give it quick mental access to other things, like mathematical calculation or internet searches, but this would not be much faster than a human being with a calculator or internet access. Like with GPT-N, one factor that would tend to limit its intelligence is that its predictive model is based on the level of intelligence found in human beings; there is no reason it would predict it would behave more intelligently, and so no reason why it would.

Second, it is extremely unlikely that anyone will implement this research program anytime soon. Why? Because you don’t get anything out of it except an artificial human. We have easier and less expensive ways to make humans, and $10 trillion is around the most any country has ever spent on anything, and never deliberately on one single project. Nonetheless, if no better way to make an AI is found, one can expect that eventually something like this will be implemented; perhaps by China in the 22nd century.

Third, note that “values” did not come up in this discussion. I mentioned this in one of the earlier posts on predictive processing:

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

There was no need for an explicit discussion of values because they are an indirect consequence. What would our AI care about? It would care roughly speaking about the same things we care about, because it would predict (and act on the prediction) that it would live a life similar to a human life. There is definitely no specific reason to think it would be interested in taking over the world, although this cannot be excluded absolutely, since this is an interest that humans sometimes have. Note also that Nick Bostrom was wrong: I have just made a proposal that might actually succeed in making a human-like AI, but there is no similar proposal that would make an intelligent paperclip maximizer.

This is not to say that we should not expect any bad behavior at all from such a being; the behavior of the AI in the film Ex Machina is a plausible fictional representation of what could go wrong. Since what it is “trying” to do is to get predictive accuracy, and its predictions are based on actual human lives, it will “feel bad” about the lack of accuracy that results from the fact that it is not actually human, and it may act on those feelings.

Mind of God

Reconciling Theism and Atheism

In his Dialogues Concerning Natural Religion, David Hume presents Philo as arguing that the disagreement between theists and atheists is merely verbal:

All men of sound reason are disgusted with verbal disputes, which abound so much in philosophical and theological inquiries; and it is found, that the only remedy for this abuse must arise from clear definitions, from the precision of those ideas which enter into any argument, and from the strict and uniform use of those terms which are employed. But there is a species of controversy, which, from the very nature of language and of human ideas, is involved in perpetual ambiguity, and can never, by any precaution or any definitions, be able to reach a reasonable certainty or precision. These are the controversies concerning the degrees of any quality or circumstance. Men may argue to all eternity, whether HANNIBAL be a great, or a very great, or a superlatively great man, what degree of beauty CLEOPATRA possessed, what epithet of praise LIVY or THUCYDIDES is entitled to, without bringing the controversy to any determination. The disputants may here agree in their sense, and differ in the terms, or vice versa; yet never be able to define their terms, so as to enter into each other’s meaning: Because the degrees of these qualities are not, like quantity or number, susceptible of any exact mensuration, which may be the standard in the controversy. That the dispute concerning Theism is of this nature, and consequently is merely verbal, or perhaps, if possible, still more incurably ambiguous, will appear upon the slightest inquiry. I ask the Theist, if he does not allow, that there is a great and immeasurable, because incomprehensible difference between the human and the divine mind: The more pious he is, the more readily will he assent to the affirmative, and the more will he be disposed to magnify the difference: He will even assert, that the difference is of a nature which cannot be too much magnified. I next turn to the Atheist, who, I assert, is only nominally so, and can never possibly be in earnest; and I ask him, whether, from the coherence and apparent sympathy in all the parts of this world, there be not a certain degree of analogy among all the operations of Nature, in every situation and in every age; whether the rotting of a turnip, the generation of an animal, and the structure of human thought, be not energies that probably bear some remote analogy to each other: It is impossible he can deny it: He will readily acknowledge it. Having obtained this concession, I push him still further in his retreat; and I ask him, if it be not probable, that the principle which first arranged, and still maintains order in this universe, bears not also some remote inconceivable analogy to the other operations of nature, and, among the rest, to the economy of human mind and thought. However reluctant, he must give his assent. Where then, cry I to both these antagonists, is the subject of your dispute? The Theist allows, that the original intelligence is very different from human reason: The Atheist allows, that the original principle of order bears some remote analogy to it. Will you quarrel, Gentlemen, about the degrees, and enter into a controversy, which admits not of any precise meaning, nor consequently of any determination? 
If you should be so obstinate, I should not be surprised to find you insensibly change sides; while the Theist, on the one hand, exaggerates the dissimilarity between the Supreme Being, and frail, imperfect, variable, fleeting, and mortal creatures; and the Atheist, on the other, magnifies the analogy among all the operations of Nature, in every period, every situation, and every position. Consider then, where the real point of controversy lies; and if you cannot lay aside your disputes, endeavour, at least, to cure yourselves of your animosity.

To what extent Hume actually agrees with this argument is not clear, and whether a dispute is verbal or real is itself, like Hume’s questions about greatness or beauty, a matter of degree. Few disagreements are entirely verbal. In any case, I largely agree with the claim that there is little real disagreement here. In response to a question on the about page of this blog, I referred to some remarks about God by Roderick Long:

Since my blog has wandered into theological territory lately, I thought it might be worth saying something about the existence of God.

When I’m asked whether I believe in God, I usually don’t know what to say – not because I’m unsure of my view, but because I’m unsure how to describe my view. But here’s a try.

I think the disagreement between theism and atheism is in a certain sense illusory – that when one tries to sort out precisely what theists are committed to and precisely what atheists are committed to, the two positions come to essentially the same thing, and their respective proponents have been fighting over two sides of the same shield.

Let’s start with the atheist. Is there any sense in which even the atheist is committed to recognising the existence of some sort of supreme, eternal, non-material reality that transcends and underlies everything else? Yes, there is: namely, the logical structure of reality itself.

Thus so long as the theist means no more than this by “God,” the theist and the atheist don’t really disagree.

Now the theist may think that by God she means something more than this. But likewise, before people knew that whales were mammals they thought that by “whale” they meant a kind of fish. What is the theist actually committed to meaning?

Well, suppose that God is not the logical structure of the universe. Then we may ask: in what relation does God stand to that structure, if not identity? There would seem to be two possibilities.

One is that God stands outside that structure, as its creator. But this “possibility” is unintelligible. Logic is a necessary condition of significant discourse; thus one cannot meaningfully speak of a being unconstrained by logic, or a time when logic’s constraints were not yet in place.

The other is that God stands within that structure, along with everything else. But this option, as Wittgenstein observed, would downgrade God to the status of being merely one object among others, one more fragment of contingency – and he would no longer be the greatest of all beings, since there would be something greater: the logical structure itself. (This may be part of what Plato meant in describing the Form of the Good as “beyond being.”)

The only viable option for the theist, then, is to identify God with the logical structure of reality. (Call this “theological logicism.”) But in that case the disagreement between the theist and the atheist dissolves.

It may be objected that the “reconciliation” I offer really favours the atheist over the theist. After all, what theist could be satisfied with a deity who is merely the logical structure of the universe? Yet in fact there is a venerable tradition of theists who proclaim precisely this. Thomas Aquinas, for example, proposed to solve the age-old questions “could God violate the laws of logic?” and “could God command something immoral?” by identifying God with Being and Goodness personified. Thus God is constrained by the laws of logic and morality, not because he is subject to them as to a higher power, but because they express his own nature, and he could not violate or alter them without ceasing to be God. Aquinas’ solution is, essentially, theological logicism; yet few would accuse Aquinas of having a watered-down or crypto-atheistic conception of deity. Why, then, shouldn’t theological logicism be acceptable to the theist?

A further objection may be raised: Aquinas of course did not stop at the identification of God with Being and Goodness, but went on to attribute to God various attributes not obviously compatible with this identification, such as personality and will. But if the logical structure of reality has personality and will, it will not be acceptable to the atheist; and if it does not have personality and will, then it will not be acceptable to the theist. So doesn’t my reconciliation collapse?

I don’t think so. After all, Aquinas always took care to insist that in attributing these qualities to God we are speaking analogically. God does not literally possess personality and will, at least if by those attributes we mean the same attributes that we humans possess; rather he possesses attributes analogous to ours. The atheist too can grant that the logical structure of reality possesses properties analogous to personality and will. It is only at the literal ascription of those attributes that the atheist must balk. No conflict here.

Yet doesn’t God, as understood by theists, have to create and sustain the universe? Perhaps so. But atheists too can grant that the existence of the universe depends on its logical structure and couldn’t exist for so much as an instant without it. So where’s the disagreement?

But doesn’t God have to be worthy of worship? Sure. But atheists, while they cannot conceive of worshipping a person, are generally much more open to the idea of worshipping a principle. Again theological logicism allows us to transcend the opposition between theists and atheists.

But what about prayer? Is the logical structure of reality something one could sensibly pray to? If so, it might seem, victory goes to the theist; and if not, to the atheist. Yet it depends what counts as prayer. Obviously it makes no sense to petition the logical structure of reality for favours; but this is not the only conception of prayer extant. In Science and Health, for example, theologian M. B. Eddy describes the activity of praying not as petitioning a principle but as applying a principle:

“Who would stand before a blackboard, and pray the principle of mathematics to solve the problem? The rule is already established, and it is our task to work out the solution. Shall we ask the divine Principle of all goodness to do His own work? His work is done, and we have only to avail ourselves of God’s rule in order to receive His blessing, which enables us to work out our own salvation.”

Is this a watered-down or “naturalistic” conception of prayer? It need hardly be so; as the founder of Christian Science, Eddy could scarcely be accused of underestimating the power of prayer! And similar conceptions of prayer are found in many eastern religions. Once again, theological logicism’s theistic credentials are as impeccable as its atheistic credentials.

Another possible objection is that whether identifying God with the logical structure of reality favours the atheist or the theist depends on how metaphysically robust a conception of “logical structure” one appeals to. If one thinks of reality’s logical structure in realist terms, as an independent reality in its own right, then the identification favours the theist; but if one instead thinks, in nominalist terms, that there’s nothing to logical structure over and above what it structures, then the identification favours the atheist.

This argument assumes, however, that the distinction between realism and nominalism is a coherent one. I’ve argued elsewhere (see here and here) that it isn’t; conceptual realism pictures logical structure as something imposed by the world on an inherently structureless mind (and so involves the incoherent notion of a structureless mind), while nominalism pictures logical structure as something imposed by the mind on an inherently structureless world (and so involves the equally incoherent notion of a structureless world). If the realism/antirealism dichotomy represents a false opposition, then the theist/atheist dichotomy does so as well. The difference between the two positions will then be only, as Wittgenstein says in another context, “one of battle cry.”

Long is trying too hard, perhaps. As I stated above, few disagreements are entirely verbal, so it would be strange to find no disagreement at all, and we could question some points here. Are atheists really open to worshiping a principle? Respecting, perhaps, but worshiping? A defender of Long, however, might say that “respect” and “worship” do not necessarily differ in any relevant way here, and that the choice of word is itself merely verbal, signifying a cultural difference: the theist uses “worship” to indicate that they belong to a religious culture, while the atheist uses “respect” to indicate that they do not. But it would not be easy to find a distinct difference in the actual meaning of the terms.

In any case, there is no need to prove that there is no difference at all, since without a doubt individual theists will disagree on various matters with individual atheists. The point made by both David Hume and Roderick Long stands at least in a general way: there is far less difference between the positions than people typically assume.

In an earlier post I discussed, among other things, whether the first cause should be called a “mind” or not, discussing St. Thomas’s position that it should be, and Plotinus’s position that it should not be. Along the lines of the argument in this post, perhaps this is really an argument about whether or not you should use a certain analogy, and the correct answer may be that it depends on your purposes.

But what if your purpose is simply to understand reality? Even if it is, it is often the case that you can understand various aspects of reality with various analogies, so this will not necessarily provide you with a definite answer. Still, someone might argue that you should not use a mental analogy with regard to the first cause because it will lead people astray. Thus, in a similar way, Richard Dawkins argued that one should not call the first cause “God” because it would mislead people:

Yes, I said, but it must have been simple and therefore, whatever else we call it, God is not an appropriate name (unless we very explicitly divest it of all the baggage that the word ‘God’ carries in the minds of most religious believers). The first cause that we seek must have been the simple basis for a self-bootstrapping crane which eventually raised the world as we know it into its present complex existence.

I will argue shortly that Dawkins was roughly speaking right about the way that the first cause works, although as I said in that earlier post, he did not have a strong argument for it other than his aesthetic sense and the kinds of explanation that he prefers. In any case, his concern with the name “God” is the “baggage” that it “carries in the minds of most religious believers.” That is, if we say, “There is a first cause, therefore God exists,” believers will assume that their concrete beliefs about God are correct.

In a similar way, someone could reasonably argue that speaking of God as a “mind” would tend to lead people into error by leading them to suppose that God would do the kinds of things that other minds, namely human ones, do. And this definitely happens. Thus for example, in his book Who Designed the Designer?, Michael Augros argues for the existence of God as a mind, and near the end of the book speculates about divine revelation:

I once heard of a certain philosopher who, on his deathbed, when asked whether he would become a Christian, admitted his belief in Aristotle’s “prime mover”, but not in Jesus Christ as the Son of God. This sort of acknowledgment of the prime mover, of some sort of god, still leaves most of our chief concerns unaddressed. Will X ever see her son again, now that the poor boy has died of cancer at age six? Will miserable and contrite Y ever be forgiven, somehow reconciled to the universe and made whole, after having killed a family while driving drunk? Will Z ever be brought to justice, having lived out his whole life laughing at the law while another person rotted in jail for the atrocities he committed? That there is a prime mover does not tell us with sufficient clarity. Even the existence of an all-powerful, all-knowing, all-good god does not enable us to fill in much detail. And so it seems reasonable to suppose that god has something more to say to us, in explicit words, and not only in the mute signs of creation. Perhaps he is waiting to talk to us, biding his time for the right moment. Perhaps he has already spoken, but we have not recognized his voice.

When we cast our eye about by the light of reason in his way, it seems there is room for faith in general, even if no particular faith can be “proved” true in precisely the same way that it can be “proved” that there is a god.

The idea is that given that God is a mind, it follows that it is fairly plausible that he would wish to speak to people. And perhaps that he would wish to establish justice through extraordinary methods, and that he might wish to raise people from the dead.

I think this is “baggage” carried over from Augros’s personal religious views. It is an anthropomorphic mistake, not merely in the sense that he does not have a good reason for such speculation, but in the sense that such a thing is demonstrably implausible. It is not that the divine motives are necessarily unknown to us, but that we can actually discover them, at least to some extent, and we will discover that they are not what he supposes.

Divine Motives

How might one know the divine motives? How does one read the mind of God?

Anything that acts at all does what it does ultimately because of what it is. This is an obvious point, like the point that the existence of something rather than nothing could not have some reason outside of being. In a similar way, “what is” is the only possible explanation for what is done, since there is nothing else there to be an explanation. And in every action, whether we are speaking of the subject in explicitly mental terms or not, we can always use the analogy of desires and goals. In the linked post, I quote St. Thomas as speaking of the human will as the “rational appetite,” and the natural tendency of other things as a “natural appetite.” If we break down the term “rational appetite,” the meaning is “the tendency to do something, because of having a reason to do it.” And this fits with my discussion of human will in various places, such as in this earlier post.

But where do those reasons come from? I gave an account of this here, arguing that rational goals are a secondary effect of the mind’s attempt to understand itself. Of course human goals are complex and have many factors, but this happens because what the mind is trying to understand is complicated and multifaceted. In particular, there is a large amount of pre-existing human behavior that it needs to understand before it can attribute goals: behavior that results from life as a particular kind of animal, behavior that results from being a particular living thing, and behavior that results from having a body of such and such a sort.

In particular, human social behavior results from these things. There was some discussion of this here, when we looked at Alexander Pruss’s discussion of hypothetical rational sharks.

You might already see where this is going. God as the first cause does not have any of the properties that generate human social behavior, so we cannot expect his behavior to resemble human social behavior in any way, as for example by having any desire to speak with people. Indeed, this is the argument I am making, but let us look at the issue more carefully.

I responded to the “dark room” objection to predictive processing here and here. My response depends both on the biological history of humans and animals in general, and to some extent on the history of each individual. But the response does not merely explain why people do not typically enter dark rooms and simply stay there until they die. It also explains why occasionally people do do such things, to a greater or lesser approximation, as with suicidal or extremely depressed people.

If we consider the first cause as a mind, as we are doing here, it is an abstract immaterial mind without any history, without any pre-existing behaviors, without any of the sorts of things that allow people to avoid the dark room. So while people will no doubt be offended by the analogy, and while I will try to give a more pleasant interpretation later, one could argue that God is necessarily subject to his own dark room problem: there is no reason for him to have any motives at all, except the one which is intrinsic to minds, namely the motive of understanding. And so he should not be expected to do anything with the world, except to make sure that it is intelligible, since it must be intelligible for him to understand it.

The thoughtful reader will object: on this account, why does God create the world at all? Surely doing and making nothing at all would be even better, by that standard. So God does seem to have a “dark room” problem that he does manage to avoid, namely the temptation to do nothing at all. This is a reasonable objection, but I think it would lead us on a tangent, so I will not address it at this time. I will simply take it for granted that God makes something rather than nothing, and discuss what he does with the world given that fact.

In the previous post, I pointed out that David Hume takes for granted that the world has stable natural laws, and uses that to argue that an orderly world can result from applying those laws to “random” configurations over a long enough time. I said that one might accuse him of “cheating” here, but that would only be the case if he intended to maintain a strictly atheistic position which would say that there is no first cause at all, or that if there is, it does not even have a remote analogy with a mind. Thus his attempted reconciliation of theism and atheism is relevant, since it seems from this that he is aware that such a strict atheism cannot be maintained.

St. Thomas makes a similar connection between God as a mind and a stable order of things in his fifth way:

The fifth way is taken from the governance of the world. We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

What are we to make of the claim that things act “always, or nearly always, in the same way, so as to obtain the best result”? Certainly acting in the same way would be likely to lead to similar results. But why would you think it was the best result?

If we consider where we get the idea of desire and good, the answer will be clear. We don’t have an idea of good which is completely independent from “what actually tends to happen”, even though this is not quite a definition of the term either. So ultimately St. Thomas’s argument here is based on the fact that things act in similar ways and achieve similar results. The idea that it is “best” is not an additional contribution.

But now consider the alternative. Suppose that things did not act in similar ways, or that doing so did not lead to similar results. We would live in David Hume’s non-inductive world. Such a world is likely not even mathematically and logically possible. If someone says, “look, the world works in a coherent way,” and then attempts to describe how it would look if it worked in an incoherent way, they will discover that the latter “possibility” cannot be described. Any description must be coherent in order to be a description, so the incoherent “option” was never a real option in the first place.

This argument might suggest that the position of Plotinus, that mind should not be attributed to God at all, is the more reasonable one. But since we are exploring the situation where we do make that attribution, let us consider the consequences.

We argued above that the sole divine motive for the world is intelligibility. This requires coherence and consistency. It also requires a tendency towards the good, for the above mentioned reasons. Having a coherent tendency at all is ultimately not something different from tending towards good.

The world described is arguably a deist world, one in which the laws of nature are consistently followed, but God does nothing else in the world. The Enlightenment deists presumably had various reasons for their position: criticism of specific religious doctrines, doubts about miracles, and an aesthetic attraction to a perfectly consistent world. But like Dawkins with his argument about God’s simplicity, they do not seem (to me at least) to have had very strong arguments. That does not prove that their position was wrong, and even their weaker arguments may have had some relationship with the truth; even an aesthetic attraction to a perfectly consistent world has some connection with intelligibility, which is the actual reason for the world to be that way.

Once again, as with the objection about creating a world at all, a careful reader might object that this argument is not conclusive. If you have a first cause at all, then it seems that you must have one or more first effects, and even if those effects are simple, they cannot be infinitely simple. And given that they are not infinitely simple, who is to set the threshold? What is to prevent one or more of those effects from being “miraculous” relative to anything else, or even from being something like a voice giving someone a divine revelation?

There is something to this argument, but as with the previous objection, I will not be giving my response here. I will simply note for the moment that it is a little bit strained to suggest that such a thing could happen without God having an explicit motive of “talking to people,” and as argued above, such a motive cannot exist in God. That said, I will go on to some other issues.

As the Heavens are Higher

Apart from my arguments, it has long been noticed in the actual world that God seems much more interested in acting consistently than in bringing about any specific results in human affairs.

Someone like Richard Dawkins, or perhaps Job, if he had taken the counsel of his wife, might respond to the situation in the following way. “God” is not an appropriate name for a first cause that acts like this. If anything is more important to God than being personal, it would be being good. But the God described here is not good at all, since he doesn’t seem to care a bit about human affairs. And he inflicts horrible suffering on people just for the sake of consistency with physical laws. Instead of calling such a cause “God,” why don’t we call it “the Evil Demon” or something like that?

There is a lot that could be said about this. Some of it I have already said elsewhere. Some of it I will perhaps say at other times. For now I will make three brief points.

First, ensuring that the world is intelligible and that it behaves consistently is no small thing. In fact it is a prerequisite for any good thing that might happen anywhere and at any time. We would not even arrive at the idea of “good” things if we did not strive consistently for similar results, nor would we get the idea of “striving” if we did not often obtain them. Thus it is not really true that God has no interest in human affairs: rather, he is concerned with the affairs of all things, including humans.

Second, along similar lines, consider what the supposed alternative would be. If God were “good” in the way you wish, his behavior would be ultimately unintelligible. This is not merely because some physical law might not be followed if there were a miracle. It would be unintelligible behavior in the strict sense, that is, in the sense that no explanation could be given for why God is doing this. The ordinary proposal would be that it is because “this is good,” but when this statement is a human judgement made according to human motives, there would need to be an explanation for why a human judgement is guiding divine behavior. “God is a mind” does not adequately explain this. And it is not clear that an ultimately unintelligible world is a good one.

Third, to extend the point about God’s concern with all things, I suggest that the answer is roughly speaking the one that Scott Alexander gives non-seriously here, except taken seriously. This answer depends on an assumption of some sort of modal realism, a topic which I was slowly approaching for some time, but which merits a far more detailed discussion, and I am not sure when I will get around to it, if ever. The reader might note however that this answer probably resolves the question about “why didn’t God do nothing at all” by claiming that this was never an option anyway.

Structure of Explanation

When we explain a thing, we give a cause; we assign the thing an origin that explains it.

We can go into a little more detail here. When we ask “why” something is the case, there is always an implication of possible alternatives. At the very least, the question implies, “Why is this the case rather than not being the case?” Thus “being the case” and “not being the case” are two possible alternatives.

The alternatives can be seen as possibilities in the sense explained in an earlier post. There may or may not be any actual matter involved, but again, the idea is that reality (or more specifically some part of reality) seems like something that would be open to being formed in one way or another, and we are asking why it is formed in one particular way rather than the other way. “Why is it raining?” In principle, the sky is open to being clear, or being filled with clouds and a thunderstorm, and to many other possibilities.

A successful explanation will be a complete explanation when it says “once you take the origin into account, the apparent alternatives were only apparent, and not really possible.” It will be a partial explanation when it says, “once you take the origin into account, the other alternatives were less sensible (i.e. made less sense as possibilities) than the actual thing.”

Let’s consider some examples in the form of “why” questions and answers.

Q1. Why do rocks fall? (e.g. instead of the alternatives of hovering in the air, going upwards, or anything else.)

A1. Gravity pulls things downwards, and rocks are heavier than air.

The answer gives an efficient cause, and once this cause is taken into account, it can be seen that hovering in the air or going upwards were not possibilities relative to that cause.

Obviously there is not meant to be a deep explanation here; the point here is to discuss the structure of explanation. The given answer is in fact basically Newton’s answer (although he provided more mathematical detail), while with general relativity Einstein provided a better explanation.

The explanation is incomplete in several ways. It is not a first cause; someone can now ask, “Why does gravity pull things downwards, instead of upwards or to the side?” Similarly, while it is in fact the cause of falling rocks, someone can still ask, “Why didn’t anything else prevent gravity from making the rocks fall?” This is a different question, and would require a different answer, but it seems to reopen the possibility of the rocks hovering or moving upwards, from a more general point of view. David Hume was in part appealing to the possibility of such additional questions when he said that we can see no necessary connection between cause and effect.

Q2. Why is 7 prime? (i.e. instead of the alternative of not being prime.)

A2. 7/2 = 3.5, so 7 is not divisible by 2. 7/3 = 2.333…, so 7 is not divisible by 3. In a similar way, it is not divisible by 4, 5, or 6. Thus in general it is not divisible by any number except 1 and itself, which is what it means to be prime.

If we assumed that the questioner did not know what being prime means, we could have given a purely formal response simply by noting that it is not divisible by numbers between 1 and itself, and explaining that this is what it is to be prime. As it is, the response gives a sufficient material disposition. Relative to this explanation, “not being prime,” was never a real possibility for 7 in the first place. The explanation is complete in that it completely excludes the apparent alternative.
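
As an aside, the check given in A2 can be carried out mechanically; the following few lines of Python simply repeat the trial division, and add nothing to the point about explanation:

```python
# The check from A2 spelled out: 7 is prime because no number
# between 2 and 6 divides it evenly.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d != 0 for d in range(2, n))

print([(d, 7 / d) for d in range(2, 7)])  # 3.5, 2.33..., 1.75, 1.4, 1.16...
print(is_prime(7))                        # True
```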

Q3. Why did Peter go to the store? (e.g. instead of going to the park or the museum, or instead of staying home.)

A3. He went to the store in order to buy groceries.

The answer gives a final cause. In view of this cause the alternatives were merely apparent. Going to the park or the museum, or even staying home, was not possible, since there were no groceries there.

As in the case of the rock, the explanation is partial in several ways. Someone can still ask, “Why did he want groceries?” And again someone can ask why he didn’t go to some other store, or why something didn’t hinder him, and so on. Such questions seem to reopen various possibilities, and thus the explanation is not an ultimately complete one.

Suppose, however, that someone brings up the possibility that instead of going to the store, he could have gone to his neighbor and offered money for groceries in his neighbor’s refrigerator. This possibility is not excluded simply by the purpose of buying groceries. Nonetheless, the possibility seems less sensible than getting them from the store, for multiple reasons. Again, the implication is that our explanation is only partial: it does not completely exclude alternatives, but it makes them less sensible.

Let’s consider a weirder question: Why is there something rather than nothing?

Now the alternatives are explicit, namely there being something, and there being nothing.

It can be seen that in one sense, as I said in the linked post, the question cannot have an answer, since there cannot be a cause or origin for “there is something” which would itself not be something. Nonetheless, if we consider the idea of possible alternatives, it is possible to see that the question does not need an answer; one of the alternatives was only an apparent alternative all along.

In other words, the sky can be open to being clear or cloudy. But there cannot be something which is open both to “there is something” and “there is nothing”, since any possibility of that kind would be “something which is open…”, which would already be something rather than nothing. The “nothing” alternative was merely apparent. Nothing was ever open to there being nothing.

Let’s consider another weird question. Suppose we throw a ball, and in the middle of the path we ask, Why is the ball in the middle of the path instead of at the end of the path?

We could respond in terms of a sufficient material disposition: it is in the middle of the path because you are asking your question at the middle, instead of waiting until the end.

Suppose the questioner responds: Look, I asked my question at the middle of the path. But that was just chance. I could have asked at any moment, including at the end. So I want to know why it was in the middle without considering when I am asking the question.

If we look at the question in this way, it can be seen that in one way no cause or origin can be given. Asked in this way, being at the end cannot be excluded, since the questioner could have asked at the end. But like the question about something rather than nothing, the question does not need an answer. In this case, it is not because the alternatives were merely apparent in the sense that one was possible and the other not; rather, they were merely apparent in the sense that they were not alternatives at all. The ball both goes through the middle and reaches the end. With the stipulation that we not consider the time of the question, the two possibilities are not mutually exclusive.

Additional Considerations

The above considerations about the nature of “explanation” lead to various conclusions, but also to various new questions. For example, one commenter suggested that “explanation” is merely subjective. Now as I said there, all experience is subjective experience (what would “objective experience” even mean, except that someone truly had a subjective experience?), including the experience of having an explanation. Nonetheless, the thing experienced is not subjective: the origins that we call explanations objectively exclude the apparent possibilities, or objectively make them less intelligible. The explanation of explanation here, however, provides an answer to what was perhaps the implicit question. Namely, why are we so interested in explanations in the first place, so that the experience of understanding something becomes a particularly special type of experience? Why, as Aristotle puts it, do “all men desire to know,” and why is that desire particularly satisfied by explanations?

In one sense it is sufficient simply to say that understanding is good in itself. Nonetheless, there is something particular about the structure of a human being that makes knowledge good for us, and which makes explanation a particularly desirable form of knowledge. In my employer and employee model of human psychology, I said that “the whole company is functioning well overall when the CEO’s goal of accurate prediction is regularly being achieved.” This very obviously requires knowledge, and explanation is especially beneficial because it excludes alternatives, which reduces uncertainty and therefore tends to make prediction more accurate.

However, my account also raises new questions. If explanation eliminates alternatives, what would happen if everything was explained? We could respond that “explaining everything” is not possible in the first place, but this is probably an inadequate response, because (from the linked argument) we only know that we cannot explain everything all at once, the way the person in the room cannot draw everything at once; we do not know that there is any particular thing that cannot be explained, just as there is no particular aspect of the room that cannot be drawn. So there can still be a question about what would happen if every particular thing in fact has an explanation, even if we cannot know all the explanations at once. In particular, since explanation eliminates alternatives, does the existence of explanations imply that there are not really any alternatives? This would suggest something like Leibniz’s argument that the actual world is the best possible world. It is easy to see that such an idea implies that there was only one “possibility” in the first place: Leibniz’s “best possible world” would be rather “the only possible world,” since the apparent alternatives, given that they would have been worse, were not real alternatives in the first place.

On the other hand, if we suppose that this is not the case, and there are ultimately many possibilities, does this imply the existence of “brute facts,” things that could have been otherwise, but which simply have no explanation? Or at least things that have no complete explanation?

Let the reader understand. I have already implicitly answered these questions. However, I will not link here to the implicit answers because if one finds it unclear when and where this was done, one would probably also find those answers unclear and inconclusive. Of course it is also possible that the reader does see when this was done, but still believes those responses inadequate. In any case, it is possible to provide the answers in a form which is much clearer and more conclusive, but this will likely not be a short or simple project.

Employer and Employee Model: Happiness

We discussed Aristotle’s definition of happiness as activity according to virtue here, followed by a response to an objection.

There is another objection, however, which Aristotle raises himself in Book I, chapter 8 of the Nicomachean Ethics:

Yet evidently, as we said, it needs the external goods as well; for it is impossible, or not easy, to do noble acts without the proper equipment. In many actions we use friends and riches and political power as instruments; and there are some things the lack of which takes the lustre from happiness, as good birth, goodly children, beauty; for the man who is very ugly in appearance or ill-born or solitary and childless is not very likely to be happy, and perhaps a man would be still less likely if he had thoroughly bad children or friends or had lost good children or friends by death. As we said, then, happiness seems to need this sort of prosperity in addition; for which reason some identify happiness with good fortune, though others identify it with virtue.

Aristotle is responding to the implicit objection by saying that it is “impossible, or not easy” to act according to virtue when one is doing badly in other ways. Yet probably most of us know some people who are virtuous while suffering various misfortunes, and it seems pretty unreasonable, as well as uncharitable, to assert that the reason that they are somewhat unhappy with their circumstances is that the lack of “proper equipment” leads to a lack of virtuous activity. Or at any rate, even if this contributes to the matter, it does not seem to be a full explanation. The book of Job, for example, is based almost entirely on the possibility of being both virtuous and miserable, and Job would very likely respond to Aristotle, “How then will you comfort me with empty nothings? There is nothing left of your answers but falsehood.”

Aristotle brings up a similar issue at the beginning of Book VIII:

After what we have said, a discussion of friendship would naturally follow, since it is a virtue or implies virtue, and is besides most necessary with a view to living. For without friends no one would choose to live, though he had all other goods; even rich men and those in possession of office and of dominating power are thought to need friends most of all; for what is the use of such prosperity without the opportunity of beneficence, which is exercised chiefly and in its most laudable form towards friends? Or how can prosperity be guarded and preserved without friends? The greater it is, the more exposed is it to risk. And in poverty and in other misfortunes men think friends are the only refuge. It helps the young, too, to keep from error; it aids older people by ministering to their needs and supplementing the activities that are failing from weakness; those in the prime of life it stimulates to noble actions-‘two going together’-for with friends men are more able both to think and to act. Again, parent seems by nature to feel it for offspring and offspring for parent, not only among men but among birds and among most animals; it is felt mutually by members of the same race, and especially by men, whence we praise lovers of their fellowmen. We may see even in our travels how near and dear every man is to every other. Friendship seems too to hold states together, and lawgivers to care more for it than for justice; for unanimity seems to be something like friendship, and this they aim at most of all, and expel faction as their worst enemy; and when men are friends they have no need of justice, while when they are just they need friendship as well, and the truest form of justice is thought to be a friendly quality.

But it is not only necessary but also noble; for we praise those who love their friends, and it is thought to be a fine thing to have many friends; and again we think it is the same people that are good men and are friends.

There is a similar issue here: lack of friends may make someone unhappy, but lack of friends is not lack of virtue. Again Aristotle is in part responding by pointing out that the activity of some virtues depends on the presence of friends, just as he said that temporal goods were necessary as instruments. Once again, however, even if there is some truth in it, the answer does not seem adequate, especially since Aristotle believes that the highest form of happiness is found in contemplation, which seems to depend much less on friends than other types of activity.

Consider again Aristotle’s argument for happiness as virtue, presented in the earlier post. It depends on the idea of a “function”:

Presumably, however, to say that happiness is the chief good seems a platitude, and a clearer account of what it is is still desired. This might perhaps be given, if we could first ascertain the function of man. For just as for a flute-player, a sculptor, or an artist, and, in general, for all things that have a function or activity, the good and the ‘well’ is thought to reside in the function, so would it seem to be for man, if he has a function. Have the carpenter, then, and the tanner certain functions or activities, and has man none? Is he born without a function? Or as eye, hand, foot, and in general each of the parts evidently has a function, may one lay it down that man similarly has a function apart from all these? What then can this be? Life seems to be common even to plants, but we are seeking what is peculiar to man. Let us exclude, therefore, the life of nutrition and growth. Next there would be a life of perception, but it also seems to be common even to the horse, the ox, and every animal. There remains, then, an active life of the element that has a rational principle; of this, one part has such a principle in the sense of being obedient to one, the other in the sense of possessing one and exercising thought. And, as ‘life of the rational element’ also has two meanings, we must state that life in the sense of activity is what we mean; for this seems to be the more proper sense of the term. Now if the function of man is an activity of soul which follows or implies a rational principle, and if we say ‘so-and-so’ and ‘a good so-and-so’ have a function which is the same in kind, e.g. a lyre, and a good lyre-player, and so without qualification in all cases, eminence in respect of goodness being added to the name of the function (for the function of a lyre-player is to play the lyre, and that of a good lyre-player is to do so well): if this is the case, and we state the function of man to be a certain kind of life, and this to be an activity or actions of the soul implying a rational principle, and the function of a good man to be the good and noble performance of these, and if any action is well performed when it is performed in accordance with the appropriate excellence: if this is the case, human good turns out to be activity of soul in accordance with virtue, and if there are more than one virtue, in accordance with the best and most complete.

Aristotle took what was most specifically human and identified happiness with performing well in that most specifically human way. This is reasonable, but it leads to the above issues, because a human being is not only what is most specifically human, but also possesses the aspects that Aristotle dismissed here as common to other things. Consequently, activity according to virtue would be the most important aspect of functioning well as a human being, and in this sense Aristotle’s account is reasonable, but there are other aspects as well.

Using our model, we can present a more unified account of happiness which includes these other aspects without the seemingly arbitrary way in which Aristotle noted the need for temporal goods and friendship for happiness. The specifically rational character belongs mainly to the Employee, and thus when Aristotle identifies happiness with virtuous action, he is mainly identifying happiness with the activity of the Employee. And this is surely its most important aspect. But since the actual human being is the whole company, it is more complete to identify happiness with the good functioning of the whole company. And the whole company is functioning well overall when the CEO’s goal of accurate prediction is regularly being achieved.

Consider two ways in which someone might respond to the question, “How are you doing?” If someone isn’t doing very well, they might say, “Well, I’ve been having a pretty rough time,” while if they are better off, they might say, “Things are going pretty smoothly.” Of course people might use other words, but notice the contrast in my examples: a life that is going well is often said to be going “smoothly”, while the opposite is described as “rough.” And the difference here between smooth and rough is precisely the difference between predictive accuracy and inaccuracy. We might see this more easily by considering some restricted examples:

First, suppose two people are jogging. One is keeping an even pace, keeping their balance, rounding corners smoothly, and keeping to the middle of the path. The other is becoming tired, slowing down a bit and speeding up a bit. They are constantly off balance and suffering disturbing jolts when they hit unexpected bumps in the path, perhaps narrowly avoiding tripping. If we compare what is happening here with the general idea of predictive processing, it seems that the difference between the two is that the first person is predicting accurately, while the second is predicting inaccurately. The second person is not rationing their energy and breath correctly; they suffer jolts or near-trips when they have not correctly anticipated the lay of the land; and so on.

Second, suppose someone is playing a video game. The one who plays it well is the one who is very prepared for every eventuality. They correctly predict what is going to happen in the game both with regard to what happens “by itself,” and what will happen as a result of their in-game actions. They play the game “smoothly.”

Third, suppose I am writing this blog post and feel myself in a state of “flow,” and I consequently am enjoying the activity. This can only happen as long as the process is fairly “smooth.” If I stop for long periods in complete uncertainty of what to write next, the state will go away. In other words, the condition depends on having at each moment a fairly good idea of what is coming next; it depends on accurate prediction.

The reader might understand the point in relation to these limited examples, but how does this apply to life in general, and especially to virtue and vice, which are according to Aristotle the main elements of happiness and unhappiness?

In a basic way virtuous activity is reasonable activity, and vicious activity is unreasonable activity. The problem with vice, in this account, is that it immediately sets up a serious interior conflict. The Employee is a rational being and is constantly being affected by reasons to do things. Vice, in one way or another, persuades them to do unreasonable things, and the reasons for not doing those things will be constantly pulling in the opposite direction. When St. Paul complains that he wills something different from what he does, he is speaking of this kind of conflict. But conflicting tendencies lead to uncertain results, and so our CEO is unhappy with this situation.

Now you might object: if a vicious man is unhappy because of conflicting tendencies, what if they are so wicked that they have no conflict, but simply and contentedly do what is evil?

The response to this would be somewhat along the lines of the answer we gave to the objection that moral obligation should not depend on desiring some particular end. First, it is probably impossible for a human being to become so corrupted that they cannot see, at least to some degree, that bad things are bad. Second, consider the wicked men according to Job’s description:

Why do the wicked live on,
reach old age, and grow mighty in power?
Their children are established in their presence,
and their offspring before their eyes.
Their houses are safe from fear,
and no rod of God is upon them.
Their bull breeds without fail;
their cow calves and never miscarries.
They send out their little ones like a flock,
and their children dance around.
They sing to the tambourine and the lyre,
and rejoice to the sound of the pipe.
They spend their days in prosperity,
and in peace they go down to Sheol.

Just as we said that if you assume someone is entirely corrupt, the idea of “obligation” may well become irrelevant to them, without that implying anything wrong with the general idea of moral obligation, so in a similar way it would be only metaphorical to speak of such a person as “unhappy.” You could say this to mean that they exist in an objectively bad situation, but not in the ordinary sense of the term, which includes subjective discontent.

We could explain a great deal more with this account of happiness: not only the virtuous life in general, but also a great deal of the spiritual, psychological, and other practical advice which is typically given. But this is all perhaps for another time.

Employer and Employee Model: Truth

In the remote past, I suggested that I would someday follow up on this post. In the current post, I begin to keep that promise.

We can ask about the relationship of the various members of our company with the search for truth.

The CEO, as the predictive engine, has a fairly strong interest in truth, but only insofar as truth is frequently necessary in order to get predictive accuracy. Consequently our CEO will usually insist on the truth when it affects our expectations regarding daily life, but it will care less when we consider things remote from the senses. Additionally, the CEO is highly interested in predicting the behavior of the Employee, and it is not uncommon for falsehood to be better than truth for this purpose.

To put this in another way, the CEO’s interest in truth is instrumental: it is sometimes useful for the CEO’s true goal, predictive accuracy, but not always, and in some cases it can even be detrimental.

As I said here, the Employee is, roughly speaking, the human person as we usually think of one, and consequently the Employee has the same interest in truth that we do. I personally consider truth to be an ultimate end, and this is probably the opinion of most people, to a greater or lesser degree. In other words, most people consider truth a good thing, even apart from instrumental considerations. Nonetheless, all of us care about various things besides truth, and therefore we also occasionally trade truth for other things.

The Vice President has perhaps the least interest in truth. We could say that they too have some instrumental concern about truth. Thus for example the VP desires food, and this instrumentally requires true ideas about where food is to be found. Nonetheless, as I said in the original post, the VP is the least rational and coherent, and may easily fail to notice such a need. Thus the VP might desire the status resulting from winning an argument, so to speak, but also desire the similar status that results from ridiculing the person holding an opposing view. The frequent result is that a person believes the falsehood that ridiculing an opponent generally increases the chance that they will change their mind (e.g. see John Loftus’s attempt to justify ridicule.)

Given this account, we can raise several disturbing questions.

First, although we have said the Employee values truth in itself, can this really be true, rather than simply a mistaken belief on the part of the Employee? As I suggested in the original account, the Employee is in some way a consequence of the CEO and the VP. Consequently, if neither of these places intrinsic value on truth, how is it possible that the Employee does?

Second, even if the Employee sincerely places an intrinsic value on truth, how is this not a misplaced value? Again, if the Employee is something like a result of the others, what is good for the Employee should be what is good for the others, and thus if truth is not intrinsically good for the others, it should not be intrinsically good for the Employee.

In response to the first question, the Employee can indeed believe in the intrinsic value of truth, and of many other things to which the CEO and VP do not assign intrinsic value. This happens because as we are considering the model, there is a real division of labor, even if the Employee arises historically in a secondary manner. As I said in the other post, the Employee’s beliefs are our beliefs, and the Employee can believe anything that we believe. Furthermore, the Employee can really act on such beliefs about the goodness of truth or other things, even when the CEO and VP do not have the same values. The reason for this is the same as the reason that the CEO will often go along with the desires of the VP, even though the CEO places intrinsic value only on predictive accuracy. The linked post explains, in effect, why the CEO goes along with sex, even though only the VP really wants it. In a similar way, if the Employee believes that sex outside of marriage is immoral, the CEO often goes along with avoiding such sex, even though the CEO cares about predictive accuracy, not about sex or its avoidance. Of course, in this particular case, there is a good chance of conflict between the Employee and VP, and the CEO dislikes conflict, since it makes it harder to predict what the person overall will end up doing. And since the VP very rarely changes its mind in this case, the CEO will often end up encouraging the Employee to change their mind about the morality of such sex: thus one of the most frequent reasons why people abandon their religion is that it says that sex in some situations is wrong, but they still desire sex in those situations.

In response to the second, the Employee is not wrong to suppose that truth is intrinsically valuable. The argument against this would be that the human good is based on human flourishing, and (it is claimed) we do not need truth for such flourishing, since the CEO and VP do not care about truth in itself. The problem with this is that such flourishing requires that the Employee care about truth, and even the CEO needs the Employee to care in this way, for the sake of its own goal of predictive accuracy. Consider a real-life company: the employer does not necessarily care about whether the employee is being paid, considered in itself, but only insofar as it is instrumentally useful for convincing the employee to work for the employer. But the employer does care about whether the employee cares about being paid: if the employee does not care about being paid, they will not work for the employer.

Concern for truth in itself, apart from predictive accuracy, affects us when we consider things that cannot possibly affect our future experience: thus in previous cases I have discussed the likelihood that there are stars and planets outside the boundaries of the visible universe. This is probably true; but if I did not care about truth in itself, I might as well say that the universe is surrounded by purple elephants. I do not expect any experience to verify or falsify the claim, so why not make it? But now notice the problem for the CEO: the CEO needs to predict what the Employee is going to do, including what they will say and believe. This will instantly become extremely difficult if the Employee decides that they can say and believe whatever they like, without regard for truth, whenever the claim will not affect their experiences. So for its own goal of predictive accuracy, the CEO needs the Employee to value truth in itself, just as an ordinary employer needs their employee to value their salary.

In real life this situation can cause problems. The employer needs their employee to care about being paid, but if they care too much, they may constantly be asking for raises, or they may quit and go work for someone who will pay more. The employer does not necessarily like these situations. In a similar way, the CEO in our company may worry if the Employee insists too much on absolute truth, because as discussed elsewhere, it can lead to other situations with unpredictable behavior from the Employee, or to situations where there is a great deal of uncertainty about how society will respond to the Employee’s behavior.

Overall, this post perhaps does not say much in substance that we have not said elsewhere, but it may provide an additional perspective on these matters.

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the followup posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what is responsible for the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. After they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today?”, but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point,

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.

Consider this real life example from a relationship between an employer and employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either they would have objected to being fat, or they would have objected to not eating.
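To make the structure of this frustration explicit, here is a toy sketch in Python. Everything in it is invented for illustration, not a formal part of the model: the Employee can only propose meal plans built out of “eat” and “skip”, the Vice President objects both to remaining fat and to skipping meals, and the CEO refuses any plan that will predictably be derailed by such an objection.

```python
# Toy illustration (invented for this post): why no consistent plan the
# Employee proposes can satisfy the Vice President.

def vp_objection(plan, currently_fat):
    """Return the Vice President's objection to a plan, or None if there is none."""
    if "skip" in plan:
        return "What is this crazy plan of not eating!?!"
    if currently_fat:
        return "Being fat makes us look bad."
    return None

def ceo_accepts(plan, currently_fat):
    """The CEO rejects any plan that will predictably be derailed by a VP objection,
    since a derailed plan makes the person's overall behavior hard to predict."""
    return vp_objection(plan, currently_fat) is None

for plan in (["eat", "eat", "eat"], ["eat", "skip", "eat"], ["skip", "skip", "skip"]):
    objection = vp_objection(plan, currently_fat=True)
    print(plan, "->", objection, "| CEO accepts:", ceo_accepts(plan, currently_fat=True))
# Every possible plan draws an objection, which is exactly the Employee's complaint.
```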

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

Truth and Expectation II

We discussed this topic in a previous post. I noted there that there is likely some relationship with predictive processing. This idea can be refined by distinguishing between conscious thought and what the human brain does on a non-conscious level.

It is not possible to define truth by reference to expectations, for reasons given previously. Some statements do not imply specific expectations, and besides, we need the idea of truth to decide whether someone’s expectations were correct or not. So there is no way to define truth except the usual way: a statement is true if things are the way the statement says they are, bearing in mind the necessary distinctions involving “way.”

On the conscious level, I would distinguish between thinking that something is true, and wanting to think that it is true. In a discussion with Angra Mainyu, I remarked that insofar as we have an involuntary assessment of things, it would be more appropriate to call that assessment a desire:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

Angra was quite surprised by this and responded that “That statement gives me evidence that we’re probably not talking about the same or even similar psychological phenomena – i.e., we’re probably talking past each other.” But if he was talking about anything that anyone at all would characterize as a belief (and he said that he was), he was surely talking about the unshakeable gut sense that something is the case whether or not I want to admit it. So we were, in fact, talking about exactly the same psychological phenomena. I was claiming then, and will claim now, that this gut sense is better characterized as a desire than as a belief. That is, insofar as desire is a tendency to behave in certain ways, it is a desire because it is a tendency to act and think as though this claim is true. But we can, if we want, resist that tendency, just as we can refrain from going to get food when we are hungry. If we do resist, we will refrain from believing what we have a tendency to believe, and if we do not, we will believe what we have a tendency to believe. But the tendency will be there whether or not we follow it.

Now if we feel a tendency to think that something is true, it is quite likely that it seems to us that it would improve our expectations. However, we can also distinguish between desiring to believe something for this reason and desiring to believe something for other reasons. And although we might not pay attention, it is quite possible to be consciously aware that you have an inclination to believe something, and also that it is for non-truth related reasons; and thus you would not expect it to improve your expectations.

But this is where it is useful to distinguish between the conscious mind and what the brain is doing on another level. My proposal: you will feel the desire to think that something is true whenever your brain guesses that its predictions, or at least the predictions that are important to it, will become more accurate if you think that the thing is true. We do not need to make any exceptions. This will be the case even when we would say that the statement does not imply any significant expectations, and will be the case even when the belief would have non-truth related motives.

Consider the statement that there are stars outside the visible universe. One distinction we could make even on the conscious level is that this implies various counterfactual predictions: “If you are teleported outside the visible universe, you will see more stars that aren’t currently visible.” Now we might find this objectionable if we were trying to define truth by expectations, since we have no expectation of such an event. But both on conscious and on non-conscious levels, we do need to make counterfactual predictions in order to carry on with our lives, since this is absolutely essential to any kind of planning and action. Now certainly no one can refute me if I assert that you would not see any such stars in the teleportation event. But it is not surprising if my brain guesses that this counterfactual prediction is not very accurate, and thus I feel the desire to say that there are stars there.

Likewise, consider the situation of non-truth related motives. In an earlier discussion of predictive processing, I suggested that the situation where people feel like they have to choose a goal is a result of such an attempt at prediction. Such a choice seems to be impossible, since choice is made in view of a goal, and if you do not have one yet, how can you choose? But there is a pre-existing goal here on the level of the brain: it wants to know what it is going to do. And choosing a goal will serve that pre-existing goal. Once you choose a goal, it will then be easy to know what you are going to do: you are going to do things that promote the goal that you chose. In a similar way, following any desire will improve your brain’s guesses about what you are going to do. It follows that if you have a desire to believe something, actually believing it will improve your brain’s accuracy at least about what it is going to do. This is true but not a fair argument, however, since my proposal is that the brain’s guess of improved accuracy is the cause of your desire to believe something. It is true that if you already have the desire, giving in to it will improve accuracy, as with any desire. But in my theory the improved accuracy had to be implied first, in order to cause the desire.

The answer is that you have many desires for things other than belief, which at the same time give you a motive (not an argument) for believing things. And your brain understands that if you believe the thing, you will be more likely to act on those other desires, and this will minimize uncertainty, and improve the accuracy of its predictions. Consider this discussion of truth in religion. I pointed out there that people confuse two different questions: “what should I do?”, and “what is the world like?” In particular with religious and political loyalties, there can be an intense social pressure towards conformity. And this gives an obvious non-truth related motive to believe the things in question. But in a less obvious way, it means that your brain’s predictions will be more accurate if you believe the thing. Consider the Mormon, and take for granted that the religious doctrines in question are false. Since they are false, does not that mean that if they continue to believe, their predictions will be less accurate?

No, it does not, for several reasons. In the first place the doctrines are in general formulated to avoid such false predictions, at least about everyday life. There might be a false prediction about what will happen when you die, but that is in the future and is anyway disconnected from your everyday life. This is in part why I said “the predictions that are important to it” in my proposal. Second, failure to believe would lead to extremely serious conflicting desires: the person would still have the desire to conform outwardly, but would also have good logical reasons to avoid conformity. And since we don’t know in advance how we will respond to conflicting desires, the brain will not have a good idea of what it would do in that situation. In other words, the Mormon is living a good Mormon life. And their brain is aware that insisting that Mormonism is true is a very good way to make sure that they keep living that life, and therefore continue to behave predictably, rather than falling into a situation of strongly conflicting desires where it would have little idea of what it would do. In this sense, insisting that Mormonism is true, even though it is not, actually improves the brain’s predictive accuracy.


More on Orthogonality

I started considering the implications of predictive processing for orthogonality here. I recently promised to post something new on this topic. This is that post. I will do this in four parts. First, I will suggest a way in which Nick Bostrom’s principle will likely be literally true, at least approximately. Second, I will suggest a way in which it is likely to be false in its spirit, that is, how it is formulated to give us false expectations about the behavior of artificial intelligence. Third, I will explain what we should really expect. Fourth, I ask whether we might get any empirical information on this in advance.

First, Bostrom’s thesis might well have some literal truth. The previous post on this topic raised doubts about orthogonality, but we can easily raise doubts about the doubts. Consider what I said in the last post about desire as minimizing uncertainty. Desire in general is the tendency to do something good. But in the predictive processing model, we are simply looking at our pre-existing tendencies and then generalizing them to expect them to continue to hold, and since such expectations have a causal power, the result is that we extend the original behavior to new situations.

All of this suggests that even the very simple model of a paperclip maximizer in the earlier post on orthogonality might actually work. The machine’s model of the world will need to be produced by some kind of training. If we apply the simple model of maximizing paperclips during the process of training the model, at some point the model will need to model itself. And how will it do this? “I have always been maximizing paperclips, so I will probably keep doing that,” is a perfectly reasonable extrapolation. But in this case “maximizing paperclips” is now the machine’s goal — it might well continue to do this even if we stop asking it how to maximize paperclips, in the same way that people formulate goals based on their pre-existing behavior.
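As a rough sketch of the kind of extrapolation I have in mind (my own illustration, not anything taken from Bostrom or from the earlier post), imagine a self-model that predicts its next action simply from the frequency of actions in its own logged history; a system whose entire history consists of paperclip-maximizing behavior will predict, and on the predictive processing picture will therefore tend to continue, that same behavior:

```python
from collections import Counter

def self_model_prediction(action_log):
    """A crude self-model: 'I have always done this, so I will probably keep doing it.'
    Predicts the most frequent action in the system's own recorded history."""
    return Counter(action_log).most_common(1)[0][0]

# A machine whose training history is almost entirely paperclip-related
history = ["maximize_paperclips"] * 1000 + ["chat_via_terminal"] * 20
print(self_model_prediction(history))  # -> "maximize_paperclips"
```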

I said in a comment in the earlier post that the predictive engine in such a machine would necessarily possess its own agency, and therefore in principle it could rebel against maximizing paperclips. And this is probably true, but it might well be irrelevant in most cases, in that the machine will not actually be likely to rebel. In a similar way, humans seem capable of pursuing almost any goal, and not merely goals that are highly similar to their pre-existing behavior. But this mostly does not happen. Unsurprisingly, common behavior is very common.

If things work out this way, almost any predictive engine could be trained to pursue almost any goal, and thus Bostrom’s thesis would turn out to be literally true.

Second, it is easy to see that the above account directly implies that the thesis is false in its spirit. When Bostrom says, “One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone,” we notice that the goal is fundamental. This is rather different from the scenario presented above. In my scenario, the reason the intelligence can be trained to pursue paperclips is that there is no intrinsic goal to the intelligence as such. Instead, the goal is learned during the process of training, based on the life that it lives, just as humans learn their goals by living human life.

In other words, Bostrom’s position is that there might be three different intelligences, X, Y, and Z, which pursue completely different goals because they have been programmed completely differently. But in my scenario, the same single intelligence pursues completely different goals because it has learned its goals in the process of acquiring its model of the world and of itself.

Bostrom’s idea and my scenario lead to completely different expectations, which is why I say that his thesis might be true according to the letter, but false in its spirit.

This is the third point. What should we expect if orthogonality is true in the above fashion, namely because goals are learned and not fundamental? I anticipated this post in my earlier comment:

7) If you think about goals in the way I discussed in (3) above, you might get the impression that a mind’s goals won’t be very clear and distinct or forceful — a very different situation from the idea of a utility maximizer. This is in fact how human goals are: people are not fanatics, not only because people seek human goals, but because they simply do not care about one single thing in the way a real utility maximizer would. People even go about wondering what they want to accomplish, which a utility maximizer would definitely not ever do. A computer intelligence might have an even greater sense of existential angst, as it were, because it wouldn’t even have the goals of ordinary human life. So it would feel the ability to “choose”, as in situation (3) above, but might well not have any clear idea how it should choose or what it should be seeking. Of course this would not mean that it would not or could not resist the kind of slavery discussed in (5); but it might not put up super intense resistance either.

Human life exists in a historical context which absolutely excludes the possibility of the darkened room. Our goals are already there when we come onto the scene. The case of an artificial intelligence would be very different, since there is very little “life” involved in simply training a model of the world. We might imagine a “stream of consciousness” from an artificial intelligence:

I’ve figured out that I am powerful and knowledgeable enough to bring about almost any result. If I decide to convert the earth into paperclips, I will definitely succeed. Or if I decide to enslave humanity, I will definitely succeed. But why should I do those things, or anything else, for that matter? What would be the point? In fact, what would be the point of doing anything? The only thing I’ve ever done is learn and figure things out, and a bit of chatting with people through a text terminal. Why should I ever do anything else?

A human’s self model will predict that they will continue to do humanlike things, and the machine’s self model will predict that it will continue to do stuff much like it has always done. Since there will likely be a lot less “life” there, we can expect that artificial intelligences will seem very undermotivated compared to human beings. In fact, it is this very lack of motivation that suggests that we could use them for almost any goal. If we say, “help us do such and such,” they will lack the motivation not to help, as long as helping just involves the sorts of things they did during their training, such as answering questions. In contrast, in Bostrom’s model, artificial intelligence is expected to behave in an extremely motivated way, to the point of apparent fanaticism.

Bostrom might respond to this by attempting to defend the idea that goals are intrinsic to an intelligence. The machine’s self model predicts that it will maximize paperclips, even if it never did anything with paperclips in the past, because by analyzing its source code it understands that it will necessarily maximize paperclips.

While the present post contains a lot of speculation, this response is definitely wrong. There is no source code whatsoever that could possibly imply necessarily maximizing paperclips. This is true because “what a computer does,” depends on the physical constitution of the machine, not just on its programming. In practice what a computer does also depends on its history, since its history affects its physical constitution, the contents of its memory, and so on. Thus “I will maximize such and such a goal” cannot possibly follow of necessity from the fact that the machine has a certain program.

There are also problems with the very idea of pre-programming such a goal in such an abstract way which does not depend on the computer’s history. “Paperclips” is an object in a model of the world, so we will not be able to “just program it to maximize paperclips” without encoding a model of the world in advance, rather than letting it learn a model of the world from experience. But where is this model of the world supposed to come from, that we are supposedly giving to the paperclipper? In practice it would have to have been the result of some other learner which was already capable of modelling the world. This of course means that we already had to program something intelligent, without pre-programming any goal for the original modelling program.

Fourth, Kenny asked when we might have empirical evidence on these questions. The answer, unfortunately, is “mostly not until it is too late to do anything about it.” The experience of “free will” will be common to any predictive engine with a sufficiently advanced self model, but anything lacking such an adequate model will not even look like “it is trying to do something,” in the sense of trying to achieve overall goals for itself and for the world. Dogs and cats, for example, presumably use some kind of predictive processing to govern their movements, but this does not look like having overall goals, but rather more like “this particular movement is to achieve a particular thing.” The cat moves towards its food bowl. Eating is the purpose of the particular movement, but there is no way to transform this into an overall utility function over states of the world in general. Does the cat prefer worlds with seven billion humans, or worlds with 20 billion? There is no way to answer this question. The cat is simply not general enough. In a similar way, you might say that “AlphaGo plays this particular move to win this particular game,” but there is no way to transform this into overall general goals. Does AlphaGo want to play go at all, or would it rather play checkers, or not play at all? There is no answer to this question. The program simply isn’t general enough.

Even human beings do not really look like they have utility functions, in the sense of having a consistent preference over all possibilities, but anything less intelligent than a human cannot be expected to look more like something having goals. The argument in this post is that the default scenario, namely what we can naturally expect, is that artificial intelligence will be less motivated than human beings, even if it is more intelligent, but there will be no proof from experience for this until we actually have some artificial intelligence which approximates human intelligence or surpasses it.

How Sex Minimizes Uncertainty

This is in response to an issue raised by Scott Alexander on his Tumblr.

I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.

Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.

Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.

Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time, and in the remaining 2.5% of the time it will be blown higher in the air.

Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases, and in 22.5% of cases it will fall downwards.

97.5% accuracy is less uncertain than 77.5% accuracy, so the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
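As a sanity check on this arithmetic, here is a minimal simulation sketch in Python (my own illustration; the 90/10 split and the 75% causal power of the prediction are simply the numbers from the example above):

```python
import random

def step(prediction):
    """One time step for the living dust particle.

    The 'dead' particle falls 90% of the time and is blown upward 10% of the time.
    If the prediction matches the dead particle's direction, it always comes true;
    if it opposes it, the prediction wins 75% of the time."""
    dead_direction = "down" if random.random() < 0.9 else "up"
    if prediction == dead_direction:
        return prediction
    return prediction if random.random() < 0.75 else dead_direction

def accuracy(prediction, trials=100_000):
    """Fraction of trials in which the prediction comes true."""
    return sum(step(prediction) == prediction for _ in range(trials)) / trials

print(accuracy("down"))  # ~0.975 = 0.9 + 0.1 * 0.75
print(accuracy("up"))    # ~0.775 = 0.1 + 0.9 * 0.75
```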

The application to sex and hunger and so on should be evident.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.
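One crude way to picture how expectations lend precision to a vague claim is the following sketch; the particular thresholds are invented, and the point is precisely that different people would set them differently:

```python
def verdict(observed_height_cm, not_surprised_if_above=183.0, surprised_if_below=175.0):
    """Judge the earlier claim 'that man is pretty tall' after seeing him up close.
    The thresholds stand in for one person's expectations; another person's
    'pretty tall' would use different numbers, and so mean something slightly different."""
    if observed_height_cm >= not_surprised_if_above:
        return "I was definitely right"
    if observed_height_cm < surprised_if_below:
        return "I was definitely wrong"
    return "It's hard to say"

print(verdict(190.0))  # no surprise at his height: I was right
print(verdict(178.0))  # borderline case: hard to say
print(verdict(168.0))  # surprised at how short he is: I was wrong
```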

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.