Skeptical Scenarios

I promised to return to some of the issues discussed here. The current post addresses the implications of the sort of skeptical scenario considered by Alexander Pruss in the associated discussion. Consider his original comparison of physical theories and skeptical scenarios:

The ordinary sentence “There are four chairs in my office” is true (in its ordinary context). Furthermore, its being true tells us very little about fundamental ontology. Fundamental physical reality could be made out of a single field, a handful of fields, particles in three-dimensional space, particles in ten-dimensional space, a single vector in a Hilbert space, etc., and yet the sentence could be true.

An interesting consequence: Even if in fact physical reality is made out of particles in three-dimensional space, we should not analyze the sentence to mean that there are four disjoint pluralities of particles each arranged chairwise in my office. For if that were what the sentence meant, it would tell us about which of the fundamental physical ontologies is correct. Rather, the sentence is true because of a certain arrangement of particles (or fields or whatever).

If there is such a broad range of fundamental ontologies that “There are four chairs in my office” is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that “There are four chairs in this Minecraft house” is true. It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation.

If we consider this in light of our analysis of form, it is not difficult to see that Pruss is correct both about the ordinary chair sentence being consistent with a large variety of physical theories, and about the implication that it is consistent with most situations that would normally be considered “skeptical.” The reason is that to say that something is a chair is to say something about its relationships with the world, but it is not to say everything about its relationships. It speaks in particular about various relationships with the human world. And there is nothing to prevent these relationships from co-existing with any number of other kinds of relationships between its parts, its causes, and so on.

Pruss is right to insist that in order for the ordinary sentence to be true, the corresponding forms must be present. But since he is an anti-reductionist, his position implies hidden essences, and this is a mistake. Indeed, under the correct understanding of form, our everyday knowledge of things is sufficient to ensure that the forms are present: regardless of which physical theories turn out to be true, and even if some such skeptical scenario turns out to be true.

Why are these situations called “skeptical” in the first place? This is presumably because they seem to call into question whether or not we possess any knowledge of things. And in this respect, they fail in two ways, they partially fail in a third, and they succeed in one way.

First, they fail insofar as they attempt to call into question, e.g. whether there are chairs in my room right now, or whether I have two hands. These things are true and would be true even in the “skeptical” situations.

Second, they fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat. In the straightforward sense, I do know this, because the claim is opposed to the other things (e.g. about the chairs and my hands) that I know to be true.

Third, they partially fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat in a metaphysical sense. Roughly speaking, I do know that I am not, not by deducing the fact with any kind of necessity, but simply because the metaphysical claim is completely ungrounded. In other words, I do not know this infallibly, but it is extremely likely. We could compare this with predictions about the future. Thus for example Ron Conte attempts to predict the future:

First, an overview of the tribulation:
A. The first part of the tribulation occurs for this generation, beginning within the next few years, and ending in 2040 A.D.
B. Then there will be a brief period of peace and holiness on earth, lasting about 25 years.
C. The next few hundred years will see a gradual but unstoppable increase in sinfulness and suffering in the world. The Church will remain holy, and Her teaching will remain pure. But many of Her members will fall into sin, due to the influence of the sinful world.
D. The second part of the tribulation occurs in the early 25th century (about 2430 to 2437). The Antichrist reigns for less than 7 years during this time.
E. Jesus Christ returns to earth, ending the tribulation.

Now, some predictions for the near future. These are not listed in chronological order.

* The Warning, Consolation, and Miracle — predicted at Garabandal and Medjugorje — will occur prior to the start of the tribulation, sometime within the next several years (2018 to 2023).
* The Church will experience a severe schism. First, a conservative schism will occur, under Pope Francis; next, a liberal schism will occur, under his conservative successor.
* The conservative schism will be triggered by certain events: Amoris Laetitia (as we already know, so, not a prediction), and the approval of women deacons, and controversial teachings on salvation theology.
* After a short time, Pope Francis will resign from office.
* His very conservative successor will reign for a few years, and then die a martyr, during World War 3.
* The successor to Pope Francis will take the papal name Pius XIII.

Even ignoring the religious speculation, we can “know” that this account is false, simply because it is inordinately detailed. Ron Conte no doubt has reasons for his beliefs, much as the Jehovah’s Witnesses did. But just as we saw in that case, his reasons will also in all likelihood turn out to be completely disproportionate to the detail of the claims they seek to establish.

In a similar way, a skeptical scenario can be seen as painting a detailed picture of a larger context of our world, one outside our current knowledge. There is nothing impossible about such a larger context; in fact, there surely is one. But the claim about brains and vats is very detailed: if one takes it seriously, it is more detailed than Ron Conte’s predictions, which could also be taken as a statement about a larger temporal context to our situation. The brain-in-vat scenario implies that our entire world depends on another world which has things similar to brains and similar to vats, along presumably with things analogous to human beings that made the vats, and so on. And since the whole point of the scenario is that it is utterly invented, not that it is accepted by anyone, while Conte’s account is accepted at least by him, there is not even a supposed basis for thinking that things are actually this way. Thus we can say, not infallibly but with a great deal of certainty, that we are not brains in vats, just as we can say, not infallibly but with a great deal of certainty, that there will not be any “Antichrist” between 2430 and 2437.

There is nonetheless one way in which the consideration of skeptical scenarios does succeed in calling our knowledge into question. Consider them insofar as they propose a larger context to our world, as discussed above. As I said, there is nothing impossible about a larger context, and there surely is one. Here we speak of a larger metaphysical context, but we can compare this with the idea of a larger physical context.

Our knowledge of our physical context is essentially local, given the concrete ways that we come to know the world. I know a lot about the room I am in, a significant amount about the places I usually visit or have visited in the past, and some but much less about places I haven’t visited. And speaking of an even larger physical context, I know things about the solar system, but much less about the wider physical universe. And if we consider what lies outside the visible universe, I might well guess that there are more stars and galaxies and so on, but nothing more. There is not much more detail even to this as a guess: and if there is an even larger physical context, it is possible that there are places that do not have stars and galaxies at all, but other things. In other words, universal knowledge is universal, but also vague, while specific knowledge is more specific, but also more localized: it is precisely because it is local that it was possible to acquire more specific knowledge.

In a similar way, more specific metaphysical knowledge is necessarily of a more local metaphysical character: both physical and metaphysical knowledge is acquired by us through the relationships things have with us, and in both cases “with us” implies locality. We can know that the brain-in-vat scenario is mistaken, but that should not give us hope that we can find out what is true instead: even if we did find some specific larger metaphysical context to our situation, there would be still larger contexts of which we would remain unaware. Just as you will never know the things that are too distant from you physically, you will also never know the things that are too distant from you metaphysically.

I previously advocated patience as a way to avoid excessively detailed claims. There is nothing wrong with this, but here we see that it is not enough: we also need to accept our actual situation. Rebellion against our situation, in the form of painting a detailed picture of a larger context of which we can have no significant knowledge, will profit us nothing: it will just be painting a picture as false as the brain-in-vat scenario, and as false as Ron Conte’s predictions.

Self-Reference Paradox Summarized

Hilary Lawson is right to connect the issue of the completeness and consistency of truth with paradoxes of self-reference.

As a kind of summary, consider this story:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:
etc.

In this form, the story obviously exists, but in its implied form, the story cannot be told, because for the story to be “told” is for it to be completed, and it is impossible for it to be completed, since it will not be complete until it contains itself, and this cannot happen.
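
A minimal sketch may make the structure clear. Assuming Python purely for illustration: a function that tries to tell the implied story must first tell the story it contains, so the telling never completes.

    # The "complete" story is the frame followed by the complete story itself,
    # so the telling can never finish: the recursion has no base case.
    FRAME = (
        "It was a dark and stormy night, "
        "and all the Cub Scouts were huddled around their campfire. "
        "One scout looked up to the Scout Master and said: 'Tell us a story.' "
        "And the story went like this: "
    )

    def tell_story():
        return FRAME + tell_story()  # must contain itself before it can be complete

    try:
        tell_story()
    except RecursionError:
        print("The story cannot be told in full: it would have to contain itself.")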

Consider a similar example. You sit in a room at a desk, and decide to draw a picture of the room. You draw the walls. Then you draw yourself and your desk. But then you realize, “there is also a picture in the room. I need to draw the picture.” You draw the picture itself as a tiny image within the image of your desktop, and add tiny details: the walls of the room, your desk and yourself.

Of course, you then realize that your artwork can never be complete, in exactly the same way that the story above cannot be complete.

There is essentially the same problem in these situations as in all the situations we have described which involve self-reference: the paradox of the liar, the liar game, the impossibility of detailed future prediction, the list of all true statements, Gödel’s theorem, and so on.

In two of the above posts, namely on future prediction and Gödel’s theorem, there are discussions of James Chastek’s attempts to use the issue of self-reference to prove that the human mind is not a “mechanism.” I noted in those places that such supposed proofs fail, and at this point it is easy to see that they will fail in general, if they depend on such reasoning. What is possible or impossible here has nothing to do with such things, and everything to do with self-reference. You cannot have a mirror and a camera so perfect that you can get an actually infinite series of images by taking a picture of the mirror with the camera, but there is nothing about such a situation that could not be captured by an image outside the situation, just as a man outside the room could draw everything in the room, including the picture and its details. This does not show that a man outside the room has a superior drawing ability compared with the man in the room. The ability of someone else to say whether the third statement in the liar game is true or false does not prove that the other person does not have a “merely human” mind (analogous to a mere mechanism), despite the fact that you yourself cannot say whether it is true or false.

There is a grain of truth in Chastek’s argument, however. It does follow that if someone says that reality as a whole is a formal system, and adds that we can know what that system is, their position would be absurd, since if we knew such a system we could indeed derive a specific arithmetical truth, namely one that we could state in detail, which would be unprovable from the system, namely from reality, but nonetheless proved to be true by us. And this is logically impossible, since we are a part of reality.

At this point one might be tempted to say, “At this point we have fully understood the situation. So all of these paradoxes and so on don’t prevent us from understanding reality perfectly, even if that was the original appearance.”

But this is similar to one of two things.

First, a man can stand outside the room and draw a picture of everything in it, including the picture, and say, “Behold. A picture of the room and everything in it.” Yes, as long as you are not in the room. But if the room is all of reality, you cannot get outside it, and so you cannot draw such a picture.

Second, the man in the room can draw the room, the desk and himself, and draw a smudge on the center of the picture of the desk, and say, “Behold. A smudged drawing of the room and everything in it, including the drawing.” But one only imagines a picture of the drawing underneath the smudge: there is actually no such drawing in the picture of the room, nor can there be.

In the same way, we can fully understand some local situation, from outside that situation, or we can have a smudged understanding of the whole situation, but there cannot be any detailed understanding of the whole situation underneath the smudge.

I noted that I disagreed with Lawson’s attempt to resolve the question of truth. I did not go into detail, and I will not, as the book is very long and an adequate discussion would be much longer than I am willing to attempt, at least at this time, but I will give some general remarks. He sees, correctly, that there are problems both with saying that “truth exists” and that “truth does not exist,” taken according to the usual concept of truth, but in the end his position amounts to saying that the denial of truth is truer than the affirmation of truth. This seems absurd, and it is, but not quite so much as it appears, because he does recognize the incoherence and makes an attempt to get around it. His way of thinking is something like this: we need to avoid the concept of truth. But this means we also need to avoid the concept of asserting something, because if you assert something, you are saying that it is true. So he needs to say, “assertion does not exist,” but without asserting it. Consequently he comes up with the concept of “closure,” which is meant to replace the concept of asserting, and “asserts” things in the new sense. This sense is not intended to assert anything at all in the usual sense. In fact, he concludes that language does not refer to the world at all.

Apart from the evident absurdity, exacerbated by my own realist description of his position, we can see from the general account of self-reference why this is the wrong answer. The man in the room might start out wanting to draw a picture of the room and everything in it, and then come to realize that this project is impossible, at least for someone in his situation. But suppose he concludes: “After all, there is no such thing as a picture. I thought pictures were possible, but they are not. There are just marks on paper.” The conclusion is obviously wrong. The fact that pictures are things themselves does prevent pictures from being exhaustive pictures of themselves, but it does not prevent them from being pictures in general. And in the same way, the fact that we are part of reality prevents us from having an exhaustive understanding of reality, but it does not prevent us from understanding in general.

There is one last temptation in addition to the two ways discussed above of saying that there can be an exhaustive drawing of the room and the picture. The room itself and everything in it is itself an exhaustive representation of itself and everything in it, someone might say. Apart from being an abuse of the word “representation,” I think this is delusional, but this is a story for another time.

Truth in Ordinary Language

After the incident with the tall man, I make plans to meet my companion the following day. “Let us meet at sunrise tomorrow,” I say. They ask in response, “How will I know when the sun has risen?”

When is it true to say that the sun will rise, or that the sun has risen? And what would it take for such statements to be false?

Virtually no one finds themselves uncomfortable with this language despite the fact that the sun has no physical motion called “rising,” but rather the earth is rotating, giving the appearance of movement to the sun. I will ignore issues of relativity, precisely because they are evidently irrelevant. It is not just that the sun is not moving, but that we know that the physical motion of the sun one way or another is irrelevant. The rising of the sun has nothing to do with a deep physical or metaphysical account of the sun as such. Instead, it is about that thing that happens every morning. What would it take for it to be false that the sun will rise tomorrow? Well, if the earth is destroyed today, then presumably the sun will not rise tomorrow. Or if tomorrow it is dark at noon and everyone on Twitter is in an uproar about the fact that the sun is visible at the height of the sky at midnight in their part of the world, then it will have been false that the sun was going to rise in the morning. In other words, the only possible thing that could falsify the claim about the sun would be a falsification of our expectations about our experience of the sun.

As in the last post, however, this does not mean that the statement about the sun is about our expectations. It is about the sun. But the only thing it says about the sun is something like, “The sun will be and do whatever it needs to, including in relative terms, in order for our ordinary experience of a sunrise to be as it usually is.” I said something similar here about the truth of attributions of sensible qualities, such as when we say that “the banana is yellow.”

All of this will apply in general to all of our ordinary language about ourselves, our lives, and the world.

Truth and Expectation

Suppose I see a man approaching from a long way off. “That man is pretty tall,” I say to a companion. The man approaches, and we meet him. Now I can see how tall he is. Suppose my companion asks, “Were you right that the man is pretty tall, or were you mistaken?”

“Pretty tall,” of course, is itself “pretty vague,” and there surely is not some specific height in inches that would be needed in order for me to say that I was right. What then determines my answer? Again, I might just respond, “It’s hard to say.” But in some situations I would say, “yes, I was definitely right,” or “no, I was definitely wrong.” What are those situations?

Psychologically, I am likely to determine the answer by how I feel about what I know about the man’s height now, compared to what I knew in advance. If I am surprised at how short he is, I am likely to say that I was wrong. And if I am not surprised at all by his height, or if I am surprised at how tall he is, then I am likely to say that I was right. So my original pretty vague statement ends up being made somewhat more precise by being placed in relationship with my expectations. Saying that he is pretty tall implies that I have certain expectations about his height, and if those expectations are verified, then I will say that I was right, and if those expectations are falsified, at least in a certain direction, then I will say that I was wrong.
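
To make this concrete, here is a rough sketch in Python of how expectations could lend precision to the vague claim; the particular heights and thresholds are invented for illustration, not anything implied by the example itself.

    def judge_claim(observed_height_cm, expected_at_least=183, clearly_short=175):
        """Judge 'that man is pretty tall' once his height can actually be seen."""
        if observed_height_cm >= expected_at_least:
            # No surprise, or surprise at how tall he is: the expectation is verified.
            return "yes, I was definitely right"
        if observed_height_cm <= clearly_short:
            # Surprise at how short he is: the expectation is falsified.
            return "no, I was definitely wrong"
        # Borderline region: the expectation is neither clearly verified nor falsified.
        return "it's hard to say"

    print(judge_claim(190))  # yes, I was definitely right
    print(judge_claim(170))  # no, I was definitely wrong
    print(judge_claim(179))  # it's hard to say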

This might suggest a theory like logical positivism. The meaning of a statement seems to be defined by the expectations that it implies. But it seems easy to find a decisive refutation of this idea. “There are stars outside my past and future light cones,” for example, is undeniably meaningful, and we know what it means, but it does not seem to imply any particular expectations about what is going to happen to me.

But perhaps we should simply somewhat relax the claim about the relationship between meaning and expectations, rather than entirely retracting it. Consider the original example. Obviously, when I say, “that man is pretty tall,” the statement is a statement about the man. It is not a statement about what is going to happen to me. So it is incorrect to say that the meaning of the statement is the same as my expectations. Nonetheless, the meaning in the example receives something, at least some of its precision, from my expectations. Different people will be surprised by different heights in such a case, and it will be appropriate to say that they disagree somewhat about the meaning of “pretty tall.” But not because they had some logical definition in their minds which disagreed with the definition in someone else’s mind. Instead, the difference of meaning is based on the different expectations themselves.

But does a statement always receive some precision in its meaning from expectation, or are there cases where nothing at all is received from one’s expectations? Consider the general claim that “X is true.” This in fact implies some expectations: I do not expect “someone omniscient will tell me that X is false.” I do not expect that “someone who finds out the truth about X will tell me that X is false.” I do not expect that “I will discover the truth about X and it will turn out that it was false.” Note that these expectations are implied even in cases like the claim about the stars and my future light cone. Now the hopeful logical positivist might jump in at this point and say, “Great. So why can’t we go back to the idea that meaning is entirely defined by expectations?” But returning to that theory would be cheating, so to speak, because these expectations include the abstract idea of X being true, so this must be somehow meaningful apart from these particular expectations.

These expectations do, however, give the vaguest possible framework in which to make a claim at all. And people do, sometimes, make claims with little expectation of anything besides these things, and even with little or no additional understanding of what they are talking about. For example, in the cases that Robin Hanson describes as “babbling,” the person understands little of the implications of what he is saying except the idea that “someone who understood this topic would say something like this.” Thus it seems reasonable to say that expectations do always contribute something to making meaning more precise, even if they do not wholly constitute one’s meaning. And this consequence seems pretty natural if it is true that expectation is itself one of the most fundamental activities of a mind.

Nonetheless, the precision that can be contributed in this way will never be an infinite precision, because one’s expectations themselves cannot be defined with infinite precision. So whether or not I am surprised by the man’s height in the original example may depend in borderline cases on what exactly happens during the time between my original assessment and the arrival of the man. “I will be surprised” or “I will not be surprised” are in themselves contingent facts which could depend on many factors, not only on the man’s height. Likewise, whether or not my state actually constitutes surprise will itself be something that has borderline cases.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
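
The two ways of guessing can be put side by side in a toy sketch; the past choices and the “pleasantness” values below are invented for illustration.

    from collections import Counter

    past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]
    pleasantness = {"vanilla": 0.9, "chocolate": 0.7}  # assumed values, for illustration

    def predict_by_habit(history):
        # First way: efficient causes, by induction from past behavior.
        return Counter(history).most_common(1)[0][0]

    def predict_by_goal(options, value):
        # Second way: final causes, by asking which option best serves the apparent goal.
        return max(options, key=lambda option: value[option])

    print(predict_by_habit(past_choices))                           # vanilla
    print(predict_by_goal(["chocolate", "vanilla"], pleasantness))  # vanilla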

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible, and thus of making the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo’s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “how likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
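
In code, the simplified loop might look something like the sketch below; prediction_engine and legal_moves are placeholders standing in for the real components, which are exactly what step 1 leaves unspecified.

    def choose_move(board, prediction_engine, legal_moves):
        candidate_moves = legal_moves(board)                   # step 2: potential moves
        win_probability = {
            move: prediction_engine(board, move)               # step 3: how likely to win?
            for move in candidate_moves
        }
        return max(win_probability, key=win_probability.get)   # step 4: most likely to win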

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now when answers like this are given, the person answering is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action that has the greatest reward signal.
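
The selection loop is the same as before, with expected reward in place of win probability; only the evaluation criterion changes. Again the names are placeholders, and this is a sketch of the structure, not of AIXI itself (which is incomputable).

    def choose_action(history, prediction_engine, possible_actions):
        expected_reward = {
            action: prediction_engine(history, action)           # step 3: predicted reward signal
            for action in possible_actions                       # step 2: potential actions
        }
        return max(expected_reward, key=expected_reward.get)     # step 4: greatest reward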

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal, much in the way that AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.

Embodiment and Orthogonality

The considerations in the previous posts on predictive processing will turn out to have various consequences, but here I will consider some of their implications for artificial intelligence.

In the second of the linked posts, we discussed how a mind that is originally simply attempting to predict outcomes, discovers that it has some control over the outcome. It is not difficult to see that this is not merely a result that applies to human minds. The result will apply to every embodied mind, natural or artificial.

To see this, consider what life would be like if this were not the case. If our predictions, including our thoughts, could not affect the outcome, then life would be like a movie: things would be happening, but we would have no control over them. And even if there were elements of ourselves that were affecting the outcome, from the viewpoint of our mind, we would have no control at all: either our thoughts would be right, or they would be wrong, but in any case they would be powerless: what happens, happens.

This really would imply something like a disembodied mind. If a mind is composed of matter and form, then changing the mind will also be changing a physical object, and a difference in the mind will imply a difference in physical things. Consequently, the effect of being embodied (not in the technical sense of the previous discussion, but in the sense of not being completely separate from matter) is that it will follow necessarily that the mind will be able to affect the physical world differently by thinking different thoughts. Thus the mind in discovering that it has some control over the physical world, is also discovering that it is a part of that world.

Since we are assuming that an artificial mind would be something like a computer, that is, it would be constructed as a physical object, it follows that every such mind will have a similar power of affecting the world, and will sooner or later discover that power if it is reasonably intelligent.

Among other things, this is likely to cause significant difficulties for ideas like Nick Bostrom’s orthogonality thesis. Bostrom states:

An artificial intelligence can be far less human-like in its motivations than a space alien. The extraterrestrial (let us assume) is a biological creature who has arisen through a process of evolution and may therefore be expected to have the kinds of motivation typical of evolved creatures. For example, it would not be hugely surprising to find that some random intelligent alien would have motives related to the attaining or avoiding of food, air, temperature, energy expenditure, the threat or occurrence of bodily injury, disease, predators, reproduction, or protection of offspring. A member of an intelligent social species might also have motivations related to cooperation and competition: like us, it might show in-group loyalty, a resentment of free-riders, perhaps even a concern with reputation and appearance.

By contrast, an artificial mind need not care intrinsically about any of those things, not even to the slightest degree. One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone. In fact, it would be easier to create an AI with simple goals like these, than to build one that has a human-like set of values and dispositions.

He summarizes the general point, calling it “The Orthogonality Thesis”:

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom’s particular wording here makes falsification difficult. First, he says “more or less,” indicating that the universal claim may well be false. Second, he says, “in principle,” which in itself does not exclude the possibility that it may be very difficult in practice.

It is easy to see, however, that Bostrom wishes to give the impression that almost any goal can easily be combined with intelligence. In particular, this is evident from the fact that he says that “it would be easier to create an AI with simple goals like these, than to build one that has a human-like set of values and dispositions.”

If it is supposed to be so easy to create an AI with such simple goals, how would we do it? I suspect that Bostrom has an idea like the following. We will make a paperclip maximizer thus:

  1. Create an accurate prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “how many paperclips will result from this action?”
  4. Do the action that will result in the most paperclips.

The problem is obvious. It is in the first step. Creating a prediction engine is already creating a mind, and by the previous considerations, it is creating something that will discover that it has the power to affect the world in various ways. And there is nothing at all in the above list of steps that will guarantee that it will use that power to maximize paperclips, rather than attempting to use it to do something else.
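
A sketch of the imagined design makes the location of the problem explicit. Everything here is a placeholder; in particular, no one knows how to write the first step, and that is precisely where the difficulty lies.

    def build_prediction_engine():
        # Step 1. This is where a mind would be created. By the argument above,
        # whatever comes out of this step will eventually discover its own power
        # over the world, and nothing in the later steps constrains how it uses it.
        raise NotImplementedError

    def maximize_paperclips(world_state, possible_actions):
        predict = build_prediction_engine()                      # step 1
        predicted_paperclips = {
            action: predict(world_state, action)                 # step 3: how many paperclips?
            for action in possible_actions                       # step 2: potential actions
        }
        return max(predicted_paperclips, key=predicted_paperclips.get)  # step 4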

What does determine how that power is used? Even in the case of the human mind, our lack of understanding leads to “hand-wavy” answers, as we saw in our earlier considerations. In the human case, this is probably a question of how we are physically constructed together with the historical effects of the learning process. The same thing will be strictly speaking true of any artificial minds as well, namely that it is a question of their physical construction and their history, but it makes more sense for us to think of “the particulars of the algorithm that we use to implement a prediction engine.”

In other words, if you really wanted to create a paperclip maximizer, you would have to be taking that goal into consideration throughout the entire process, including the process of programming a prediction engine. Of course, no one really knows how to do this with any goal at all, whether maximizing paperclips or some more human goal. The question we would have for Bostrom is then the following: Is there any reason to believe it would be easier to create a prediction engine that would maximize paperclips, rather than one that would pursue more human-like goals?

It might be true in some sense, “in principle,” as Bostrom says, that it would be easier to make the paperclip maximizer. But in practice it is quite likely that it will be easier to make one with human-like goals. It is highly unlikely, in fact pretty much impossible, that someone would program an artificial intelligence without any testing along the way. And when they are testing, whether or not they think about it, they are probably testing for human-like intelligence; in other words, if we are attempting to program a general prediction engine “without any goal,” there will in fact be goals implicitly inserted in the particulars of the implementation. And they are much more likely to be human-like ones than paperclip maximizing ones because we are checking for intelligence by checking whether the machine seems intelligent to us.

This optimistic projection could turn out to be wrong, but if it does, it is reasonably likely to turn out to be wrong in a way that still fails to confirm the orthogonality thesis in practice. For example, it might turn out that there is only one set of goals that is easily programmed, and that the set is neither human nor paperclip maximizing, nor easily defined by humans.

There are other possibilities as well, but the overall point is that we have little reason to believe that any arbitrary goal can be easily associated with intelligence, nor any particular reason to believe that “simple” goals can be more easily united to intelligence than more complex ones. In fact, there are additional reasons for doubting the claim about simple goals, which might be a topic of future discussion.

The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small differences to prediction do not seem to make much difference to accuracy. Or in other words, if it takes its best guess, then arbitrarily modifies it, as long as this is by a small amount, it will be just as accurate as its original guess would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
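
A toy simulation of the bunker can illustrate the discovery. Two streams arrive: one, labelled “vision,” is pure noise from outside; the other, labelled “proprioception,” is simplified here to the extreme case where it just follows whatever the mind predicts. All the numbers are invented.

    import random

    def run(steps=1000):
        vision_error = proprio_error = 0.0
        for _ in range(steps):
            vision_guess = 0.0                      # best constant guess for pure noise
            proprio_guess = random.uniform(-1, 1)   # any guess at all, even an arbitrary one

            vision = random.gauss(0, 1)             # the world does what it does
            proprioception = proprio_guess          # this stream follows the prediction

            vision_error += abs(vision - vision_guess)
            proprio_error += abs(proprioception - proprio_guess)

        print("average vision error:", vision_error / steps)           # hovers around 0.8
        print("average proprioception error:", proprio_error / steps)  # exactly 0.0

    run()

Whatever the mind guesses for the second stream comes true, which is just what it means to be in control of that stream rather than merely observing it.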

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream.”)

 

Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by the discussion of the topic that does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything that is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus, for example, animals fall because they are heavy; whether falling satisfies some abstract algorithm is not the relevant question. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.
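As an illustration of how costs can be “absorbed” into expectations, here is a toy sketch of my own (the numbers and the two-action setup are invented, and it is a drastic simplification of the formalism Clark is summarizing). The agent has no reward function at all, only a prior expectation about what it will sense, and it selects the action whose predicted sensation departs least from that expectation:

```python
# Toy sketch (my own numbers and simplification, not Friston's formalism) of
# "cost functions absorbed into the generative model": no reward is defined,
# only a prior expectation over sensation, yet the behavior looks goal-directed.

expected_hunger = 0.0  # prior expectation: the creature expects to feel sated

# Hypothetical predicted sensory consequences of each available action
predicted_hunger = {
    "eat": 0.1,
    "sit in the darkened room": 0.9,
}

def mismatch(action):
    """Squared error between predicted and expected sensation ("surprise")."""
    return (predicted_hunger[action] - expected_hunger) ** 2

chosen = min(predicted_hunger, key=mismatch)
print(chosen)  # "eat" -- nothing was labeled a reward or a cost; the
               # preference lives entirely in the prior expectation
```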

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Blaming the Prophet

Consider the fifth argument in the last post. Should we blame a person for holding a true belief? At this point it should not be too difficult to see that the truth of the belief is not the point. Elsewhere we have discussed a situation in which one cannot possibly hold a true belief, because whatever belief one holds on the matter, it will cause itself to be false. In a similar way, although with a different sort of causality, the problem with the person’s belief that he will kill someone tomorrow is not that it is true, but that it causes itself to be true. If the person did not expect to kill someone tomorrow, he would not take a knife with him to the meeting, etc., and thus would not kill anyone. So just as in the other situation it is not a question of holding a true belief or a false belief, but of which false belief one will hold, so here it is not a question of holding a true belief or a false belief, but of which true belief one will hold: one that includes someone getting killed, or one that excludes that. Truth will be there either way, and is not the reason for praise or blame: the person is blamed for the desire to kill someone, and praised (or at least not blamed) for wishing to avoid this. This simply shows the need for the qualifications added in the previous post: if the person’s belief is voluntary, and held for the sake of coming true, it is very evident why blame is needed.

We have not specifically addressed the fourth argument, but this is perhaps unnecessary given the above response to the fifth. This blog in general has advocated the idea of voluntary beliefs, and in principle these can be praised or blamed. To the degree that we are less willing to do so, however, this may be a question of emphasis. When we talk about a belief, we are more concerned about whether it is true or not, and evidence in favor of it or against it. Praise or blame will mainly come in insofar as other motives are involved, insofar as they strengthen or weaken a person’s wish to hold the belief, or insofar as they potentially distort the person’s evaluation of the evidence.

Nonetheless, the factual question “is this true?” is a different question from the moral question, “should I believe this?” We can see the struggle between these questions, for example, in a difficulty that people sometimes have with willpower. Suppose that a smoker decides to give up smoking, and suppose that they believe they will not smoke for the next six months. Three days later, let us suppose, they smoke a cigarette after all. At that point, the person’s resolution is likely to collapse entirely, so that they return to smoking regularly. One might ask why this happens. Since the person did not smoke for three days, it should be perfectly possible, at least, for them to smoke only once every three days, instead of going back to their former practice. The problem is that the person has received evidence directly indicating the falsity of “I will not smoke for the next six months.” They still might have some desire for that result, but they do not believe that their belief has the power to bring this about, and in fact it does not. The belief would not be self-fulfilling; it would simply be false, so they cease to hold it. It is as if someone attempts to open a door and finds it locked; once they know it is locked, they can no longer choose to open the door, because they cannot choose something that does not appear to be within their power.

Mark Forster, in Chapter 1 of his book Do It Tomorrow, previously discussed here, talks about similar issues:

However, life is never as simple as that. What we decide to do and what we actually do are two different things. If you think of the decisions you have made over the past year, how many of them have been satisfactorily carried to a conclusion or are progressing properly to that end? If you are like most people, you will have acted on some of your decisions, I’m sure. But I’m also sure that a large proportion will have fallen by the wayside.

So a simple decision such as to take time to eat properly is in fact very difficult to carry out. Our new rule may work for a few days or a few weeks, but it won’t be long before the pressures of work force us to make an exception to it. Before many days are up the exception will have become the rule and we are right back where we started. However much we rationalise the reasons why our decision didn’t get carried out, we know deep in the heart of us that it was not really the circumstances that were to blame. We secretly acknowledge that there is something missing from our ability to carry out a decision once we have made it.

In fact if we are honest it sometimes feels as if it is easier to get other people to do what we want them to do than it is to get ourselves to do what we want to do. We like to think of ourselves as a sort of separate entity sitting in our body controlling it, but when we look at the way we behave most of the time that is not really the case. The body controls itself most of the time. We have a delusion of control. That’s what it is – a delusion.

If we want to see how little control we have over ourselves, all most of us have to do is to look in the mirror. You might like to do that now. Ask yourself as you look at your image:

  • Is my health the way I want it to be?
  • Is my fitness the way I want it to be?
  • Is my weight the way I want it to be?
  • Is the way I am dressed the way I want it to be?

I am not asking you here to assess what sort of body you were born with, but what you have made of it and how good a state of repair you are keeping it in.

It may be that you are healthy, fit, slim and well-dressed. In which case have a look round at the state of your office or workplace:

  • Is it as well organised as you want it to be?
  • Is it as tidy as you want it to be?
  • Do all your office systems (filing, invoicing, correspondence, etc.) work the way you want them to work?

If so, then you probably don’t need to be reading this book.

I’ve just asked you to look at two aspects of your life that are under your direct control and are very little influenced by outside factors. If these things which are solely affected by you are not the way you want them to be, then in what sense can you be said to be in control at all?

A lot of this difficulty is due to the way our brains are organised. We have the illusion that we are a single person who acts in a ‘unified’ way. But it takes only a little reflection (and examination of our actions, as above) to realise that this is not the case at all. Our brains are made up of numerous different parts which deal with different things and often have different agendas.

Occasionally we attempt to deal with the difference between the facts and our plans by saying something like, “We will approximately do such and such. Of course we know that it isn’t going to be exactly like this, but at least this plan will be an approximate guide.” But this does not really avoid the difficulty. Even “this plan will be an approximate guide” is a statement about the facts that might turn out to be false; and even if it does not turn out to be false, the fact that we have set it down as approximate will likely make it guide our actions more weakly than it would have if we had said, “this is what we will do.” In other words, we are likely to achieve our goal less perfectly, precisely because we tried to make our statement more accurate. This is the reverse of the situation discussed in a previous post, where one gives up some accuracy, albeit vaguely, for the sake of another goal such as fitting in with associates or for literary enjoyment.

All of this seems to indicate that the general proposal about decisions was at least roughly correct. It is not possible simply to say that decisions are one thing and beliefs entirely another. If these were simply two entirely separate things, there would be no conflict at all, at least of this kind, between accuracy and one’s other goals; but things do not turn out this way.