The Power of a Name

Fairy tales and other stories occasionally suggest the idea that a name gives some kind of power over the thing named, or at least that one’s problems concerning a thing may be solved by knowing its name, as in the story of Rumpelstiltskin. There is perhaps a similar suggestion in Revelation 2:7, “Whoever has ears, let them hear what the Spirit says to the churches. To the one who is victorious, I will give some of the hidden manna. I will also give that person a white stone with a new name written on it, known only to the one who receives it.” The secrecy of the new name may indicate (among other things) that others will have no power over that person.

There is more truth in this idea than one might assume without much thought. For example, anonymous authors do not want to be “doxxed” because knowing an author’s name really does give others a power over them which they do not have without it. Likewise, as a blogger, I occasionally want to cite something but cannot remember the name of the author or of the article where the statement is made. Even if I remember the content fairly clearly, lacking the name makes finding the content far more difficult, while on the other hand, knowing the name gives me the power of finding the content much more easily.

But let us look a bit more deeply into this. Hilary Lawson, whose position was somewhat discussed here, has a discussion along these lines in Part II of his book, Closure: A Story of Everything. Since he denies that language truly refers to the world at all, as I mentioned in the linked post on his position, it is important to him that language has other effects, and in particular has practical goals. He says in chapter 4:

In order to understand the mechanism of practical linguistic closure consider an example where a proficient speaker of English comes across a new word. Suppose that we are visiting a zoo with a friend. We stand outside a cage and our friend says: ‘An aasvogel.’ …

It might appear at first from this example that nothing has been added by the realisation of linguistic closure. The sound ‘aasvogel’ still sounds the same, the image of the bird still looks the same. So what has changed? The sensory closures on either side may not have changed, but a new closure has been realised. A new closure which is in addition to the prior available closures and which enables intervention which was not possible previously. For example, we now have a means of picking out this particular bird in the zoo because the meaning that has been realised will have identified a something in virtue of which this bird is an aasvogel and which thus enables us to distinguish it from others. As a result there will be many consequences for how we might be able to intervene.

The important point here is simply that naming something, even before taking any additional steps, immediately gives one the ability to do various practical things that one could not previously do. In a passage by Helen Keller, previously quoted here, she says:

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me.

We may have similar experiences as adults learning a foreign language while living abroad. At first one has very little ability to interact with the foreign world, but as one learns the names of things, suddenly a great deal becomes possible.

Or consider the situation of a hunter gatherer who may not know how to count. It may be obvious to them that a bigger pile of fruit is better than a smaller one, but if two piles look similar, they may have no way to know which is better. But once they decide to give “one fruit and another” a name like “two,” and “two and one” a name like “three,” and so on, suddenly they obtain a great advantage that they previously did not possess. It is now possible to count piles and to discover that one pile has sixty-four while another has sixty-three. And it turns out that by treating the “sixty-four” as bigger than the other pile, although it does not look bigger, they end up better off.
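
The advantage described above can be made concrete in a small sketch (the piles and names here are purely illustrative): two piles that look alike become distinguishable once each has a numeric name that can be compared directly.

```python
# Two piles of fruit that look about the same size to the eye.
pile_a = ["fruit"] * 64
pile_b = ["fruit"] * 63

# Without counting, "bigger" can only be judged by appearance.
# With names like "sixty-four" and "sixty-three", each pile gets a
# definite name, and the names themselves can be compared.
count_a = len(pile_a)
count_b = len(pile_b)

bigger = "pile A" if count_a > count_b else "pile B"
print(bigger)  # prints "pile A"
```

The comparison happens entirely between the names (the counts), not between the appearances, which is exactly the power the hunter gatherer gains.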

In this sense one could look at the scientific enterprise of looking for mathematical laws of nature as one long process of looking for better names. We can see that some things are faster and some things are slower, but the vague names “fast” and “slow” cannot accomplish much. Once we can name different speeds more precisely, we can put them all in order and accomplish much more, just as the hunter gatherer can accomplish more after learning to count. And this extends to the full power of technology: the men who landed on the moon did so ultimately due to the power of names.

If you take Lawson’s view, that language does not refer to the world at all, all of this is basically casting magic spells. In fact, he spells this out himself, in so many words, in chapter 3:

All material is in this sense magical. It enables intervention that cannot be understood. Ancient magicians were those who had access to closures that others did not know, in the same way that the Pharaohs had access to closures not available to their subjects. This gave them a supernatural character. It is now thought that their magic has been explained, as the knowledge of herbs, metals or the weather. No such thing has taken place. More powerful closures have been realised, more powerful magic that can subsume the feeble closures of those magicians. We have simply lost sight of its magical character. Anthropology has many accounts of tribes who on being observed by a Western scientist believe that the observer has access to some very powerful magic. Magic that produces sound and images from boxes, and makes travel swift. We are inclined to smile patronisingly believing that we merely have knowledge — the technology behind radio and television, and motor vehicles — and not magic. The closures behind the technology do indeed provide us with knowledge and understanding and enable us to handle activity, but they do not explain how the closures enable intervention. How the closures are successful remains incomprehensible and in this sense is our magic.

I don’t think we should dismiss this point of view entirely, but I do think it is more mistaken than otherwise, basically because of the original mistake of thinking that language cannot refer to the world. But the point that names are extremely powerful is correct and important, to the point where even the analogy of technology as “magic that works” does make a certain amount of sense.

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo’s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “How likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
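
The four steps above can be sketched as follows. The prediction engine here is a hypothetical placeholder (`predict_win_probability` is invented for illustration); the real AlphaGo combines several learned networks with tree search.

```python
def predict_win_probability(board, move):
    """Hypothetical stand-in for a trained Go prediction engine.

    For illustration only: pretend moves nearer the center of a
    19x19 board score higher.
    """
    x, y = move
    return 1.0 / (1.0 + abs(x - 9) + abs(y - 9))

def choose_move(board, legal_moves):
    # Steps 2-4: enumerate candidate moves, query the engine for each,
    # and play the move with the highest predicted win probability.
    return max(legal_moves, key=lambda m: predict_win_probability(board, m))

moves = [(0, 0), (9, 9), (3, 15)]
print(choose_move(None, moves))  # (9, 9) under this toy scoring
```

Note that the engine is only ever asked questions; the "goal" of winning lives entirely in the final `max` step, not in the engine itself.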

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now when answers like this are given, the one answering is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action that has the greatest reward signal.
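
The parallel with the AlphaGo sketch is exact: only the question asked of the engine changes. `predict_reward` below is a hypothetical placeholder (AIXI's actual universal predictor is incomputable), and the toy reward table is invented for illustration.

```python
def predict_reward(history, action):
    """Hypothetical stand-in for AIXI's universal predictor.

    For illustration only: score actions from a fixed toy table.
    """
    toy_rewards = {"press_lever": 3.0, "wait": 1.0, "explore": 2.0}
    return toy_rewards.get(action, 0.0)

def choose_action(history, actions):
    # Steps 2-4: enumerate actions, query the predictor for each,
    # and take the action with the greatest predicted reward signal.
    return max(actions, key=lambda a: predict_reward(history, a))

print(choose_action([], ["wait", "explore", "press_lever"]))  # press_lever
```

As with the Go case, the predictor itself pursues nothing; the goal is bolted on in the selection step.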

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal much in the way that AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.

The Self and Disembodied Predictive Processing

While I criticized his claim overall, there is some truth in Scott Alexander’s remark that “the predictive processing model isn’t really a natural match for embodiment theory.” The theory of “embodiment” refers to the idea that a thing’s matter contributes in particular ways to its functioning; it cannot be explained by its form alone. As I said in the previous post, the human mind is certainly embodied in this sense. Nonetheless, the idea of predictive processing can suggest something somewhat disembodied. We can imagine the following picture of Andy Clark’s view:

Imagine the human mind as a person in an underground bunker. There is a bank of labelled computer screens on one wall, which portray incoming sensations. On another computer, the person analyzes the incoming data and records his predictions for what is to come, along with the equations or other things which represent his best guesses about the rules guiding incoming sensations.

As time goes on, his predictions are sometimes correct and sometimes incorrect, and so he refines his equations and his predictions to make them more accurate.

As in the previous post, we have here a “barren landscape.” The person in the bunker originally isn’t trying to control anything or to reach any particular outcome; he is just guessing what is going to appear on the screens. This idea also appears somewhat “disembodied”: what the mind is doing down in its bunker does not seem to have much to do with the body and the processes by which it is obtaining sensations.

At some point, however, the mind notices a particular difference between some of the incoming streams of sensation and the rest. The typical screen works like the one labelled “vision.” And there is a problem here. While the mind is pretty good at predicting what comes next there, things frequently come up which it did not predict. No matter how much it improves its rules and equations, it simply cannot entirely overcome this problem. The stream is just too unpredictable for that.

On the other hand, one stream labelled “proprioception” seems to work a bit differently. At any rate, extreme unpredicted events turn out to be much rarer. Additionally, the mind notices something particularly interesting: small differences to prediction do not seem to make much difference to accuracy. Or in other words, if it takes its best guess, then arbitrarily modifies it, as long as this is by a small amount, it will be just as accurate as its original guess would have been.

And thus if it modifies it repeatedly in this way, it can get any outcome it “wants.” Or in other words, the mind has learned that it is in control of one of the incoming streams, and not merely observing it.
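
The discovery described above can be put in a toy sketch. Everything here is illustrative and not part of any real predictive-processing implementation: the "proprioception" stream simply echoes the prediction back, so by repeatedly nudging its prediction a small amount, the mind steers the stream to any target without its guesses ever being wrong.

```python
def proprioception_stream(prediction):
    # The controllable stream: the incoming value follows whatever
    # the mind predicts, so prediction error stays (near) zero.
    return prediction

def steer(current, target, step=0.1):
    """Repeatedly shift the prediction a small amount toward a target."""
    prediction = current
    while abs(prediction - target) > step:
        # Nudge the prediction slightly toward the "wanted" outcome...
        prediction += step if target > prediction else -step
        # ...and the stream follows, so the guess remains accurate.
        assert proprioception_stream(prediction) == prediction
    return prediction

print(round(steer(0.0, 1.0), 2))
```

The same loop applied to the "vision" stream would fail, because that stream does not follow the prediction; this asymmetry is exactly what lets the mind conclude that one stream is controlled rather than merely observed.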

This seems to suggest something particular. We do not have any innate knowledge that we are things in the world and that we can affect the world; this is something learned. In this sense, the idea of the self is one that we learn from experience, like the ideas of other things. I pointed out elsewhere that Descartes is mistaken to think the knowledge of thinking is primary. In a similar way, knowledge of self is not primary, but reflective.

Helen Keller writes in The World I Live In (XI):

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory.

When I wanted anything I liked, ice cream, for instance, of which I was very fond, I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I “thought” and desired in my fingers.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of “I” and “me” and found that I was something, I began to think. Then consciousness first existed for me.

Helen Keller’s experience is related to the idea of language as a kind of technology of thought. But the main point is that she is quite literally correct in saying that she did not know that she existed. This does not mean that she had the thought, “I do not exist,” but rather that she had no conscious thought about the self at all. Of course she speaks of feeling desire, but that is precisely as a feeling. Desire for ice cream is what is there (not “what I feel,” but “what is”) before the taste of ice cream arrives (not “before I taste ice cream.”)