The Power of a Name

Fairy tales and other stories occasionally suggest the idea that a name gives some kind of power over the thing named, or at least that one’s problems concerning a thing may be solved by knowing its name, as in the story of Rumpelstiltskin. There is perhaps a similar suggestion in Revelation 2:7, “Whoever has ears, let them hear what the Spirit says to the churches. To the one who is victorious, I will give some of the hidden manna. I will also give that person a white stone with a new name written on it, known only to the one who receives it.” The secrecy of the new name may indicate (among other things) that others will have no power over that person.

There is more truth in this idea than one might assume without much thought. For example, anonymous authors do not want to be “doxxed” because knowing the name of the author really does give some power in relation to them which is not had without the knowledge of their name. Likewise, as a blogger, I occasionally want to cite something but cannot remember the name of the author or article where the statement was made. Even if I remember the content fairly clearly, lacking the name makes finding the content far more difficult, while on the other hand, knowing the name gives me the power to find the content much more easily.

But let us look a bit more deeply into this. Hilary Lawson, whose position was somewhat discussed here, has a discussion along these lines in Part II of his book, Closure: A Story of Everything. Since he denies that language truly refers to the world at all, as I mentioned in the linked post on his position, it is important to him that language has other effects, and in particular serves practical goals. He says in chapter 4:

In order to understand the mechanism of practical linguistic closure consider an example where a proficient speaker of English comes across a new word. Suppose that we are visiting a zoo with a friend. We stand outside a cage and our friend says: ‘An aasvogel.’ …

It might appear at first from this example that nothing has been added by the realisation of linguistic closure. The sound ‘aasvogel’ still sounds the same, the image of the bird still looks the same. So what has changed? The sensory closures on either side may not have changed, but a new closure has been realised. A new closure which is in addition to the prior available closures and which enables intervention which was not possible previously. For example, we now have a means of picking out this particular bird in the zoo because the meaning that has been realised will have identified a something in virtue of which this bird is an aasvogel and which thus enables us to distinguish it from others. As a result there will be many consequences for how we might be able to intervene.

The important point here is simply that naming something, even before taking any additional steps, immediately gives one the ability to do various practical things that one could not previously do. Helen Keller, in a passage previously quoted here, says:

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me.

We may have similar experiences as adults learning a foreign language while living abroad. At first one has very little ability to interact with the foreign world, but as one learns the names of things, suddenly everything becomes possible.

Or consider the situation of a hunter-gatherer who may not know how to count. It may be obvious to them that a bigger pile of fruit is better than a smaller one, but if two piles look similar, they may have no way to know which is better. But once they decide to give “one fruit and another” a name like “two,” and “two and one” a name like “three,” and so on, they suddenly obtain a great advantage that they previously did not possess. It is now possible to count piles and to discover that one pile has sixty-four while another has sixty-three. And it turns out that by treating the pile of “sixty-four” as bigger than the other pile, although it does not look bigger, they end up better off.

In this sense one could look at the scientific enterprise of looking for mathematical laws of nature as one long process of looking for better names. We can see that some things are faster and some things are slower, but the vague names “fast” and “slow” cannot accomplish much. Once we can name different speeds more precisely, we can put them all in order and accomplish much more, just as the hunter-gatherer can accomplish more after learning to count. And this extends to the full power of technology: the men who landed on the moon did so ultimately due to the power of names.

If you take Lawson’s view, that language does not refer to the world at all, all of this is basically casting magic spells. In fact, he spells this out himself, in so many words, in chapter 3:

All material is in this sense magical. It enables intervention that cannot be understood. Ancient magicians were those who had access to closures that others did not know, in the same way that the Pharaohs had access to closures not available to their subjects. This gave them a supernatural character. It is now thought that their magic has been explained, as the knowledge of herbs, metals or the weather. No such thing has taken place. More powerful closures have been realised, more powerful magic that can subsume the feeble closures of those magicians. We have simply lost sight of its magical character. Anthropology has many accounts of tribes who on being observed by a Western scientist believe that the observer has access to some very powerful magic. Magic that produces sound and images from boxes, and makes travel swift. We are inclined to smile patronisingly believing that we merely have knowledge — the technology behind radio and television, and motor vehicles — and not magic. The closures behind the technology do indeed provide us with knowledge and understanding and enable us to handle activity, but they do not explain how the closures enable intervention. How the closures are successful remains incomprehensible and in this sense is our magic.

I don’t think we should dismiss this point of view entirely, but I do think it is more mistaken than otherwise, basically because of the original mistake of thinking that language cannot refer to the world. But the point that names are extremely powerful is correct and important, to the point where even the analogy of technology as “magic that works” does make a certain amount of sense.

Tautologies Not Trivial

In mathematics and logic, one sometimes speaks of a “trivial truth” or “trivial theorem”, referring to a tautology. Thus for example in this Quora question, Daniil Kozhemiachenko gives this example:

The fact that all groups of order 2 are isomorphic to one another and commutative entails that there are no non-Abelian groups of order 2.

This statement is a tautology because “Abelian group” here just means one that is commutative: the statement is like the customary example of asserting that “all bachelors are unmarried.”
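
Since the statement is supposed to be true purely in virtue of the definitions, it is worth seeing how short the verification is. The following is a minimal sketch of the standard argument, written in LaTeX; nothing in it is specific to Kozhemiachenko’s answer:

    % A group G = {e, a} of order 2 is forced to be commutative:
    % if a*a = a, cancelling a gives a = e, a contradiction, so a*a = e,
    % and the entire multiplication table is thereby determined:
    \[
    e \cdot e = e, \qquad e \cdot a = a \cdot e = a, \qquad a \cdot a = e.
    \]
    % Every product is symmetric in its arguments, so the group is Abelian;
    % and since any group of order 2 has this same table, all such groups
    % are isomorphic to one another.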

Some extend this usage of “trivial” to refer to all statements that are true in virtue of the meaning of the terms, sometimes called “analytic.” The effect of this is to say that all statements that are logically necessary are trivial truths. An example of this usage can be seen in this paper by Carin Robinson. Robinson says at the end of the summary:

Firstly, I do not ask us to abandon any of the linguistic practises discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.

While the word “trivial” does have a corresponding Latin form that means ordinary or commonplace, the English word seems to be taken mainly from the “trivium” of grammar, rhetoric, and logic. This would seem to make some sense of calling logical necessities “trivial,” in the sense that they pertain to logic. Still, even here something is missing, since Robinson wants to include the truths of mathematics as trivial, and classically these did not pertain to the aforesaid trivium.

Nonetheless, overall Robinson’s intention, and presumably that of others who speak this way, is to suggest that such things are trivial in the English sense of “unimportant.” That is, they may be important tools, but they are not important for understanding. This is clear at least in our example: Robinson calls them trivial because “there are no known/knowable facts about logic.” Logical necessities tell us nothing about reality, and therefore they provide us with no knowledge. They are true by the meaning of the words, and therefore they cannot be true by reason of facts about reality.

Things that are logically necessary are not trivial in this sense. They are important, both in a practical way and directly for understanding the world.

Consider the failure of the Mars Climate Orbiter:

On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km (140 mi) on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 km (93 to 106 mi). Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Post-failure calculations showed that the spacecraft was on a trajectory that would have taken the orbiter within 57 kilometers of the surface, where the spacecraft likely skipped violently on the uppermost atmosphere and was either destroyed in the atmosphere or re-entered heliocentric space.[1]

The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton seconds – to update the predicted position of the spacecraft.

It is presumably an analytic truth that the units defined in one way are unequal to the units defined in the other. But it was ignoring this analytic truth that was the primary cause of the space probe’s failure. So it is evident that analytic truths can be extremely important for practical purposes.
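
The arithmetic of the mistake is simple enough to sketch in a few lines of Python. The numbers below are hypothetical (the real flight software was far more complicated); only the conversion factor is real:

    # Minimal sketch of the unit mismatch, with made-up numbers.
    # The only real constant: 1 pound-force second = 4.448222 newton seconds.
    LBF_S_PER_N_S = 4.448222

    def ground_software_impulse():
        """Reports total thruster impulse in pound-force seconds,
        contrary to the interface specification (hypothetical value)."""
        return 100.0

    def trajectory_update(impulse_n_s, mass_kg):
        """Expects newton seconds, per the specification; returns delta-v in m/s."""
        return impulse_n_s / mass_kg

    mass_kg = 638.0  # roughly the orbiter's mass
    reported = ground_software_impulse()

    used_dv = trajectory_update(reported, mass_kg)                  # what the software did
    true_dv = trajectory_update(reported * LBF_S_PER_N_S, mass_kg)  # what was meant

    print(used_dv, true_dv, true_dv / used_dv)  # effect understated by ~4.45x

Each thruster firing thus had about 4.45 times the effect that the navigation software attributed to it, and the discrepancy compounded over months of cruise.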

Such truths can also be important for understanding reality. In fact, they are typically more important for understanding than other truths. The argument against this is that if something is necessary in virtue of the meaning of the words, it cannot be telling us something about reality. But this argument is wrong for one simple reason: words and meaning themselves are both elements of reality, and so they do tell us something about reality, even when the truth is fully determinate given the meaning.

If one accepts the mistaken argument, in fact, sometimes one is led even further. Logically necessary truths cannot tell us anything important for understanding reality, since they are simply facts about the meaning of words. On the other hand, anything which is not logically necessary is in some sense accidental: it might have been otherwise. But accidental things that might have been otherwise cannot help us to understand reality in any deep way: it tells us nothing deep about reality to note that there is a tree outside my window at this moment, when this merely happens to be the case, and could easily have been otherwise. Therefore, since neither logically necessary things, nor logically contingent things, can help us to understand reality in any deep or important way, such understanding must be impossible.

It is fairly rare to make such an argument explicitly, but it is a common implication of many arguments that are actually made or suggested, or it at least influences the way people feel about arguments and understanding.  For example, consider this comment on an earlier post. Timocrates suggests that (1) if you have a first cause, it would have to be a brute fact, since it doesn’t have any other cause, and (2) describing reality can’t tell us any reasons but is “simply another description of how things are.” The suggestion behind these objections is that the very idea of understanding is incoherent. As I said there in response, it is true that every true statement is in some sense “just a description of how things are,” but that was what a true statement was meant to be in any case. It surely was not meant to be a description of how things are not.

That “analytic” or “tautologous” statements can indeed provide a non-trivial understanding of reality can also easily be seen by example. Some examples from this blog:

Good and being. The convertibility of being and goodness is “analytic,” in the sense that carefully thinking about the meaning of desire and the good reveals that a universe where existence as such was bad, or even failed to be good, is logically impossible. In particular, it would require a universe where there is no tendency to exist, and this is impossible given that it is posited that something exists.

Natural selection. One of the most important elements of Darwin’s theory of evolution is the following logically necessary statement: the things that have survived are more likely to be the things that were more likely to survive, and less likely to be the things that were less likely to survive.

Limits of discursive knowledge. Knowledge that uses distinct thoughts and concepts is necessarily limited by issues relating to self-reference. It is clear that this is both logically necessary, and tells us important things about our understanding and its limits.

Knowledge and being. Kant rightly recognized a sense in which it is logically impossible to “know things as they are in themselves,” as explained in this post. But as I said elsewhere, the logically impossible assertion that knowledge demands an identity between the mode of knowing and the mode of being is the basis for virtually every sort of philosophical error. So a grasp on the opposite “tautology” is extremely useful for understanding.


Perfectly Random

Suppose you have a string of random binary digits such as the following:

00111100010101001100011011001100110110010010100111

This string is 50 digits long, and was the result of a single attempt using the linked generator.

However, something seems distinctly non-random about it: there are exactly 25 zeros and exactly 25 ones. Naturally, this will not always happen, but most of the time the proportion of zeros will be fairly close to half. And evidently this is necessary, since if the proportion was usually much different from half, then the selection could not have been random in the first place.
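
How typical is the exact 25–25 split, and how typical is “fairly close to half”? A quick simulation (a sketch, not tied to the particular linked generator) makes the point:

    import random

    # Distribution of the number of ones in random 50-bit strings.
    random.seed(0)  # any seed; fixed only for reproducibility
    trials = 100_000
    ones_counts = [sum(random.getrandbits(1) for _ in range(50))
                   for _ in range(trials)]

    exactly_25 = sum(1 for c in ones_counts if c == 25) / trials
    within_5 = sum(1 for c in ones_counts if 20 <= c <= 30) / trials
    print(exactly_25)  # about 0.112, i.e. C(50,25) / 2^50
    print(within_5)    # about 0.88: "fairly close to half" is the norm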

There are other things about this string that are definitely not random. It contains only zeros and ones, and no other digits, much less items like letters from the alphabet, or items like ‘%’ and ‘$’.

Why do we have these apparently non-random characteristics? Both sorts, the approximate and typical proportion and the rigid restriction to zeros and ones, are necessary consequences of the way we obtained or defined this string.

It is easy to see that such characteristics are inevitable. Suppose someone wants to choose something random without any non-random characteristics. Let’s suppose they want to avoid the first sort of characteristic, which is perhaps the “easier” task. They can certainly make the proportion of zeros approximately 75% or anything else that they please. But this will still be a non-random characteristic.

They try again. Suppose they succeed in preventing the series of digits from converging to any specific probability. If they do, there is one and only one way to do this. Much as in our discussion of the mathematical laws of nature, the only way to accomplish this will be to go back and forth between longer and longer strings of zeros and ones. But this is an extremely non-random characteristic. So they may have succeeded in avoiding one particular type of non-randomness, but only at the cost of adding something else very non-random.
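
What that unique escape route looks like can be sketched concretely. With alternating blocks growing by a factor of three, the running frequency of ones swings between roughly 1/4 and 3/4 forever instead of converging (faster-growing blocks would swing it closer to 0 and 1):

    # Ever-longer alternating blocks of 0s and 1s: block k has length 3**k.
    # The running frequency of ones never converges, but the price is an
    # utterly regular, non-random structure.
    ones = total = 0
    bit = 0
    for k in range(14):
        block_len = 3 ** k
        ones += bit * block_len
        total += block_len
        print(k, bit, round(ones / total, 3))  # swings toward 3/4, then 1/4, ...
        bit = 1 - bit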

Again, consider the second kind of characteristic. Here things are even clearer: the only way to avoid the second kind of characteristic is not to attempt any task in the first place. The only way to win is not to play. Once we have said “your task is to do such and such,” we have already specified some non-random characteristics of the second kind; to avoid such characteristics is to avoid the task completely.

“Completely random,” in fact, is an incoherent idea. No such thing can exist anywhere, in the same way that “formless matter” cannot actually exist, but all matter is formed in one way or another.

The same thing applies to David Hume’s supposed problem of induction. I ended that post with the remark that for his argument to work, he must be “absolutely certain that the future will resemble the past in no way.” But this of course is impossible in the first place; the past and the future are both defined as periods of time, and so there is some resemblance in their very definition, in the same way that any material thing must have some form in its definition, and any “random” thing must have something non-random in its definition.


Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting:

I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

Robin Hanson disagreed, responding with this post:

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

  1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
  2. Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
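
Hanson’s arithmetic checks out, as a couple of lines of Python confirm (the 1/1000 chance only shaves three orders of magnitude off roughly a hundred):

    from math import log10

    growth = 1.02 ** 12_000             # 2% annual return over 12,000 years
    print(round(log10(growth)))         # ~103: more than a googol (10^100)
    print(round(log10(growth / 1000)))  # still ~100 after a 1/1000 chance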

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view. Hanson is right, and Yudkowsky is wrong, for a reason still more idealistic than Yudkowsky’s. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now etc., such that your mutually exclusive options include the possibility of saving 10 lives x years from now for all x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem does not arise from this initial assignment, but it arises when we consider what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now,” “you will choose 10 years from now,” “you will choose 100 years from now,” are all assigned probabilities, they cannot all be assigned equal probabilities; you must be more likely to choose the options less distant in time, in general and overall. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
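
The underlying mathematical point can be stated briefly: no probability assignment over infinitely many mutually exclusive options can be uniform, and its tails must vanish. In LaTeX:

    % If each option had the same probability c, then c = 0 would give total
    % probability 0, while c > 0 would give a divergent sum; neither is 1.
    % So the p_n must sum to 1 and their tails must vanish:
    \[
    \sum_{n=1}^{\infty} p_n = 1
    \quad\Longrightarrow\quad
    \text{for every } \epsilon > 0 \text{ there is an } N
    \text{ with } \sum_{n > N} p_n < \epsilon .
    \]
    % Taking epsilon = 0.0001 yields the N of the text: a 99.99% chance of
    % choosing fewer than N years, and a 0.01% chance of N or more.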

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that hypotheses do not change their complexity, but time passes, making things distant in time become things nearer in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.


Spooky Action at a Distance

Albert Einstein objected to the usual interpretations of quantum mechanics because they seemed to him to imply “spooky action at a distance,” a phrase taken from a letter from Einstein to Max Born in 1947 (page 155 in this book):

I cannot make a case for my attitude in physics which you would consider at all reasonable. I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance. I am, however, not yet firmly convinced that it can really be achieved with a continuous field theory, although I have discovered a possible way of doing this which so far seems quite reasonable. The calculation difficulties are so great that I will be biting the dust long before I myself can be fully convinced of it. But I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently. I cannot, however, base this conviction on logical reasons, but can only produce my little finger as witness, that is, I offer no authority which would be able to command any kind of respect outside of my own hand.

Einstein has two objections: the theory seems to be indeterministic, and it also seems to imply action at a distance. He finds both of these implausible. He thinks physics should be deterministic, “as used to be taken for granted until quite recently,” and that all interactions should be local: things directly affect only things which are close by, and affect distant things only indirectly.

In many ways, things do not appear to have gone well for Einstein’s intuitions. John Bell constructed a mathematical argument, now known as Bell’s Theorem, that the predictions of quantum mechanics cannot be reproduced by the kind of theory desired by Einstein. Bell summarizes his point:

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no “hidden variable” interpretation of quantum mechanics is possible. These attempts have been examined elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.

“Causality and locality” in this description are exactly the two points where Einstein objected in the quoted letter: causality, as understood here, implies determinism, and locality implies no spooky action at a distance. Given this result, Einstein might have hoped that the predictions of quantum mechanics would turn out to fail, so that he could still have his desired physics. This did not happen. On the contrary, these predictions (precisely those inconsistent with such theories) have been verified time and time again.

Rather than putting the reader through Bell’s math and physics, we will explain his result with an analogy by Mark Alford. Alford makes this comparison:

Imagine that someone has told us that twins have special powers, including the ability to communicate with each other using telepathic influences that are “superluminal” (faster than light). We decide to test this by collecting many pairs of twins, separating each pair, and asking each twin one question to see if their answers agree.

To make things simple we will only have three possible questions, and they will be Yes/No questions. We will tell the twins in advance what the questions are.

The procedure is as follows.

  1. A new pair of twins is brought in and told what the three possible questions are.
  2. The twins travel far apart in space to separate questioning locations.
  3. At each location there is a questioner who selects one of the three questions at random, and poses that question to the twin in front of her.
  4. Spacelike separation. When the question is chosen and asked at one location, there is not enough time for any influence traveling at the speed of light to get from there to the other location in time to affect either what question is chosen there, or the answer given.

He now supposes the twins give the same responses when they are asked the same question, and discusses this situation:

Now, suppose we perform this experiment and we find same-question agreement: whenever a pair of spacelike-separated twins both happen to get asked the same question, their answers always agree. How could they do this? There are two possible explanations,

1. Each pair of twins uses superluminal telepathic communication to make sure both twins give the same answer.

2. Each pair of twins follows a plan. Before they were separated they agreed in advance what their answers to the three questions would be.

The same-question agreement that we observe does not prove that twins can communicate telepathically faster than light. If we believe that strong locality is a valid principle, then we can resort to the other explanation, that each pair of twins is following a plan. The crucial point is that this requires determinism. If there were any indeterministic evolution while the twins were spacelike separated, strong locality requires that the random component of one twin’s evolution would have to be uncorrelated with the other twin’s evolution. Such uncorrelated indeterminism would cause their recollections of the plan to diverge, and they would not always show same-question agreement.

The results are understandable if the twins agree on the answers Yes-Yes-Yes, or Yes-No-Yes, or any other determinate combination. But they are not understandable if they decide to flip coins if they are asked the second question, for example. If they did this, they would have to disagree 50% of the time on that question, unless one of the coin flips affected the other.

Alford goes on to discuss what happens when the twins are asked different questions:

In the thought experiment as described up to this point we only looked at the recorded answers in cases where each twin in a given pair was asked the same question. There are also recorded data on what happens when the two questioners happen to choose different questions. Bell noticed that this data can be used as a cross-check on our strong-locality-saving idea that the twins are following a pre-agreed plan that determines that their answers will always agree. The cross-check takes the form of an inequality:

Bell inequality for twins:

If a pair of twins is following a plan then, when each twin is asked a different randomly chosen question, their answers will be the same, on average, at least 1/3 of the time.

He derives this value:

For each pair of twins, there are four general types of pre-agreed plan they could adopt when they are arranging how they will both give the same answer to each of the three possible questions.

(a) a plan in which all three answers are Yes;

(b) a plan in which there are two Yes and one No;

(c) a plan in which there are two No and one Yes;

(d) a plan in which all three answers are No.

If, as strong locality and same-question agreement imply, both twins in a given pair follow a shared predefined plan, then when the random questioning leads to each of them being asked a different question from the set of three possible questions, how often will their answers happen to be the same (both Yes or both No)? If the plan is of type (a) or (d), both answers will always be the same. If the plan is of type (b) or (c), both answers will be the same 1/3 of the time. We conclude that no matter what type of plan each pair of twins may follow, the mere fact that they are following a plan implies that, when each of them is asked a different randomly chosen question, they will both give the same answer (which might be Yes or No) at least 1/3 of the time. It is important to appreciate that one needs data from many pairs of twins to see this effect, and that the inequality holds even if each pair of twins freely chooses any plan they like.
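
Alford’s counting is easy to verify by brute force. A sketch enumerating all eight possible plans:

    from itertools import product

    # The six equally likely ordered pairs of *different* questions.
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

    for plan in product("YN", repeat=3):
        # Chance that both twins, following this plan, give the same answer
        # when asked different randomly chosen questions.
        agree = sum(plan[i] == plan[j] for i, j in pairs) / len(pairs)
        print("".join(plan), agree)
    # YYY and NNN agree every time; the other six plans agree exactly 1/3
    # of the time. Any mixture of plans therefore agrees at least 1/3 of
    # the time: the Bell inequality for twins.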

The “Bell inequality” is violated if we do the experimental test and the twins end up agreeing, when they are asked different questions, less than 1/3 of the time, despite consistently agreeing when they are asked the same question. If one saw such results in reality, one might be forgiven for concluding that the twins do have superluminal telepathic abilities. Unfortunately for Einstein, this is what we do get, consistently, when we test the analogous quantum mechanical version of the experiment.

Self Reference Paradox Summarized

Hilary Lawson is right to connect the issue of the completeness and consistency of truth with paradoxes of self-reference.

As a kind of summary, consider this story:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:
etc.

In this form, the story obviously exists, but in its implied form, the story cannot be told, because for the story to be “told” is for it to be completed, and it is impossible for it to be completed, since it will not be complete until it contains itself, and this cannot happen.

Consider a similar example. You sit in a room at a desk, and decide to draw a picture of the room. You draw the walls. Then you draw yourself and your desk. But then you realize, “there is also a picture in the room. I need to draw the picture.” You draw the picture itself as a tiny image within the image of your desktop, and add tiny details: the walls of the room, your desk and yourself.

Of course, you then realize that your artwork can never be complete, in exactly the same way that the story above cannot be complete.

There is essentially the same problem in these situations as in all the situations we have described which involve self-reference: the paradox of the liar, the liar game, the impossibility of detailed future prediction, the list of all true statements, Gödel’s theorem, and so on.

In two of the above posts, namely on future prediction and Gödel’s theorem, there are discussions of James Chastek’s attempts to use the issue of self-reference to prove that the human mind is not a “mechanism.” I noted in those places that such supposed proofs fail, and at this point it is easy to see that they will fail in general, if they depend on such reasoning. What is possible or impossible here has nothing to do with such things, and everything to do with self-reference. You cannot have a mirror and a camera so perfect that you can get an actually infinite series of images by taking a picture of the mirror with the camera, but there is nothing about such a situation that could not be captured by an image outside the situation, just as a man outside the room could draw everything in the room, including the picture and its details. This does not show that a man outside the room has a superior drawing ability compared with the man in the room. The ability of someone else to say whether the third statement in the liar game is true or false does not prove that the other person does not have a “merely human” mind (analogous to a mere mechanism), despite the fact that you yourself cannot say whether it is true or false.

There is a grain of truth in Chastek’s argument, however. It does follow that if someone says that reality as a whole is a formal system, and adds that we can know what that system is, their position is absurd: if we knew such a system, we could derive a specific arithmetical truth, one that we could state in detail, which would be unprovable from the system, that is, from reality, but nonetheless proved true by us. And this is logically impossible, since we are a part of reality.

At this point one might be tempted to say, “At this point we have fully understood the situation. So all of these paradoxes and so on don’t prevent us from understanding reality perfectly, even if that was the original appearance.”

But this is similar to one of two things.

First, a man can stand outside the room and draw a picture of everything in it, including the picture, and say, “Behold. A picture of the room and everything in it.” Yes, as long as you are not in the room. But if the room is all of reality, you cannot get outside it, and so you cannot draw such a picture.

Second, the man in the room can draw the room, the desk and himself, and draw a smudge on the center of the picture of the desk, and say, “Behold. A smudged drawing of the room and everything in it, including the drawing.” But one only imagines a picture of the drawing underneath the smudge: there is actually no such drawing in the picture of the room, nor can there be.

In the same way, we can fully understand some local situation, from outside that situation, or we can have a smudged understanding of the whole situation, but there cannot be any detailed understanding of the whole situation underneath the smudge.

I noted that I disagreed with Lawson’s attempt to resolve the question of truth. I did not go into detail, and I will not, since the book is very long and an adequate discussion would be much longer than I am willing to attempt, at least at this time; but I will give some general remarks. He sees, correctly, that there are problems both with saying that “truth exists” and that “truth does not exist,” taken according to the usual concept of truth, but in the end his position amounts to saying that the denial of truth is truer than the affirmation of truth. This seems absurd, and it is, but not quite so absurd as it appears, because he does recognize the incoherence and makes an attempt to get around it. The way of thinking is something like this: we need to avoid the concept of truth. But this means we also need to avoid the concept of asserting something, because if you assert something, you are saying that it is true. So he needs to say, “assertion does not exist,” but without asserting it. Consequently he comes up with the concept of “closure,” which is meant to replace the concept of assertion, and he “asserts” things in the new sense. This sense is not intended to assert anything at all in the usual sense. In fact, he concludes that language does not refer to the world at all.

Apart from the evident absurdity, exacerbated by my own realist description of his position, we can see from the general account of self-reference why this is the wrong answer. The man in the room might start out wanting to draw a picture of the room and everything in it, and then come to realize that this project is impossible, at least for someone in his situation. But suppose he concludes: “After all, there is no such thing as a picture. I thought pictures were possible, but they are not. There are just marks on paper.” The conclusion is obviously wrong. The fact that pictures are things themselves does prevent pictures from being exhaustive pictures of themselves, but it does not prevent them from being pictures in general. And in the same way, the fact that we are part of reality prevents us from having an exhaustive understanding of reality, but it does not prevent us from understanding in general.

There is one last temptation in addition to the two ways discussed above of saying that there can be an exhaustive drawing of the room and the picture. The room itself and everything in it, someone might say, is itself an exhaustive representation of itself and everything in it. Apart from being an abuse of the word “representation,” I think this is delusional, but this is a story for another time.

Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
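
The quoted figure itself is easy to check:

    import math

    perms = math.factorial(52)   # orderings of a 52-card deck
    print(f"{perms:.7e}")        # 8.0658175e+67, the story's number
    print(math.log10(perms))     # ~67.9, the "10^67" scale used above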

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say; it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.