Fire, Water, and Numbers

Fire vs. Water

“All things are water,” says Thales.

“All things are fire,” says Heraclitus.

“Wait,” says David Hume’s Philo. “You both agree that all things are made up of one substance. Thales, you prefer to call it water, and Heraclitus, you prefer to call it fire. But isn’t that merely a verbal dispute? According to both of you, whatever you point at is fundamentally the same stuff. So whether you point at water or fire, or anything else, for that matter, you are always pointing at the same fundamental stuff. Where is the real disagreement?”

Philo has a somewhat valid point here, and I mentioned the same thing in the linked post referring to Thales. Nonetheless, as I also said in that post, as well as in the discussion of the disagreement about God, while there is some common ground, there are also likely remaining points of disagreement. It might depend on context, and perhaps the disagreement is more about the best way of thinking about things than about the things themselves, somewhat like discussing whether the earth or the universe is the thing spinning. But Heraclitus could respond, for example, that thinking of the fundamental stuff as fire is more apt because fire is constantly changing, while water often appears to be completely still, and (Heraclitus claims) everything is in fact constantly changing. This could represent a real disagreement, but it is not a large one, and Thales could simply respond: “OK, everything is flowing water. Problem fixed.”

Numbers

It is said that Pythagoras and his followers held that “all things are numbers.” To what degree and in what sense this attribution is accurate is unclear, but in any case, some people hold this very position today, even if they would not call themselves Pythagoreans. Thus for example in a recent episode of Sean Carroll’s podcast, Carroll speaks with Max Tegmark, who seems to adopt this position:

0:23:37 MT: It’s squishy a little bit blue and moose like. [laughter] Those properties, I just described don’t sound very mathematical at all. But when we look at it, Sean through our physics eyes, we see that it’s actually a blob of quarks and electrons. And what properties does an electron have? It has the property, minus one, one half, one, and so on. We, physicists have made up these nerdy names for these properties like electric charge, spin, lepton number. But it’s just we humans who invented that language of calling them that, they are really just numbers. And you know as well as I do that the only difference between an electron and a top quark is what numbers its properties are. We have not discovered any other properties that they actually have. So that’s the stuff in space, all the different particles, in the Standard Model, you’ve written so much nice stuff about in your books are all described by just by sets of numbers. What about the space that they’re in? What property does the space have? I think I actually have your old nerdy non-popular, right?

0:24:50 SC: My unpopular book, yes.

0:24:52 MT: Space has, for example, the property three, that’s a number and we have a nerdy name for that too. We call it the dimensionality of space. It’s the maximum number of fingers I can put in space that are all perpendicular to each other. The name dimensionality is just the human language thing, the property is three. We also discovered that it has some other properties, like curvature and topology that Einstein was interested in. But those are all mathematical properties too. And as far as we know today in physics, we have never discovered any properties of either space or the stuff in space yet that are actually non-mathematical. And then it starts to feel a little bit less insane that maybe we are living in a mathematical object. It’s not so different from if you were a character living in a video game. And you started to analyze how your world worked. You would secretly be discovering just the mathematical workings of the code, right?

Tegmark presumably would believe that by saying that things “are really just numbers,” he would disagree with Thales and Heraclitus about the nature of things. But does he? Philo might well be skeptical that there is any meaningful disagreement here, just as between Thales and Heraclitus. As soon as you begin to say, “all things are this particular kind of thing,” the same issues will arise to hinder your disagreement with others who characterize things in a different way.

The discussion might be clearer if I put my cards on the table in advance:

First, there is some validity to the objection, just as there is to the objection concerning the difference between Thales and Heraclitus.

Second, there is nonetheless some residual disagreement, and on that basis it turns out that Tegmark and Pythagoras are more correct than Thales and Heraclitus.

Third, Tegmark most likely does not understand the sense in which he might be correct, rather supposing himself correct the way Thales might suppose himself correct in insisting, “No, things are really not fire, they are really water.”

Mathematical and non-mathematical properties

As an approach to these issues, consider the statement by Tegmark, “We have never discovered any properties of either space or the stuff in space yet that are actually non-mathematical.”

What would it look like if we found a property that was “actually non-mathematical?” Well, what about the property of being blue? As Tegmark remarks, that does not sound very mathematical. But it turns out that color is a certain property of a surface regarding how it reflects light, and this is much more of a “mathematical” property, at least in the sense that we can give it a mathematical description, which we would have a hard time doing if we simply took the word “blue.”

So presumably we would find a non-mathematical property by seeing some property of things, then investigating it, and then concluding, “We have fully investigated this property and there is no mathematical description of it.” This did not happen with the color blue, nor has it yet happened with any other property; either we can say that we have not yet fully investigated it, or we can give some sort of mathematical description.

Tegmark appears to take the above situation to be surprising. Wow, we might have found reality to be non-mathematical, but it actually turns out to be entirely mathematical! I suggest something different. As hinted at by the connection with the linked post, things could not have turned out differently. A sufficiently detailed analysis of anything will be a mathematical analysis, or something very like it. But this is not because things “are actually just numbers,” as though this were some deep discovery about the essence of things, but because of what it is for people to engage in “a detailed analysis” of anything.

Suppose you want to investigate some thing or some property. The first thing you need to do is to distinguish it from other things or other properties. The color blue is not the color red, the color yellow, or the color green.

Numbers are involved right here at the very first step. There are at least three colors, namely red, yellow, and blue.

Of course we can find more colors, but what if there seems to be no definite number of them, and we can always find more? Even in this situation, in order to “analyze” them, we need some way of distinguishing and comparing them. We will put them in some sort of order: one color is brighter than another, or one length is greater than another, or one sound is higher pitched than another.

As soon as you find some ordering of that sort (brightness, or greatness of length, or pitch), it will become possible to give a mathematical analysis in terms of the real numbers, as we discussed in relation to “good” and “better.” Now someone defending Tegmark might respond: there was no guarantee we would find any such measure or any such method to compare them. Without such a measure, you could perhaps count your property along with other properties. But you could not give a mathematical analysis of the property itself. So it is surprising that it turned out this way.

But you distinguished your property from other properties, and that must have involved recognizing some things in common with other properties, at least that it was something rather than nothing and that it was a property, and some ways in which it was different from other properties. Thus for example blue, like red, can be seen, while a musical note can be heard but not seen (at least by most people). Red and blue have in common that they are colors. But what is the difference between them? If we are to respond in any way to this question, except perhaps with “it looks different,” we must find some comparison. And if we find a comparison, we are well on the way to a mathematical account. If we don’t find a comparison, people might rightly complain that we have not yet done any detailed investigation.

But to make the point stronger, let’s assume the best we can do is “it looks different.” Even if this is the case, this very thing will allow us to construct a comparison that will ultimately allow us to construct a mathematical measure. For “it looks different” is itself something that comes in degrees. Blue looks different from red, but orange does so as well, just less different. Insofar as this judgment is somewhat subjective, it might be hard to get a great deal of accuracy with this method. But it would indeed begin to supply us with a kind of sliding scale of colors, and we would be able to number this scale with the real numbers.
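To make the last step concrete, here is a minimal sketch of how even rough “looks different” judgments already yield a numbered scale. The colors and the difference scores are invented for illustration; nothing here depends on the particular values.

```python
# Hypothetical subjective judgments of how different each color looks from red,
# on an arbitrary 0-to-1 scale. The numbers are made up for illustration.
difference_from_red = {
    "red": 0.0,
    "orange": 0.2,
    "yellow": 0.4,
    "green": 0.6,
    "blue": 0.9,
    "violet": 1.0,
}

# Sorting by judged difference already gives the "sliding scale" of colors,
# and the judgment values themselves number that scale with real numbers.
for color, position in sorted(difference_from_red.items(), key=lambda kv: kv[1]):
    print(f"{color:>7}: {position:.2f}")
```

However crude the judgments are, the resulting numbers can then be refined, compared, and averaged across observers, which is already the beginning of a mathematical account.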

From a historical point of view, it took a while for people to realize that this would always be possible. Thus for example Isidore of Seville said that “unless sounds are held by the memory of man, they perish, because they cannot be written down.” It was not, however, so much ignorance of sound that caused this, as ignorance of “detailed analysis.”

This is closely connected to what we said about names. A mathematical analysis is a detailed system of naming, where we name not only individual items, but also various groups, using names like “two,” “three,” and “four.” If we find that we cannot simply count the thing, but we can always find more examples, we look for comparative ways to name them. And when we find a comparison, we note that some things are more distant from one end of the scale and other things are less distant. This allows us to analyze the property using real numbers or some similar mathematical concept. This is also related to our discussion of technical terminology; in an advanced stage any science will begin to use somewhat mathematical methods. Unfortunately, this can also result in people adopting mathematical language in order to look like their understanding has reached an advanced stage, when it has not.

It should be sufficiently clear from this why I suggested that things could not have turned out otherwise. A “non-mathematical” property, in Tegmark’s sense, can only be a property you haven’t analyzed, or one that you haven’t succeeded in analyzing if you did attempt it.

The three consequences

Above, I made three claims about Tegmark’s position. The reasons for them may already be somewhat clarified by the above, but nonetheless I will look at this in a bit more detail.

First, I said there was some truth in the objection that “everything is numbers” is not much different from “everything is water,” or “everything is fire.” One notices some “hand-waving,” so to speak, in Tegmark’s claim that “We, physicists have made up these nerdy names for these properties like electric charge, spin, lepton number. But it’s just we humans who invented that language of calling them that, they are really just numbers.” A measure of charge or spin or whatever may be a number. But who is to say the thing being measured is a number? Nonetheless, there is a reasonable point there. If you are to give an account at all, it will in some way express the form of the thing, which implies explaining relationships, which depends on the distinction of various related things, which entails the possibility of counting the things that are related. In other words, someone could say, “You have a mathematical account of a thing. But the thing itself is non-mathematical.” But if you then ask them to explain that non-mathematical thing, the new explanation will be just as mathematical as the original explanation.

Given this fact, namely that the “mathematical” aspect is a question of how detailed explanations work, what is the difference between saying “we can give a mathematical explanation, but apart from explanations, the things are numbers,” and “we can give a mathematical explanation, but apart from explanations, the things are fires?”

Exactly. There isn’t much difference. Nonetheless, I made the second claim that there is some residual disagreement and that by this measure, the mathematical claim is better than the one about fire or water. Of course we don’t really know what Thales or Heraclitus thought in detail. But Aristotle, at any rate, claimed that Thales intended to assert that material causes alone exist. And this would be at least a reasonable understanding of the claim that all things are water, or fire. Just as Heraclitus could say that fire is a better term than water because fire is always changing, Thales, if he really wanted to exclude other causes, could say that water is a better term than “numbers” because water seems to be material and numbers do not. But since other causes do exist, the opposite is the case: the mathematical claim is better than the materialistic ones.

Many people say that Tegmark’s account is flawed in a similar way, but with respect to another cause; that is, they say that mathematical accounts exclude final causes. But this is a lot like Ed Feser’s claim that a mathematical account of color implies that colors don’t really exist; the two claims are alike in simply being wrong. A mathematical account of color does not imply that things are not colored, and a mathematical account of the world does not imply that final causes do not exist. As I said early on, a final cause explains why an efficient cause does what it does, and there is nothing about a mathematical explanation that prevents you from saying why the efficient cause does what it does.

My third point, that Tegmark does not understand the sense in which he is right, should be plain enough. As I stated above, he takes it to be a somewhat surprising discovery that we consistently find it possible to give mathematical accounts of the world, and this only makes sense if we assume it would in theory have been possible to discover something else. But that could not have happened, not because the world couldn’t have been a certain way, but because of the nature of explanation.

The Power of a Name

Fairy tales and other stories occasionally suggest the idea that a name gives some kind of power over the thing named, or at least that one’s problems concerning a thing may be solved by knowing its name, as in the story of Rumpelstiltskin. There is perhaps a similar suggestion in Revelation 2:17, “Whoever has ears, let them hear what the Spirit says to the churches. To the one who is victorious, I will give some of the hidden manna. I will also give that person a white stone with a new name written on it, known only to the one who receives it.” The secrecy of the new name may indicate (among other things) that others will have no power over that person.

There is more truth in this idea than one might assume without much thought. For example, anonymous authors do not want to be “doxxed” because knowing the name of the author really does give some power in relation to them which is not had without the knowledge of their name. Likewise, as a blogger, occasionally I want to cite something, but cannot remember the name of the author or article where the statement is made. Even if I remember the content fairly clearly, lacking the memory of the name makes finding the content far more difficult, while on the other hand, knowing the name gives me the power of finding the content much more easily.

But let us look a bit more deeply into this. Hilary Lawson, whose position was somewhat discussed here, has a discussion along these lines in Part II of his book, Closure: A Story of Everything. Since he denies that language truly refers to the world at all, as I mentioned in the linked post on his position, it is important to him that language has other effects, and in particular that it serves practical goals. He says in chapter 4:

In order to understand the mechanism of practical linguistic closure consider an example where a proficient speaker of English comes across a new word. Suppose that we are visiting a zoo with a friend. We stand outside a cage and our friend says: ‘An aasvogel.’ …

It might appear at first from this example that nothing has been added by the realisation of linguistic closure. The sound ‘aasvogel’ still sounds the same, the image of the bird still looks the same. So what has changed? The sensory closures on either side may not have changed, but a new closure has been realised. A new closure which is in addition to the prior available closures and which enables intervention which was not possible previously. For example, we now have a means of picking out this particular bird in the zoo because the meaning that has been realised will have identified a something in virtue of which this bird is an aasvogel and which thus enables us to distinguish it from others. As a result there will be many consequences for how we might be able to intervene.

The important point here is simply that naming something, even before taking any additional steps, immediately gives one the ability to do various practical things that one could not previously do. In a passage by Helen Keller, previously quoted here, she says:

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me.

We may have similar experiences as adults learning a foreign language while living abroad. At first one has very little ability to interact with the foreign world, but suddenly everything is possible.

Or consider the situation of a hunter-gatherer who may not know how to count. It may be obvious to them that a bigger pile of fruit is better than a smaller one, but if two piles look similar, they may have no way to know which is better. But once they decide to give “one fruit and another” a name like “two,” and “two and one” a name like “three,” and so on, suddenly they obtain a great advantage that they previously did not possess. It is now possible to count piles and to discover that one pile has sixty-four while another has sixty-three. And it turns out that by treating the pile of “sixty-four” as bigger than the other pile, although it does not look bigger, they end up better off.

In this sense one could look at the scientific enterprise of looking for mathematical laws of nature as one long process of looking for better names. We can see that some things are faster and some things are slower, but the vague names “fast” and “slow” cannot accomplish much. Once we can name different speeds more precisely, we can put them all in order and accomplish much more, just as the hunter-gatherer can accomplish more after learning to count. And this extends to the full power of technology: the men who landed on the moon did so ultimately due to the power of names.

If you take Lawson’s view, that language does not refer to the world at all, all of this is basically casting magic spells. In fact, he spells this out himself, in so many words, in chapter 3:

All material is in this sense magical. It enables intervention that cannot be understood. Ancient magicians were those who had access to closures that others did not know, in the same way that the Pharaohs had access to closures not available to their subjects. This gave them a supernatural character. It is now thought that their magic has been explained, as the knowledge of herbs, metals or the weather. No such thing has taken place. More powerful closures have been realised, more powerful magic that can subsume the feeble closures of those magicians. We have simply lost sight of its magical character. Anthropology has many accounts of tribes who on being observed by a Western scientist believe that the observer has access to some very powerful magic. Magic that produces sound and images from boxes, and makes travel swift. We are inclined to smile patronisingly believing that we merely have knowledge — the technology behind radio and television, and motor vehicles — and not magic. The closures behind the technology do indeed provide us with knowledge and understanding and enable us to handle activity, but they do not explain how the closures enable intervention. How the closures are successful remains incomprehensible and in this sense is our magic.

I don’t think we should dismiss this point of view entirely, but I do think it is more mistaken than otherwise, basically because of the original mistake of thinking that language cannot refer to the world. But the point that names are extremely powerful is correct and important, to the point where even the analogy of technology as “magic that works” does make a certain amount of sense.

Tautologies Not Trivial

In mathematics and logic, one sometimes speaks of a “trivial truth” or “trivial theorem”, referring to a tautology. Thus for example in this Quora question, Daniil Kozhemiachenko gives this example:

The fact that all groups of order 2 are isomorphic to one another and commutative entails that there are no non-Abelian groups of order 2.

This statement is a tautology because “Abelian group” here just means one that is commutative: the statement is like the customary example of asserting that “all bachelors are unmarried.”
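The order-2 case is small enough to check exhaustively. The following sketch (the brute-force enumeration is my own, not Kozhemiachenko’s) lists every binary operation on a two-element set that satisfies the group axioms and confirms that each one is commutative:

```python
from itertools import product

ELEMS = (0, 1)

def is_group(op):
    """op maps each pair (x, y) to x*y; check the group axioms on {0, 1}."""
    # associativity: (a*b)*c == a*(b*c) for all a, b, c
    if any(op[(op[(a, b)], c)] != op[(a, op[(b, c)])]
           for a, b, c in product(ELEMS, repeat=3)):
        return False
    # a two-sided identity element
    identities = [e for e in ELEMS
                  if all(op[(e, x)] == x and op[(x, e)] == x for x in ELEMS)]
    if not identities:
        return False
    e = identities[0]
    # every element must have an inverse
    return all(any(op[(x, y)] == e for y in ELEMS) for x in ELEMS)

all_operations = [dict(zip(list(product(ELEMS, repeat=2)), values))
                  for values in product(ELEMS, repeat=4)]
groups = [op for op in all_operations if is_group(op)]

print(len(groups))  # 2: one table for each choice of identity element
print(all(op[(0, 1)] == op[(1, 0)] for op in groups))  # True: all are commutative
```

The two surviving tables describe the same group up to relabeling, which is the isomorphism claim; the final check confirms the commutativity claim.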

Some extend this usage of “trivial” to refer to all statements that are true in virtue of the meaning of the terms, sometimes called “analytic.” The effect of this is to say that all statements that are logically necessary are trivial truths. An example of this usage can be seen in this paper by Carin Robinson. Robinson says at the end of the summary:

Firstly, I do not ask us to abandon any of the linguistic practises discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.

While the word “trivial” does have a corresponding Latin form that means ordinary or commonplace, the English word seems to be taken mainly from the “trivium” of grammar, rhetoric, and logic. This would seem to make some sense of calling logical necessities “trivial,” in the sense that they pertain to logic. Still, even here something is missing, since Robinson wants to include the truths of mathematics as trivial, and classically these did not pertain to the aforesaid trivium.

Nonetheless, overall Robinson’s intention, and presumably that of others who speak this way, is to suggest that such things are trivial in the English sense of “unimportant.” That is, they may be important tools, but they are not important for understanding. This is clear at least in our example: Robinson calls them trivial because “there are no known/knowable facts about logic.” Logical necessities tell us nothing about reality, and therefore they provide us with no knowledge. They are true by the meaning of the words, and therefore they cannot be true by reason of facts about reality.

Things that are logically necessary are not trivial in this sense. They are important, both in a practical way and directly for understanding the world.

Consider the failure of the Mars Climate Orbiter:

On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km (140 mi) on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 km (93 to 106 mi). Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Post-failure calculations showed that the spacecraft was on a trajectory that would have taken the orbiter within 57 kilometers of the surface, where the spacecraft likely skipped violently on the uppermost atmosphere and was either destroyed in the atmosphere or re-entered heliocentric space.[1]

The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton seconds – to update the predicted position of the spacecraft.

It is presumably an analytic truth that the units defined in one way are unequal to the units defined in the other. But it was ignoring this analytic truth that was the primary cause of the space probe’s failure. So it is evident that analytic truths can be extremely important for practical purposes.
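To make the mismatch concrete, here is a rough sketch; the thrust and burn time are hypothetical numbers, and only the conversion factor (1 lbf ≈ 4.448 N) is a physical fact.

```python
LBF_TO_N = 4.448222  # newtons per pound-force

def impulse_lbf_s(thrust_lbf, burn_time_s):
    """What the ground software reported: impulse in pound-force seconds."""
    return thrust_lbf * burn_time_s

# Hypothetical thruster firing, for illustration only.
reported = impulse_lbf_s(thrust_lbf=5.0, burn_time_s=10.0)

# The trajectory software read the same number as if it were newton-seconds,
# underestimating the impulse by a factor of about 4.45.
actual_newton_seconds = reported * LBF_TO_N

print(f"read as:  {reported:.1f} N*s")
print(f"actually: {actual_newton_seconds:.1f} N*s (factor of {LBF_TO_N:.3f})")
```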

Such truths can also be important for understanding reality. In fact, they are typically more important for understanding than other truths. The argument against this is that if something is necessary in virtue of the meaning of the words, it cannot be telling us something about reality. But this argument is wrong for one simple reason: words and meaning themselves are both elements of reality, and so they do tell us something about reality, even when the truth is fully determinate given the meaning.

If one accepts the mistaken argument, in fact, sometimes one is led even further. Logically necessary truths cannot tell us anything important for understanding reality, since they are simply facts about the meaning of words. On the other hand, anything which is not logically necessary is in some sense accidental: it might have been otherwise. But accidental things that might have been otherwise cannot help us to understand reality in any deep way: it tells us nothing deep about reality to note that there is a tree outside my window at this moment, when this merely happens to be the case, and could easily have been otherwise. Therefore, since neither logically necessary things, nor logically contingent things, can help us to understand reality in any deep or important way, such understanding must be impossible.

It is fairly rare to make such an argument explicitly, but it is a common implication of many arguments that are actually made or suggested, or it at least influences the way people feel about arguments and understanding.  For example, consider this comment on an earlier post. Timocrates suggests that (1) if you have a first cause, it would have to be a brute fact, since it doesn’t have any other cause, and (2) describing reality can’t tell us any reasons but is “simply another description of how things are.” The suggestion behind these objections is that the very idea of understanding is incoherent. As I said there in response, it is true that every true statement is in some sense “just a description of how things are,” but that was what a true statement was meant to be in any case. It surely was not meant to be a description of how things are not.

That “analytic” or “tautologous” statements can indeed provide a non-trivial understanding of reality can also easily be seen by example. Some examples from this blog:

Good and being. The convertibility of being and goodness is “analytic,” in the sense that carefully thinking about the meaning of desire and the good reveals that a universe where existence as such was bad, or even failed to be good, is logically impossible. In particular, it would require a universe where there is no tendency to exist, and this is impossible given that it is posited that something exists.

Natural selection. One of the most important elements of Darwin’s theory of evolution is the following logically necessary statement: the things that have survived are more likely to be the things that were more likely to survive, and less likely to be the things that were less likely to survive. (A small simulation after this list makes the point concrete.)

Limits of discursive knowledge. Knowledge that uses distinct thoughts and concepts is necessarily limited by issues relating to self-reference. It is clear that this is both logically necessary, and tells us important things about our understanding and its limits.

Knowledge and being. Kant rightly recognized a sense in which it is logically impossible to “know things as they are in themselves,” as explained in this post. But as I said elsewhere, the logically impossible assertion that knowledge demands an identity between the mode of knowing and the mode of being is the basis for virtually every sort of philosophical error. So a grasp on the opposite “tautology” is extremely useful for understanding.
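Returning to the natural selection example above: the logically necessary statement can be watched doing real work in a toy simulation. The population size and survival chances below are invented; the only point is the comparison of the two averages.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility only

# Each individual is represented simply by its chance of surviving one round.
population = [random.random() for _ in range(100_000)]
survivors = [p for p in population if random.random() < p]

print(f"mean survival chance, whole population: "
      f"{sum(population) / len(population):.3f}")
print(f"mean survival chance, survivors only:   "
      f"{sum(survivors) / len(survivors):.3f}")
# The second number is reliably higher: the things that have survived are more
# likely to be the things that were more likely to survive.
```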

 

Perfectly Random

Suppose you have a string of random binary digits such as the following:

00111100010101001100011011001100110110010010100111

This string is 50 digits long, and was the result of a single attempt using the linked generator.

However, something seems distinctly non-random about it: there are exactly 25 zeros and exactly 25 ones. Naturally, this will not always happen, but most of the time the proportion of zeros will be fairly close to half. And evidently this is necessary, since if the proportion was usually much different from half, then the selection could not have been random in the first place.
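This is easy to check empirically. The sketch below uses Python’s own generator rather than the linked one, and draws many 50-digit strings to see how the share of zeros behaves:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility only

trials, length = 10_000, 50
proportions = []
for _ in range(trials):
    bits = [random.randint(0, 1) for _ in range(length)]
    proportions.append(bits.count(0) / length)

mean = sum(proportions) / trials
near_half = sum(0.4 <= p <= 0.6 for p in proportions) / trials
print(f"average proportion of zeros: {mean:.3f}")
print(f"strings with 40%-60% zeros:  {near_half:.1%}")
```

Exactly 25 zeros is simply the single most common outcome; the broader point is that the proportion clusters tightly around one half.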

There are other things about this string that are definitely not random. It contains only zeros and ones, and no other digits, much less items like letters from the alphabet, or items like ‘%’ and ‘$’.

Why do we have these apparently non-random characteristics? Both sorts of characteristics, the approximate and typical proportion and the more rigid features, are necessary consequences of the way we obtained or defined this string.

It is easy to see that such characteristics are inevitable. Suppose someone wants to choose something random without any non-random characteristics. Let’s suppose they want to avoid the first sort of characteristic, which is perhaps the “easier” task. They can certainly make the proportion of zeros approximately 75% or anything else that they please. But this will still be a non-random characteristic.

They try again. Suppose they succeed in preventing the series of digits from converging to any specific probability. If they do, there is one and only one way to do this. Much as in our discussion of the mathematical laws of nature, the only way to accomplish this will be to go back and forth between longer and longer strings of zeros and ones. But this is an extremely non-random characteristic. So they may have succeeded in avoiding one particular type of non-randomness, but only at the cost of adding something else very non-random.
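The construction described here can be written out directly. In the sketch below, the block length 4^k is one arbitrary choice that grows fast enough; the running proportion of zeros then swings back and forth indefinitely instead of converging:

```python
# Alternate ever-longer runs of 0s and 1s so the running proportion never settles.
blocks = []
for k in range(1, 10):
    symbol = "0" if k % 2 else "1"
    blocks.append(symbol * (4 ** k))
sequence = "".join(blocks)

# Running proportion of zeros at the end of each block.
position = 0
for k, block in enumerate(blocks, start=1):
    position += len(block)
    prefix = sequence[:position]
    print(f"after block {k} (length {len(prefix):>6}): "
          f"proportion of zeros = {prefix.count('0') / len(prefix):.3f}")
```

After the first block the proportion jumps between roughly 0.8 and 0.2 at every step, which is exactly the extremely non-random pattern described above.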

Again, consider the second kind of characteristic. Here things are even clearer: the only way to avoid the second kind of characteristic is not to attempt any task in the first place. The only way to win is not to play. Once we have said “your task is to do such and such,” we have already specified some non-random characteristics of the second kind; to avoid such characteristics is to avoid the task completely.

“Completely random,” in fact, is an incoherent idea. No such thing can exist anywhere, in the same way that “formless matter” cannot actually exist, but all matter is formed in one way or another.

The same thing applies to David Hume’s supposed problem of induction. I ended that post with the remark that for his argument to work, he must be “absolutely certain that the future will resemble the past in no way.” But this of course is impossible in the first place; the past and the future are both defined as periods of time, and so there is some resemblance in their very definition, in the same way that any material thing must have some form in its definition, and any “random” thing must have something non-random in its definition.

 

Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting:

I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

Robin Hanson disagreed, responding with this post:

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

  1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
  2. Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
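A quick check of the arithmetic in the quoted passage (the figures are Hanson’s; the computation is only a verification):

```python
import math

growth = 1.02 ** 12_000  # a 2% annual return compounded for 12,000 years
print(f"1.02^12000 is roughly 10^{math.log10(growth):.0f}")             # about 10^103
print(f"discounted by a 1/1000 chance: roughly 10^{math.log10(growth / 1000):.0f}")
```

Even after the one-in-a-thousand discount, the factor is around 10^100, the googol Hanson mentions.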

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view. Hanson is right, and Yudkowsky is wrong, for a still more idealistic reason than Yudkowsky’s reasons. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now etc., such that your mutually exclusive options include the possibility of saving 10 lives x years from now for all x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem does not arise from this initial assignment, but it arises when we consider what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now,” “you will choose 10 years from now,” and “you will choose 100 years from now” are all assigned probabilities, they cannot all be assigned equal probabilities, but you must be more likely to choose the options less distant in time, in general and overall. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
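The impossibility of spreading probability evenly over all the years can be stated in one line; the only assumption is that the probabilities of mutually exclusive options must sum to 1:

```latex
\text{If } P(\text{choose year } n) = c \text{ for every } n \ge 1, \text{ then }
\sum_{n=1}^{\infty} P(\text{choose year } n)
  = \sum_{n=1}^{\infty} c
  = \begin{cases} 0 & \text{if } c = 0,\\ \infty & \text{if } c > 0, \end{cases}
\qquad \text{and neither equals } 1.
```

So the probabilities must dwindle overall: for any small margin there is some horizon beyond which less than that margin of probability remains, which is the strong preference for earlier times just described.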

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case, we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that the hypotheses do not change their complexity, but time passes, so that things distant in time become things near in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.
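A scaled-down toy version makes the inconsistency visible. The horizon, weights, and payoffs below are my own choices for illustration; the point is only that a “flat up to some horizon, smaller afterwards” weighting reverses its own choice as time passes, while an exponential discount does not.

```python
def flat_then_drop(delay, horizon=10, late_weight=0.5):
    """Equal weight for everything inside the horizon, reduced weight beyond it."""
    return 1.0 if delay <= horizon else late_weight

def exponential(delay, rate=0.05):
    return (1 - rate) ** delay

def preferred_option(now, weight):
    # Option S: save 10 lives in year 11; option L: save 11 lives in year 12.
    value_s = 10 * weight(11 - now)
    value_l = 11 * weight(12 - now)
    return "S" if value_s > value_l else "L"

for now in (0, 1):
    print(f"evaluated in year {now}: "
          f"flat-then-drop prefers {preferred_option(now, flat_then_drop)}, "
          f"exponential prefers {preferred_option(now, exponential)}")
# The flat schedule plans on L in year 0 but switches to S in year 1;
# the exponential discount makes the same choice at both times.
```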

 

Spooky Action at a Distance

Albert Einstein objected to the usual interpretations of quantum mechanics because they seemed to him to imply “spooky action at a distance,” a phrase taken from a letter from Einstein to Max Born in 1947 (page 155 in this book):

I cannot make a case for my attitude in physics which you would consider at all reasonable. I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance. I am, however, not yet firmly convinced that it can really be achieved with a continuous field theory, although I have discovered a possible way of doing this which so far seems quite reasonable. The calculation difficulties are so great that I will be biting the dust long before I myself can be fully convinced of it. But I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently. I cannot, however, base this conviction on logical reasons, but can only produce my little finger as witness, that is, I offer no authority which would be able to command any kind of respect outside of my own hand.

Einstein has two objections: the theory seems to be indeterministic, and it also seems to imply action at a distance. He finds both of these implausible. He thinks physics should be deterministic, “as used to be taken for granted until quite recently,” and that all interactions should be local: things directly affect only things which are close by, and affect distant things only indirectly.

In many ways, things do not appear to have gone well for Einstein’s intuitions. John Bell constructed a mathematical argument, now known as Bell’s Theorem, that the predictions of quantum mechanics cannot be reproduced by the kind of theory desired by Einstein. Bell summarizes his point:

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no “hidden variable” interpretation of quantum mechanics is possible. These attempts have been examined elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.

“Causality and locality” in this description are exactly the two points where Einstein objected in the quoted letter: causality, as understood here, implies determinism, and locality implies no spooky action at a distance. Given this result, Einstein might have hoped that the predictions of quantum mechanics would turn out to fail, so that he could still have his desired physics. This did not happen. On the contrary, these predictions (precisely those inconsistent with such theories) have been verified time and time again.

Rather than putting the reader through Bell’s math and physics, we will explain his result with an analogy by Mark Alford. Alford makes this comparison:

Imagine that someone has told us that twins have special powers, including the ability to communicate with each other using telepathic influences that are “superluminal” (faster than light). We decide to test this by collecting many pairs of twins, separating each pair, and asking each twin one question to see if their answers agree.

To make things simple we will only have three possible questions, and they will be Yes/No questions. We will tell the twins in advance what the questions are.

The procedure is as follows.

  1. A new pair of twins is brought in and told what the three possible questions are.
  2. The twins travel far apart in space to separate questioning locations.
  3. At each location there is a questioner who selects one of the three questions at random, and poses that question to the twin in front of her.
  4. Spacelike separation. When the question is chosen and asked at one location, there is not enough time for any influence traveling at the speed of light to get from there to the other location in time to affect either what question is chosen there, or the answer given.

He now supposes the twins give the same responses when they are asked the same question, and discusses this situation:

Now, suppose we perform this experiment and we find same-question agreement: whenever a pair of spacelike-separated twins both happen to get asked the same question, their answers always agree. How could they do this? There are two possible explanations,

1. Each pair of twins uses superluminal telepathic communication to make sure both twins give the same answer.

2. Each pair of twins follows a plan. Before they were separated they agreed in advance what their answers to the three questions would be.

The same-question agreement that we observe does not prove that twins can communicate telepathically faster than light. If we believe that strong locality is a valid principle, then we can resort to the other explanation, that each pair of twins is following a plan. The crucial point is that this requires determinism. If there were any indeterministic evolution while the twins were spacelike separated, strong locality requires that the random component of one twin’s evolution would have to be uncorrelated with the other twin’s evolution. Such uncorrelated indeterminism would cause their recollections of the plan to diverge, and they would not always show same-question agreement.

The results are understandable if the twins agree on the answers Yes-Yes-Yes, or Yes-No-Yes, or any other determinate combination. But they are not understandable if they decide to flip coins if they are asked the second question, for example. If they did this, they would have to disagree 50% of the time on that question, unless one of the coin flips affected the other.

Alford goes on to discuss what happens when the twins are asked different questions:

In the thought experiment as described up to this point we only looked at the recorded answers in cases where each twin in a given pair was asked the same question. There are also recorded data on what happens when the two questioners happen to choose different questions. Bell noticed that this data can be used as a cross-check on our strong-locality-saving idea that the twins are following a pre-agreed plan that determines that their answers will always agree. The cross-check takes the form of an inequality:

Bell inequality for twins:

If a pair of twins is following a plan then, when each twin is asked a different randomly chosen question, their answers will be the same, on average, at least 1/3 of the time.

He derives this value:

For each pair of twins, there are four general types of pre-agreed plan they could adopt when they are arranging how they will both give the same answer to each of the three possible questions.

(a) a plan in which all three answers are Yes;

(b) a plan in which there are two Yes and one No;

(c) a plan in which there are two No and one Yes;

(d) a plan in which all three answers are No.

If, as strong locality and same-question agreement imply, both twins in a given pair follow a shared predefined plan, then when the random questioning leads to each of them being asked a different question from the set of three possible questions, how often will their answers happen to be the same (both Yes or both No)? If the plan is of type (a) or (d), both answers will always be the same. If the plan is of type (b) or (c), both answers will be the same 1/3 of the time. We conclude that no matter what type of plan each pair of twins may follow, the mere fact that they are following a plan implies that, when each of them is asked a different randomly chosen question, they will both give the same answer (which might be Yes or No) at least 1/3 of the time. It is important to appreciate that one needs data from many pairs of twins to see this effect, and that the inequality holds even if each pair of twins freely chooses any plan they like.

The “Bell inequality” is violated if we do the experimental test and the twins end up agreeing, when they are asked different questions, less than 1/3 of the time, despite consistently agreeing when they are asked the same question. If one saw such results in reality, one might be forgiven for concluding that the twins do have superluminal telepathic abilities. Unfortunately for Einstein, this is what we do get, consistently, when we test the analogous quantum mechanical version of the experiment.
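Alford’s inequality is also easy to check by simulation. The sketch below is my own code, not his: it runs every possible pre-agreed plan through random questioning and records how often the answers agree when the questions differ.

```python
import random
from itertools import product

random.seed(1)  # arbitrary seed, for reproducibility only

QUESTIONS = (0, 1, 2)
PLANS = list(product(("Yes", "No"), repeat=3))  # the 8 possible shared plans

def agreement_on_different_questions(plan, trials=100_000):
    agree = total = 0
    for _ in range(trials):
        q1, q2 = random.choice(QUESTIONS), random.choice(QUESTIONS)
        if q1 == q2:
            continue  # only count the cases where the questions differ
        total += 1
        agree += plan[q1] == plan[q2]
    return agree / total

for plan in PLANS:
    print(f"plan {plan}: agreement when questions differ = "
          f"{agreement_on_different_questions(plan):.3f}")
```

The all-Yes and all-No plans agree every time, and every mixed plan agrees about a third of the time; no plan falls below 1/3, which is the inequality. The quantum experiments produce agreement below 1/3, which no assignment of plans can reproduce.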

Self Reference Paradox Summarized

Hilary Lawson is right to connect the issue of the completeness and consistency of truth with paradoxes of self-reference.

As a kind of summary, consider this story:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:

It was a dark and stormy night,
and all the Cub Scouts were huddled around their campfire.
One scout looked up to the Scout Master and said:
“Tell us a story.”
And the story went like this:
etc.

In this form, the story obviously exists, but in its implied form, the story cannot be told, because for the story to be “told” is for it to be completed, and it is impossible for it to be completed, since it will not be complete until it contains itself, and this cannot happen.

Consider a similar example. You sit in a room at a desk, and decide to draw a picture of the room. You draw the walls. Then you draw yourself and your desk. But then you realize, “there is also a picture in the room. I need to draw the picture.” You draw the picture itself as a tiny image within the image of your desktop, and add tiny details: the walls of the room, your desk and yourself.

Of course, you then realize that your artwork can never be complete, in exactly the same way that the story above cannot be complete.

There is essentially the same problem in these situations as in all the situations we have described which involve self-reference: the paradox of the liar, the liar game, the impossibility of detailed future prediction, the list of all true statements, Gödel’s theorem, and so on.

In two of the above posts, namely on future prediction and Gödel’s theorem, there are discussions of James Chastek’s attempts to use the issue of self-reference to prove that the human mind is not a “mechanism.” I noted in those places that such supposed proofs fail, and at this point it is easy to see that they will fail in general, if they depend on such reasoning. What is possible or impossible here has nothing to do with such things, and everything to do with self-reference. You cannot have a mirror and a camera so perfect that you can get an actually infinite series of images by taking a picture of the mirror with the camera, but there is nothing about such a situation that could not be captured by an image outside the situation, just as a man outside the room could draw everything in the room, including the picture and its details. This does not show that a man outside the room has a superior drawing ability compared with the man in the room. The ability of someone else to say whether the third statement in the liar game is true or false does not prove that the other person does not have a “merely human” mind (analogous to a mere mechanism), despite the fact that you yourself cannot say whether it is true or false.

There is a grain of truth in Chastek’s argument, however. It does follow that if someone says that reality as a whole is a formal system, and adds that we can know what that system is, their position would be absurd, since if we knew such a system we could indeed derive a specific arithmetical truth, namely one that we could state in detail, which would be unprovable from the system, namely from reality, but nonetheless proved to be true by us. And this is logically impossible, since we are a part of reality.

At this point one might be tempted to say, “At this point we have fully understood the situation. So all of these paradoxes and so on don’t prevent us from understanding reality perfectly, even if that was the original appearance.”

But such a claim amounts to one of two things.

First, a man can stand outside the room and draw a picture of everything in it, including the picture, and say, “Behold. A picture of the room and everything in it.” Yes, as long as you are not in the room. But if the room is all of reality, you cannot get outside it, and so you cannot draw such a picture.

Second, the man in the room can draw the room, the desk and himself, and draw a smudge on the center of the picture of the desk, and say, “Behold. A smudged drawing of the room and everything in it, including the drawing.” But one only imagines a picture of the drawing underneath the smudge: there is actually no such drawing in the picture of the room, nor can there be.

In the same way, we can fully understand some local situation, from outside that situation, or we can have a smudged understanding of the whole situation, but there cannot be any detailed understanding of the whole situation underneath the smudge.

I noted that I disagreed with Lawson’s attempt to resolve the question of truth. I did not go into detail, and I will not, as the book is very long and an adequate discussion would be much longer than I am willing to attempt, at least at this time, but I will give some general remarks. He sees, correctly, that there are problems both with saying that “truth exists” and that “truth does not exist,” taken according to the usual concept of truth, but in the end his position amounts to saying that the denial of truth is truer than the affirmation of truth. This seems absurd, and it is, but not quite so much as appears, because he does recognize the incoherence and makes an attempt to get around it. The way of thinking is something like this: we need to avoid the concept of truth. But this means we also need to avoid the concept of asserting something, because if you assert something, you are saying that it is true. So he needs to say, “assertion does not exist,” but without asserting it. Consequently he comes up with the concept of “closure,” which is meant to replace the concept of asserting, and “asserts” things in the new sense. This sense is not intended to assert anything at all in the usual sense. In fact, he concludes that language does not refer to the world at all.

Apart from the evident absurdity, exacerbated by my own realist description of his position, we can see from the general account of self-reference why this is the wrong answer. The man in the room might start out wanting to draw a picture of the room and everything in it, and then come to realize that this project is impossible, at least for someone in his situation. But suppose he concludes: “After all, there is no such thing as a picture. I thought pictures were possible, but they are not. There are just marks on paper.” The conclusion is obviously wrong. The fact that pictures are things themselves does prevent pictures from being exhaustive pictures of themselves, but it does not prevent them from being pictures in general. And in the same way, the fact that we are part of reality prevents us from having an exhaustive understanding of reality, but it does not prevent us from understanding in general.

There is one last temptation in addition to the two ways discussed above of saying that there can be an exhaustive drawing of the room and the picture. The room itself and everything in it, someone might say, is itself an exhaustive representation of itself and everything in it. Apart from being an abuse of the word “representation,” I think this is delusional, but this is a story for another time.

Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
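To put numbers on this, here is a minimal sketch of my own (the 27-order-of-magnitude “unshuffling” discount is an arbitrary illustration, not a physical estimate):

```python
# The arithmetic behind the estimate: 52! possible orderings of a deck.
import math

orderings = math.factorial(52)
print(f"{orderings:.4e}")            # about 8.0658e+67, the figure quoted in the story
print(math.log10(orderings))         # roughly 67.9 orders of magnitude

# Even granting a generous (and entirely hypothetical) discount of 27 orders of
# magnitude for the possibility of "unshuffling", the remaining chance is still
# about 1 in 10^40 -- far too small to matter for the comparison with lying.
print(math.log10(orderings) - 27)    # roughly 40.9
```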

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say, and it is not really about whether Razaali’s story happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.


Statistical Laws of Choice

I noted in an earlier post the necessity of statistical laws of nature. This will necessarily apply to human actions as a particular case, as I implied there in mentioning the amount of food humans eat in a year.

Someone might object. It was said in the earlier post that this will happen unless there is a deliberate attempt to evade this result. But since we are speaking of human beings, there might well be such an attempt. So for example if we ask someone to choose to raise their right hand or their left hand, this might converge to an average, such as 50% each, or perhaps the right hand 60% of the time, or something of this kind. But presumably someone who starts out with the deliberate intention of avoiding such an average will be able to do so.

Unfortunately, such an attempt may succeed in the short run, but will necessarily fail in the long run, because although it is possible in principle, it would require an infinite knowing power, which humans do not have. As I pointed out in the earlier discussion, attempting to prevent convergence requires longer and longer strings on one side or the other. But if you need to raise your right hand a few trillion times before switching again to your left, you will surely lose track of your situation. Nor can you remedy this by writing things down, or by other technical aids: you may succeed in doing things trillions of times with this method, but if you do it forever, the numbers will also become too large to write down. Naturally, at this point we are only making a theoretical point, but it is nonetheless an important one, as we shall see later.
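To illustrate why the strings must keep getting longer, here is a minimal sketch of my own (the 75% and 25% target frequencies are arbitrary): keeping the running frequency of right-hand choices oscillating instead of converging requires each new run of identical choices to be twice as long as everything chosen so far.

```python
# Toy construction: make the running frequency of "right hand" swing between
# 0.75 and 0.25 forever. Each swing needs a run exactly twice as long as the
# total number of choices made so far, so the runs grow geometrically.
total, rights = 4, 3          # start: 3 rights out of 4 choices (frequency 0.75)
run_lengths = []
for _ in range(12):
    run = 2 * total           # length of the next run of identical choices
    if rights / total >= 0.75:
        total += run          # a run of lefts, pushing the frequency down to 0.25
    else:
        total += run          # a run of rights, pushing the frequency back up to 0.75
        rights += run
    run_lengths.append(run)

print(run_lengths)            # 8, 24, 72, 216, ... tripling each time; after roughly
                              # two dozen swings a single run already exceeds a trillion
```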

In any case, in practice people do not tend even to make such attempts, and consequently it is far easier to predict their actions in a roughly statistical manner. Thus for example it would not be hard to discover the frequency with which an individual chooses chocolate ice cream over vanilla.

Telephone Game

Victor Reppert says at his blog,

1. If the initial explosion of the big bang had differed in strength by as little as one part in 10\60, the universe would have either quickly collapsed back on itself, or
expanded [too] rapidly for stars to form. In either case, life would be impossible.
2. (An accuracy of one part in 10 to the 60th power can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)

The claim seems a bit strong. Let x be a measurement in some units of the strength of “the initial explosion of the big bang.” Reppert seems to be saying that if x were increased or decreased by x / (10^60), then the universe would have either collapsed immediately, or it would have expanded without forming stars, so that life would have been impossible.

It’s possible that someone could make a good argument for that claim. But the most natural argument for that claim would be to say something like this, “We know that x had to fall between y and z in order to produce stars, and y and z are so close together that if we increased or decreased x by one part in 10^60, it would fall outside y and z.” But this will not work unless x is already known to fall between y and z. And this implies that we have measured x to a precision of 60 digits.
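To make the point concrete, here is a small illustration of my own (the particular value of x is arbitrary): knowing that x falls inside a window one part in 10^60 wide is the same as knowing x to roughly 60 significant digits.

```python
# Hypothetical measured value x (arbitrary units); the claim "x could not differ
# by one part in 10^60" places x inside the interval [x - window, x + window].
from decimal import Decimal, getcontext

getcontext().prec = 70
x = Decimal("1.234567890123456789")   # arbitrary stand-in for the measured strength
window = x / Decimal(10) ** 60        # "one part in 10^60"

print(x - window)
print(x + window)
# The two bounds agree in roughly their first 60 significant digits, so asserting
# that the true value lies between them is asserting it to ~60-digit precision.
```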

I suspect that no one, ever, has measured any physical thing to a precision of 60 digits, using any units or any form of measurement. This suggests that something about Reppert’s claim is a bit off.

In any case, the fact that 10^60 is expressed by “10\60”, and the fact that Reppert omits the word “too” mean that we can trace his claim fairly precisely. Searching Google for the exact sentence, we get this page as the first result, from November 2011. John Piippo says there:

1. If the initial explosion of the big bang had differed in strength by as little as one part in 10\60, the universe would have either quickly collapsed back on itself, or expanded [too] rapidly for stars to form. In either case, life would be impossible. (An accuracy of one part in 10 to the 60th power can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)

Reppert seems to have accidentally or deliberately divided this into two separate points; number 2 in his list does not make sense except as an observation on the first, as it is found here. Piippo likewise omits the word “too,” strongly suggesting that Piippo is the direct source for Reppert, although it is also possible that both borrowed from a third source.

We find an earlier form of the claim here, made by Robin Collins. It appears to date from around 1998, given the statement, “This work was made possible in part by a Discovery Institute grant for the fiscal year 1997-1998.” Here the claim stands thus:

1. If the initial explosion of the big bang had differed in strength by as little as 1 part in 10^60, the universe would have either quickly collapsed back on itself, or expanded too rapidly for stars to form. In either case, life would be impossible. [See Davies, 1982, pp. 90-91. (As John Jefferson Davis points out (p. 140), an accuracy of one part in 10^60 can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light years away, and hitting the target.)]

Here we still have the number “1.”, and the text is obviously the source for the later claims, but the word “too” is present in this version, and the claims are sourced. He refers to The Accidental Universe by Paul Davies. Davies says on page 88:

It follows from (4.13) that if p > p_crit then k > 0, the universe is spatially closed, and will eventually contract. The additional gravity of the extra-dense matter will drag the galaxies back on themselves. For p < p_crit, the gravity of the cosmic matter is weaker and the universe ‘escapes’, expanding unchecked in much the same way as a rapidly receding projectile. The geometry of the universe, and its ultimate fate, thus depends on the density of matter or, equivalently, on the total number of particles in the universe, N. We are now able to grasp the full significance of the coincidence (4.12). It states precisely that nature has chosen p to have a value very close to that required to yield a spatially flat universe, with k = 0 and p = p_crit.

Then, at the end of page 89, he says this:

At the Planck time – the earliest epoch at which we can have any confidence in the theory – the ratio was at most an almost infinitesimal 10^-60. If one regards the Planck time as the initial moment when the subsequent cosmic dynamics were determined, it is necessary to suppose that nature chose p to differ from p_crit by no more than one part in 10^60.

Here we have our source. “The ratio” here refers to (p – p_crit) / p_crit. In order for the ratio to be this small, p has to be almost equal to p_crit. In fact, Davies says that this ratio is proportional to time. If we set time = 0, then we would get a ratio of exactly 0, so that p = p_crit. Davies rightly states that the physical theories in question cannot work this way: under the theory of the Big Bang, we cannot discuss the state of the universe at t = 0 and expect to get sensible results. Nonetheless, this suggests that something is wrong with the idea that anything has been calibrated to one part in 10^60. Rather, two values have started out basically equal and grown apart throughout time, so that if you choose an extremely small value of time, you get an extremely small difference in the two values.

This also verifies my original suspicion. Nothing has been measured to a precision of 60 digits, nor has any determination been made that the measured number could not vary by one iota. Instead, Davies has simply taken a ratio that is proportional to time, and calculated its value with a very small value of time.
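To see how a number like this arises, here is a rough sketch of my own, taking Davies’s statement of proportionality at face value and using rounded figures (an assumed order-one ratio today, together with standard values for the Planck time and the age of the universe):

```python
# Toy calculation, not from Davies: if (p - p_crit) / p_crit grows roughly in
# proportion to time and is of order 1 today, then its value at the Planck time
# is tiny simply because the Planck time is tiny -- no 60-digit calibration needed.
planck_time = 5.4e-44        # seconds (rounded)
age_of_universe = 4.3e17     # seconds, about 13.8 billion years
ratio_today = 1.0            # assumed order of magnitude

ratio_at_planck_time = ratio_today * (planck_time / age_of_universe)
print(f"{ratio_at_planck_time:.1e}")   # about 1.3e-61, consistent with Davies's
                                       # "at most ... one part in 10^60"
```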


There is a real issue here, and it is the question, “Why is the universe basically flat?” But whatever the answer to this question may be, the question, and presumably its answer, are quite different from the claim that physics contains constants that are constrained to the level of “one part in 10^60.” To put this another way: if you answer the question, “Why is the universe flat?” with a response of the form, “Because a certain constant is equal to 1892592714.2256399288581158185662151865333331859591, and if it had been the slightest amount more or less than this, the universe would not have been flat,” then your answer is very likely wrong. There is likely to be a simpler and more general answer to the question.

Reppert in fact agrees, and that is the whole point of his argument. For him, the simpler and more general answer is that God planned it that way. That may be, but it should be evident that there is nothing that demands either this answer or an answer of the above form. There could be any number of potential answers.

Playing the telephone game and expecting to get a sensible result is a bad idea. If you take a statement from someone else and restate it without a source, and your source itself has no source, it is quite possible that your statement is wrong and that the original claim was quite different. Even apart from this, however, Reppert is engaging in a basically mistaken enterprise. In essence, he is making a philosophical argument, but attempting to give the appearance of supporting it with physics and mathematics. This is presumably because these topics are less remote from the senses. If Reppert can convince you that his argument is supported by physics and mathematics, you will be likely to think that reasonable disagreement with his position is impossible. You will be less likely to be persuaded if you recognize that his argument remains a philosophical one.

There are philosophical arguments for the existence of God, and this blog has discussed such arguments. But these arguments belong to philosophy, not to science.