Technical Discussion and Philosophical Progress

In The Structure of Scientific Revolutions (pp. 19-21), Thomas Kuhn remarks on the tendency of the sciences to acquire a technical vocabulary and manner of discussion:

We shall be examining the nature of this highly directed or paradigm-based research in the next section, but must first note briefly how the emergence of a paradigm affects the structure of the group that practices the field. When, in the development of a natural science, an individual or group first produces a synthesis able to attract most of the next generation’s practitioners, the older schools gradually disappear. In part their disappearance is caused by their members’ conversion to the new paradigm. But there are always some men who cling to one or another of the older views, and they are simply read out of the profession, which thereafter ignores their work. The new paradigm implies a new and more rigid definition of the field. Those unwilling or unable to accommodate their work to it must proceed in isolation or attach themselves to some other group. Historically, they have often simply stayed in the departments of philosophy from which so many of the special sciences have been spawned. As these indications hint, it is sometimes just its reception of a paradigm that transforms a group previously interested merely in the study of nature into a profession or, at least, a discipline. In the sciences (though not in fields like medicine, technology, and law, of which the principal raison d’être is an external social need), the formation of specialized journals, the foundation of specialists’ societies, and the claim for a special place in the curriculum have usually been associated with a group’s first reception of a single paradigm. At least this was the case between the time, a century and a half ago, when the institutional pattern of scientific specialization first developed and the very recent time when the paraphernalia of specialization acquired a prestige of their own.

The more rigid definition of the scientific group has other consequences. When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced. That can be left to the writer of textbooks. Given a textbook, however, the creative scientist can begin his research where it leaves off and thus concentrate exclusively upon the subtlest and most esoteric aspects of the natural phenomena that concern his group. And as he does this, his research communiqués will begin to change in ways whose evolution has been too little studied but whose modern end products are obvious to all and oppressive to many. No longer will his researches usually be embodied in books addressed, like Franklin’s Experiments . . . on Electricity or Darwin’s Origin of Species, to anyone who might be interested in the subject matter of the field. Instead they will usually appear as brief articles addressed only to professional colleagues, the men whose knowledge of a shared paradigm can be assumed and who prove to be the only ones able to read the papers addressed to them.

Today in the sciences, books are usually either texts or retrospective reflections upon one aspect or another of the scientific life. The scientist who writes one is more likely to find his professional reputation impaired than enhanced. Only in the earlier, pre-paradigm, stages of the development of the various sciences did the book ordinarily possess the same relation to professional achievement that it still retains in other creative fields. And only in those fields that still retain the book, with or without the article, as a vehicle for research communication are the lines of professionalization still so loosely drawn that the layman may hope to follow progress by reading the practitioners’ original reports. Both in mathematics and astronomy, research reports had ceased already in antiquity to be intelligible to a generally educated audience. In dynamics, research became similarly esoteric in the later Middle Ages, and it recaptured general intelligibility only briefly during the early seventeenth century when a new paradigm replaced the one that had guided medieval research. Electrical research began to require translation for the layman before the end of the eighteenth century, and most other fields of physical science ceased to be generally accessible in the nineteenth. During the same two centuries similar transitions can be isolated in the various parts of the biological sciences. In parts of the social sciences they may well be occurring today. Although it has become customary, and is surely proper, to deplore the widening gulf that separates the professional scientist from his colleagues in other fields, too little attention is paid to the essential relationship between that gulf and the mechanisms intrinsic to scientific advance.

As Kuhn says, this tendency has very well-known results. Consider the papers constantly being published at arxiv.org, for example. If you are not familiar with the science in question, you will likely not be able to understand even the title, let alone the abstract or the content. Many or most of the words will be meaningless to you, and even if they are not, their combinations will be.

It is also not difficult to see why this happens, and why it must happen. Everything we understand, we understand through form, which is a network of relationships. Thus if particular investigators wish to go into something in greater detail, these relationships will become more and more remote from the ordinary knowledge accessible to everyone. “Just say it in simple words” will become literally impossible, in the sense that explaining the “simple” statement would involve explaining a huge number of relationships that by default a person would have no knowledge of. That, as Kuhn notes, is the purpose of textbooks: to form connections between everyday knowledge and the more complex relationships studied in particular fields.

In Chapter XIII, Kuhn connects this sort of development with the word “science” and with progress:

The preceding pages have carried my schematic description of scientific development as far as it can go in this essay. Nevertheless, they cannot quite provide a conclusion. If this description has at all caught the essential structure of a science’s continuing evolution, it will simultaneously have posed a special problem: Why should the enterprise sketched above move steadily ahead in ways that, say, art, political theory, or philosophy does not? Why is progress a perquisite reserved almost exclusively for the activities we call science? The most usual answers to that question have been denied in the body of this essay. We must conclude it by asking whether substitutes can be found.

Notice immediately that part of the question is entirely semantic. To a very great extent the term ‘science’ is reserved for fields that do progress in obvious ways. Nowhere does this show more clearly than in the recurrent debates about whether one or another of the contemporary social sciences is really a science. These debates have parallels in the pre-paradigm periods of fields that are today unhesitatingly labeled science. Their ostensible issue throughout is a definition of that vexing term. Men argue that psychology, for example, is a science because it possesses such and such characteristics. Others counter that those characteristics are either unnecessary or not sufficient to make a field a science. Often great energy is invested, great passion aroused, and the outsider is at a loss to know why. Can very much depend upon a definition of ‘science’? Can a definition tell a man whether he is a scientist or not? If so, why do not natural scientists or artists worry about the definition of the term? Inevitably one suspects that the issue is more fundamental. Probably questions like the following are really being asked: Why does my field fail to move ahead in the way that, say, physics does? What changes in technique or method or ideology would enable it to do so? These are not, however, questions that could respond to an agreement on definition. Furthermore, if precedent from the natural sciences serves, they will cease to be a source of concern not when a definition is found, but when the groups that now doubt their own status achieve consensus about their past and present accomplishments. It may, for example, be significant that economists argue less about whether their field is a science than do practitioners of some other fields of social science. Is that because economists know what science is? Or is it rather economics about which they agree?

The last point is telling. There is significantly more consensus among economists than among practitioners of the other social sciences, and consequently less worry about whether their field is scientific or not. The difference, then, is a difference in the degree of agreement. There is not necessarily any difference with respect to the kind of increasingly detailed thought that results in increasingly technical discussion. Kuhn remarks:

The theologian who articulates dogma or the philosopher who refines the Kantian imperatives contributes to progress, if only to that of the group that shares his premises. No creative school recognizes a category of work that is, on the one hand, a creative success, but is not, on the other, an addition to the collective achievement of the group. If we doubt, as many do, that nonscientific fields make progress, that cannot be because individual schools make none. Rather, it must be because there are always competing schools, each of which constantly questions the very foundations of the others. The man who argues that philosophy, for example, has made no progress emphasizes that there are still Aristotelians, not that Aristotelianism has failed to progress.

In this sense, if a particular school believes it possesses the general truth about some matter (here theology or philosophy), it will quite naturally begin to discuss that matter in greater detail and in ways which are mainly intelligible to students of that school, just as happens in other technical fields. The field is failing to progress only in the sense that there are other large communities making contrasting claims; we begin to use the term “science” and to speak of progress when one school completely dominates the field, so that, to a first approximation, even people who know nothing about it assume that the school has things basically right.

What does this imply about progress in philosophy?

1. There is progress in the knowledge of topics that were once considered “philosophy,” but when we get to this point, we usually begin to use the name of a particular science, and with good reason, since technical specialization arises in the manner discussed above. Tyler Cowen discusses this sort of thing here.

2. Areas in which there doesn’t seem to be such progress are probably most often areas where human knowledge remains at an early stage of development; it is precisely at such early stages that discussion does not have a technical character and can generally be understood by ordinary people without a specialized education. I pointed out that Aristotle was mistaken to assume that the sciences in general were fully developed. We would be equally mistaken to make such an assumption at the present time. As Kuhn notes, astronomy and mathematics achieved a “scientific” stage centuries before geology and biology did the same, and these long before economics and the like. The conclusion that one should draw is that metaphysics is hard, not that it is impossible or meaningless.

3. Even now, particular philosophical schools or individuals can make progress even without such consensus. This is evidently true if their overall position is correct or more correct than that of others, but it remains true even if their overall position is more wrong than that of other schools. Naturally, in the latter situation, they will not advance beyond the better position of other schools, but they will advance.

4. One who wishes to progress philosophically cannot avoid the tendency to technical specialization, even as an individual. This can be rather problematic for bloggers and people engaging in similar projects. John Nerst describes this problem:

The more I think about this issue the more unsolvable it seems to become. Loyal readers of a publication won’t be satisfied by having the same points reiterated again and again. News media get around this by focusing on, well, news. News are events, you can describe them and react to them for a while until they’re no longer news. Publications that aim to be more analytical and focus on discussing ideas, frameworks, slow processes and large-scale narratives instead of events have a more difficult task because their subject matter doesn’t change quickly enough for it to be possible to churn out new material every day without repeating yourself[2].

Unless you start building upwards. Instead of laying out stone after stone on the ground you put one on top of another, and then one on top of two others laying next to each other, and then one on top of all that, making a single three-level structure. In practice this means writing new material that builds on what came before, taking ideas further and further towards greater complexity, nuance and sophistication. This is what academia does when working correctly.

Mass media (including the more analytical outlets) do it very little and it’s obvious why: it’s too demanding[3]. If an article references six other things you need to have read to fully understand it you’re going to have a lot of difficulty attracting new readers.

Some of his conclusions:

I think that’s the real reason I don’t try to pitch more writing to various online publications. In my summary of 2018 I said it was because I thought my writing was “too idiosyncratic, abstract and personal to fit in anywhere but my own blog”. Now I think the main reason is that I don’t so much want to take part in public debate or make myself a career. I want to explore ideas that lie at the edge of my own thinking. To do that I must assume that a reader knows broadly the same things I know and I’m just not that interested in writing about things where I can’t do that[9]. I want to follow my thoughts to for me new and unknown places — and import whatever packages I need to do it. This style isn’t compatible with the expectation that a piece will be able to stand on its own and deliver a single recognizable (and defensible) point[10].

The downside is of course obscurity. To achieve both relevance in the wider world and to build on other ideas enough to reach for the sky you need extraordinary success — so extraordinary that you’re essentially pulling the rest of the world along with you.

Obscurity is certainly one result. Another (relevant at least from the VP’s point of view) is disrespect. Scientists are generally respected despite the general incomprehensibility of their writing, on account of the absence of opposing schools. This absence leads people to assume that the arguments must be mostly right, even when they themselves cannot understand them. It can even lead to an “Emperor Has No Clothes” situation, where a scientist publishes something basically crazy, but others, even in his field, are reluctant to say so for fear of appearing to be the ones who are ignorant. As an example, consider Joy Christian’s “Disproof of Bell’s Theorem.” After finally studying this text, Scott Aaronson comments:

In response to my post criticizing his “disproof” of Bell’s Theorem, Joy Christian taunted me that “all I knew was words.”  By this, he meant that my criticisms were entirely based on circumstantial evidence, for example that (1) Joy clearly didn’t understand what the word “theorem” even meant, (2) every other sentence he uttered contained howling misconceptions, (3) his papers were written in an obscure, “crackpot” way, and (4) several people had written very clear papers pointing out mathematical errors in his work, to which Joy had responded only with bluster.  But I hadn’t actually studied Joy’s “work” at a technical level.  Well, yesterday I finally did, and I confess that I was astonished by what I found.  Before, I’d actually given Joy some tiny benefit of the doubt—possibly misled by the length and semi-respectful tone of the papers refuting his claims.  I had assumed that Joy’s errors, though ultimately trivial (how could they not be, when he’s claiming to contradict such a well-understood fact provable with a few lines of arithmetic?), would nevertheless be artfully concealed, and would require some expertise in geometric algebra to spot.  I’d also assumed that of course Joy would have some well-defined hidden-variable model that reproduced the quantum-mechanical predictions for the Bell/CHSH experiment (how could he not?), and that the “only” problem would be that, due to cleverly-hidden mistakes, his model would be subtly nonlocal.

What I actually found was a thousand times worse: closer to the stuff freshmen scrawl on an exam when they have no clue what they’re talking about but are hoping for a few pity points.  It’s so bad that I don’t understand how even Joy’s fellow crackpots haven’t laughed this off the stage.  Look, Joy has a hidden variable λ, which is either 1 or -1 uniformly at random.  He also has a measurement choice a of Alice, and a measurement choice b of Bob.  He then defines Alice and Bob’s measurement outcomes A and B via the following functions:

A(a,λ) = something complicated = (as Joy correctly observes) λ

B(b,λ) = something complicated = (as Joy correctly observes) -λ

I shit you not.  A(a,λ) = λ, and B(b,λ) = -λ.  Neither A nor B has any dependence on the choices of measurement a and b, and the complicated definitions that he gives for them turn out to be completely superfluous.  No matter what measurements are made, A and B are always perfectly anticorrelated with each other.

You might wonder: what could lead anyone—no matter how deluded—even to think such a thing could violate the Bell/CHSH inequalities?

A model that does nothing but “give opposite answers in all cases” cannot possibly violate Bell’s inequality; it satisfies the inequality trivially. Thus the rest of Joy’s paper has no bearing whatsoever on the issue: it is essentially meaningless nonsense. Aaronson says he was possibly “misled by the length and semi-respectful tone of the papers refuting his claims.” But it is not difficult to see why people would be cautious in this way: the fear that they would turn out to be the ones missing something important.
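To see concretely why constant anticorrelation is harmless, here is a minimal sketch in Python (illustrative only, and not Joy Christian’s actual geometric-algebra formalism): once the outcome functions reduce, as Aaronson found, to A(a,λ) = λ and B(b,λ) = -λ, the measured correlation is -1 for every pair of settings, and the CHSH quantity sits exactly at the classical bound of 2.

```python
# A sketch, not Christian's formalism: outcome functions with no dependence
# on the measurement settings, as Aaronson describes them.
import random

def A(a, lam):
    return lam      # Alice's outcome: ignores her setting a entirely

def B(b, lam):
    return -lam     # Bob's outcome: ignores his setting b entirely

def E(a, b, trials=100_000):
    """Estimate the correlation <A*B> for settings (a, b)."""
    total = 0
    for _ in range(trials):
        lam = random.choice([1, -1])    # the hidden variable
        total += A(a, lam) * B(b, lam)
    return total / trials

# Any four settings whatsoever give E = -1, so the CHSH combination
# S = E(a,b) - E(a,b') + E(a',b) + E(a',b') is -2: |S| <= 2, no violation.
a0, a1, b0, b1 = 0.0, 1.0, 2.0, 3.0
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(S)    # -2.0
```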

The individual blogger in philosophy, however, is in a different position. If they wish to develop their thought, it must become more technical, and there is no similar community backing that would cause others to assume that the writing basically makes sense. Thus one’s writing is not only likely to become more and more obscure, but others will become more and more likely to assume that it is more or less meaningless word salad. This will happen all the more to the degree that there is cultural opposition to one’s vocabulary, concepts, and topics.

Violations of Bell’s Inequality: Drawing Conclusions

In the post on violations of Bell’s inequality, represented there by Mark Alford’s twin analogy, I pointed out that things did not seem to go very well for Einstein’s hopes for physics, but I did not draw any specific conclusions. Here I will consider the likely consequences, first by looking at the relationship of the experiments to Einstein’s position on causality and determinism, and second by looking at their relationship to Einstein’s position on locality and action at a distance.

Einstein on Determinism

Einstein hoped for “facts” instead of probabilities. Everything should be utterly fixed by the laws, much like the position recently argued by Marvin Edwards in the comments here.

On the face of it, violations of Bell’s inequality rule this out: as the twin analogy illustrates, if the twins had pre-existing determinate plans, it would be impossible for them to give the same answer less than 1/3 of the time when they are asked different questions. Bell, however, pointed out that it is possible to formulate a deterministic theory which would give similar probabilities, at the cost of positing action at a distance (quoted here):

Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.

Nonetheless, I have set aside action at a distance to be discussed separately, and I would argue that we should accept the above surface appearance: the outcomes of quantum mechanical experiments are actually indeterministic. These probabilities represent something in the world, not merely something in our knowledge.

Why? In the first place, note that “reproduces exactly the quantum mechanical predictions” can be understood in two ways: as a claim about our knowledge, or as a claim about the world. A deterministic theory of that kind would say that because the details are unknown to us, we cannot know what is going to happen; the probabilities would be merely epistemic. But the details would be there, and they would in fact determine what is going to happen. There is thus still a difference on the object level between a world where the present fixes the future to a single possibility, and one in which the future is left open, as Aristotle supposed.

Of course there is no definitive proof here that we are actually in the situation with the open future, although the need for action at a distance in the alternative theory suggests that we are. Even apart from violations of Bell’s inequality, however, the general phenomena of quantum mechanics already looked exactly as we should have expected a world with an indeterminate future to look.

If this is the case, then Einstein was mistaken on this point, at least to this extent. But what about the deterministic aspect, which I mentioned at the end of this post, and which Schrödinger describes:

At all events it is an imagined entity that images the blurring of all variables at every moment just as clearly and faithfully as does the classical model its sharp numerical values. Its equation of motion too, the law of its time variation, so long as the system is left undisturbed, lags not one iota, in clarity and determinacy, behind the equations of motion of the classical model.

The answer is that this is deterministic not because the future, as we know it, is deterministic, but because it describes all of the possibilities at once. Thus in the case of the cat it includes both the cat living and the cat dying, which are two possible outcomes. It is “deterministic” only because once you have stated all of the alternatives, there is nothing left to say.
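A toy illustration may help here (a sketch only; the two-level “cat” system and the Hamiltonian are arbitrary stand-ins for any quantum system): the state vector, which tracks both alternatives at once, evolves with perfect determinacy, and chance enters only when a single outcome is selected.

```python
# A toy sketch: deterministic evolution of the full state, random outcomes.
import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)   # amplitudes for (alive, dead)
H = np.array([[0.0, 1.0], [1.0, 0.0]])      # some fixed Hamiltonian

def evolve(psi, t):
    """Schrodinger evolution psi(t) = exp(-iHt) psi(0): same t in, same state out."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ psi

psi_t = evolve(psi, t=0.7)     # deterministic: describes both alternatives
probs = np.abs(psi_t) ** 2
probs /= probs.sum()           # Born rule: only here does chance appear
print(np.random.choice(["alive", "dead"], p=probs))
```

The point of the design is that the randomness attaches only to the selection of one alternative, never to the evolution of the description itself.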

Why did Einstein want a deterministic theory? He openly admits that he does not have a convincing argument for it. It seems likely, however, that the fundamental motivation is the conviction that reality is intelligible. And an indeterministic world seems significantly less intelligible than a deterministic one. But this desire can in fact be satisfied by this second kind of “determinism”; thus Schrödinger calls it “one perfectly clear concept.”

In this respect, Einstein’s intuition was not mistaken. It is possible to give an intelligible account of the world, even a “deterministic” one, in this sense.

Einstein on Locality

Einstein also wanted to avoid “spooky action at a distance.” Admitting that the future is indeterminate, however, is not enough to avoid this conclusion. In Mark Alford’s twin analogy, it is not only pre-determined plans that fail, but also plans that involve randomness. Thus it at first appears that the violations of Bell’s inequality absolutely require action at a distance.

If we follow my suggestion here, however, and consequently adopt Hugh Everett’s interpretation of quantum mechanics, then saying that there are multiple future possibilities implies the existence of multiple timelines. And if there are multiple timelines, violations of Bell’s inequality no longer necessarily imply action at a distance.

Why not? Consider the twin experiment with the assumption of indeterminacy and multiple timelines. Suppose that from the very beginning there are two copies of each twin. The first copy of the first twin has the plan of responding to the three questions with “yes/yes/yes.” Likewise, the first copy of the second twin has the plan of responding to the three questions with “yes/yes/yes.” In contrast, the second copy of each twin has the plan of responding with “no/no/no.”

Now we have four twins, but the experimenter only sees two. So which ones does he see? There is nothing impossible about the following “rule”: if the twins are asked different questions, the experimenter sees the first copy of one of the twins and the second copy of the other twin. Meanwhile, if the twins are asked the same question, the experimenter sees either the first copy of each twin, or the second copy of each twin. It is easy to see that if this is the case, the experimenter will see the twins agree whenever they are asked the same question, and will see them disagree whenever they are asked different questions (thus agreeing 0% of the time in that situation, well under 1/3).
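A short simulation may make the rule vivid (a sketch with the hypothetical plans just described; nothing depends on the details of the encoding):

```python
# The "rule" above: which copies the experimenter meets depends on whether
# the questions match, and this alone reproduces the observed pattern.
import random

PLANS = {"first": ["yes", "yes", "yes"], "second": ["no", "no", "no"]}

def run_trial():
    q1, q2 = random.randrange(3), random.randrange(3)
    if q1 == q2:
        copy = random.choice(["first", "second"])          # matching copies
        a, b = PLANS[copy][q1], PLANS[copy][q2]
    else:
        a, b = PLANS["first"][q1], PLANS["second"][q2]     # mismatched copies
    return q1 == q2, a == b

results = [run_trial() for _ in range(100_000)]
same_q = [agree for same, agree in results if same]
diff_q = [agree for same, agree in results if not same]
print(sum(same_q) / len(same_q))   # 1.0: always agree on the same question
print(sum(diff_q) / len(diff_q))   # 0.0: never agree otherwise, far below 1/3
```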

“Wait,” you will say. “If multiple timelines are just a way of describing a situation with indeterminism, and indeterminism is not enough to avoid action at a distance, how is it possible for multiple timelines to give a way out?”

From the beginning, the apparent “impossibility” of the outcome was a statistical impossibility, not a logical impossibility. Naturally this had to be the case, since if it were a logical impossibility, we could not have coherently described the actual outcomes. Thus we might imagine that David Hume would give this answer:

The twins are responding randomly to each question. By pure chance, they happened to agree the times they were asked the same question, and by pure chance they violated Bell’s inequality when they were asked different questions.

Since this was all a matter of pure chance, of course, if you do the experiment again tomorrow, it will turn out that all of the answers are random and they will agree and disagree 50% of the time on all questions.

And this answer is logically possible, but false. This account does not explain the correlation, but simply ignores it. In a similar way, the reason why indeterministic theories without action at a distance, but described as having a single timeline, cannot explain the results is that, in order to explain the correlation, the outcomes on both sides need to be selected together, so to speak. But “without action at a distance” in this context simply means that they are not selected together. This makes the observed outcome statistically impossible.
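The statistical character of the impossibility is easy to exhibit (a sketch of Hume’s story, not of any physical theory): twins answering independently at random can of course produce any single transcript, but over many trials their statistics look nothing like the observed ones.

```python
# Hume's "pure chance" account: each twin answers at random, and nothing
# selects the two outcomes together.
import random

def run_trial():
    q1, q2 = random.randrange(3), random.randrange(3)
    a = random.choice(["yes", "no"])    # independent of the other twin
    b = random.choice(["yes", "no"])
    return q1 == q2, a == b

results = [run_trial() for _ in range(100_000)]
for label, same in [("same question:", True), ("different questions:", False)]:
    subset = [agree for s, agree in results if s == same]
    print(label, sum(subset) / len(subset))
```

Agreement hovers around 50% in both cases, rather than the perfect same-question agreement and sub-1/3 different-question agreement actually observed.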

In our multiple timelines version, in contrast, our “rule” above in effect selected the outcomes together. In other words, the rule we gave regarding which pairs of twins the experimenter would meet had the same effect as action at a distance.

How is all this an explanation? The point is that the particular way that timelines spread out when they come into contact with other things, in the version with multiple timelines, exactly corresponds to action at a distance, in the version without them. An indeterministic theory represented as having a single timeline and no action at a distance could be directly translated into a version with multiple timelines; but if we did that, this particular multiple timeline version would not have the rule that produces the correct outcomes. And on the other hand, if we start with the multiple timeline version that does have the rule, and translate it into a single timeline account, it will have action at a distance.

What does all this say about Einstein’s opinion about locality? Was he right, or was he wrong?

We might simply say that he was wrong, insofar as the actual situation can in fact be described as including action at a distance, even if it is not necessary to describe it in this way, since we can describe it with multiple timelines and without action at a distance. But to the degree that this suggests that Einstein made two mistakes, one about determinism and one about action at a distance, I think this is wrong. There was only one mistake, and it was the one about determinism. The fact is that as soon as you speak of indeterminism at all, it becomes possible to speak of the world as having multiple timelines. So the question at that point is whether this is the “natural” description of the situation, where the natural description means, more or less, the best way to understand things. If it is, then the possibility of “action at a distance” is not an additional mistake on Einstein’s part, but rather an artifact of describing the situation as though there were only a single timeline.

You might say that there cannot be a better or worse way to understand things if two accounts are objectively equivalent. But this is wrong. Thus, for example, in general relativity it is probably possible to give an account where the earth has no daily rotation, and the universe is spinning around it every 24 hours. And this account is objectively equivalent to the usual account where the earth is spinning; exactly the same situation is being described, and nothing different is being asserted. And yet this account is weird in many ways, and makes it very hard to understand the universe. The far better and more “natural” description is that the earth is spinning. Note, however, that this is an overall result; just looking out the window, you might have thought that saying that the universe is spinning is more natural. (Notice, however, that an even more natural account would be that neither the earth nor the universe is moving; it is only later in the day that you begin to figure out that one of them is moving.)

In a similar way, a single timeline account is originally more natural, in the way a Ptolemaic account is more natural when you look out the window. But I would argue that the multiple timeline account, without action at a distance, is ultimately the more natural one. The basic reason for this is that there is no Newtonian Absolute Time. The consequence is that if we speak of “future possibilities,” they cannot be future possibilities for the entire universe at once. They will be fairly localized future possibilities: e.g. there might be more than one possible text for the ending of this blog post, which has not yet been written, and those possibilities are originally possibilities for what happens here in this room, not for the rest of the universe. These future alternatives will naturally result in future possibilities for other parts of the world, but this will happen “slowly,” so to speak (namely, if one wishes to speak of the speed of light as slow!). This fits well with the idea of multiple timelines, since there will have to be some process whereby these multiple timelines come into contact with the rest of the world, much as with our “rule” in the twin experiment. On the other hand, it does not fit so well with a single timeline account of future possibilities, since one is forced (by the terms of the account) to imagine that when a choice among possibilities is made, it is made for the entire universe at once, which appears to require Newton’s Absolute Time.

This suggests that Einstein was basically right about action at a distance, and wrong about determinism. But the intuition that motivated him to embrace both positions, namely that the universe should be intelligible, was sound.

Spooky Action at a Distance

Albert Einstein objected to the usual interpretations of quantum mechanics because they seemed to him to imply “spooky action at a distance,” a phrase taken from a letter from Einstein to Max Born in 1947 (page 155 in this book):

I cannot make a case for my attitude in physics which you would consider at all reasonable. I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognize clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance. I am, however, not yet firmly convinced that it can really be achieved with a continuous field theory, although I have discovered a possible way of doing this which so far seems quite reasonable. The calculation difficulties are so great that I will be biting the dust long before I myself can be fully convinced of it. But I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently. I cannot, however, base this conviction on logical reasons, but can only produce my little finger as witness, that is, I offer no authority which would be able to command any kind of respect outside of my own hand.

Einstein has two objections: the theory seems to be indeterministic, and it also seems to imply action at a distance. He finds both of these implausible. He thinks physics should be deterministic, “as used to be taken for granted until quite recently,” and that all interactions should be local: things directly affect only things which are close by, and affect distant things only indirectly.

In many ways, things do not appear to have gone well for Einstein’s intuitions. John Bell constructed a mathematical argument, now known as Bell’s Theorem, that the predictions of quantum mechanics cannot be reproduced by the kind of theory desired by Einstein. Bell summarizes his point:

The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no “hidden variable” interpretation of quantum mechanics is possible. These attempts have been examined elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.

“Causality and locality” in this description are exactly the two points where Einstein objected in the quoted letter: causality, as understood here, implies determinism, and locality implies no spooky action at a distance. Given this result, Einstein might have hoped that the predictions of quantum mechanics would turn out to fail, so that he could still have his desired physics. This did not happen. On the contrary, these predictions (precisely those inconsistent with such theories) have been verified time and time again.

Rather than putting the reader through Bell’s math and physics, we will explain his result with an analogy by Mark Alford. Alford makes this comparison:

Imagine that someone has told us that twins have special powers, including the ability to communicate with each other using telepathic influences that are “superluminal” (faster than light). We decide to test this by collecting many pairs of twins, separating each pair, and asking each twin one question to see if their answers agree.

To make things simple we will only have three possible questions, and they will be Yes/No questions. We will tell the twins in advance what the questions are.

The procedure is as follows.

  1. A new pair of twins is brought in and told what the three possible questions are.
  2. The twins travel far apart in space to separate questioning locations.
  3. At each location there is a questioner who selects one of the three questions at random, and poses that question to the twin in front of her.
  4. Spacelike separation. When the question is chosen and asked at one location, there is not enough time for any influence traveling at the speed of light to get from there to the other location in time to affect either what question is chosen there, or the answer given.

He now supposes the twins give the same responses when they are asked the same question, and discusses this situation:

Now, suppose we perform this experiment and we find same-question agreement: whenever a pair of spacelike-separated twins both happen to get asked the same question, their answers always agree. How could they do this? There are two possible explanations,

1. Each pair of twins uses superluminal telepathic communication to make sure both twins give the same answer.

2. Each pair of twins follows a plan. Before they were separated they agreed in advance what their answers to the three questions would be.

The same-question agreement that we observe does not prove that twins can communicate telepathically faster than light. If we believe that strong locality is a valid principle, then we can resort to the other explanation, that each pair of twins is following a plan. The crucial point is that this requires determinism. If there were any indeterministic evolution while the twins were spacelike separated, strong locality requires that the random component of one twin’s evolution would have to be uncorrelated with the other twin’s evolution. Such uncorrelated indeterminism would cause their recollections of the plan to diverge, and they would not always show same-question agreement.

The results are understandable if the twins agree on the answers Yes-Yes-Yes, or Yes-No-Yes, or any other determinate combination. But they are not understandable if they decide, for example, to flip coins whenever they are asked the second question. If they did this, they would have to disagree 50% of the time on that question, unless one of the coin flips affected the other.
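The 50% figure is just the chance that two independent fair coins match, which a two-line check confirms (a sketch, not part of Alford’s text):

```python
# Two independent fair coin flips agree half the time on average.
import random

trials = 100_000
agree = sum(random.choice([0, 1]) == random.choice([0, 1]) for _ in range(trials))
print(agree / trials)   # ~0.5, not the perfect agreement actually observed
```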

Alford goes on to discuss what happens when the twins are asked different questions:

In the thought experiment as described up to this point we only looked at the recorded answers in cases where each twin in a given pair was asked the same question. There are also recorded data on what happens when the two questioners happen to choose different questions. Bell noticed that this data can be used as a cross-check on our strong-locality-saving idea that the twins are following a pre-agreed plan that determines that their answers will always agree. The cross-check takes the form of an inequality:

Bell inequality for twins:

If a pair of twins is following a plan then, when each twin is asked a different randomly chosen question, their answers will be the same, on average, at least 1/3 of the time.

He derives this value:

For each pair of twins, there are four general types of pre-agreed plan they could adopt when they are arranging how they will both give the same answer to each of the three possible questions.

(a) a plan in which all three answers are Yes;

(b) a plan in which there are two Yes and one No;

(c) a plan in which there are two No and one Yes;

(d) a plan in which all three answers are No.

If, as strong locality and same-question agreement imply, both twins in a given pair follow a shared predefined plan, then when the random questioning leads to each of them being asked a different question from the set of three possible questions, how often will their answers happen to be the same (both Yes or both No)? If the plan is of type (a) or (d), both answers will always be the same. If the plan is of type (b) or (c), both answers will be the same 1/3 of the time. We conclude that no matter what type of plan each pair of twins may follow, the mere fact that they are following a plan implies that, when each of them is asked a different randomly chosen question, they will both give the same answer (which might be Yes or No) at least 1/3 of the time. It is important to appreciate that one needs data from many pairs of twins to see this effect, and that the inequality holds even if each pair of twins freely chooses any plan they like.
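Alford’s case analysis can also be checked by brute force (a sketch; the representation of plans is merely illustrative): enumerate all eight possible plans and count how often the answers coincide over the six ordered pairs of distinct questions.

```python
# Every possible pre-agreed plan, and the fraction of different-question
# pairs on which the two answers coincide.
from itertools import product

pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
for plan in product(["yes", "no"], repeat=3):
    same = sum(plan[i] == plan[j] for i, j in pairs) / len(pairs)
    print(plan, same)

# Output: the two uniform plans (types (a) and (d)) give 1.0, and every
# mixed plan (types (b) and (c)) gives 1/3. Hence any mixture of plans
# agrees at least 1/3 of the time on different questions: the inequality.
```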

The “Bell inequality” is violated if we do the experimental test and the twins end up agreeing, when they are asked different questions, less than 1/3 of the time, despite consistently agreeing when they are asked the same question. If one saw such results in reality, one might be forgiven for concluding that the twins do have superluminal telepathic abilities. Unfortunately for Einstein, this is what we do get, consistently, when we test the analogous quantum mechanical version of the experiment.
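For comparison, the quantum calculation behind the analogy can be stated briefly (a sketch following Mermin’s well-known version of the thought experiment, with the three “questions” taken as measurement directions 120 degrees apart in a single plane; the resulting figures, same-question agreement of 1 and different-question agreement of 1/4, are the standard result):

```python
# An entangled pair measured along directions 120 degrees apart agrees
# only 1/4 of the time when the directions differ, below the 1/3 bound.
import numpy as np

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)

def observable(theta):
    """Spin measurement along angle theta in the x-z plane."""
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(theta) * Z + np.sin(theta) * X

def p_same(ta, tb):
    """Probability that the two measurements give the same answer."""
    E = phi @ np.kron(observable(ta), observable(tb)) @ phi
    return (1 + E) / 2

angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]    # the three "questions"
print(p_same(angles[0], angles[0]))   # ~1.0: same question, always agree
print(p_same(angles[0], angles[1]))   # ~0.25: different questions, < 1/3
```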