“Moral” Responsibility

In a passage quoted here, Jerry Coyne objected to the “moral” in “moral responsibility”:

To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

Suppose someone completely insane happens to kill another person, under the mistaken belief that they are doing something completely different. In such a case, “an identifiable person did this or that good or bad action,” and yet we do not say they are responsible, much less blame such a person; rather we may subject them to physical restraints, but we no more blame them than we blame the weather for the deaths that it occasionally inflicts on people. In other words, Coyne’s definition does not even work for “responsibility,” let alone moral responsibility.

“Moral action” has a specific meaning: not merely an action considered in itself, but an action considered in comparison with the good proposed by human reason. Consequently we have moral action only when we have something voluntarily done by a human being for a reason, or (if without a reason) with the voluntary omission of the consideration of reasons. In exactly the same situations we have moral responsibility: namely, someone voluntarily did something good, or someone voluntarily did something bad.

Praise and blame are added precisely because people are acting for reasons, and given that people tend to like praise and dislike blame, these elements, if rightly applied, will make good things better, and thus more likely to be pursued, and bad things worse, and thus more likely to be avoided. As an aside, this also suggests occasions when it is a bad idea to blame someone for something bad: namely, when blame is not likely to reduce the bad activity, or is likely to reduce it only by very little, since in this case you are simply making things worse, period.

Stop, Coyne and others will say. Even if we agree with the point about praise and blame, we do not agree about moral responsibility, unless determinism is false. And nothing in the above paragraphs even refers to determinism or its opposite, and thus the above cannot be a full account of moral responsibility.

The above is, in fact, a basically complete account of moral responsibility. Although determinism is false, as was said in the linked post, its falsity has nothing to do with the matter one way or another.

The confusion about this results from a confusion between an action as a being in itself, and an action as moral, namely as considered by reason. This distinction was discussed here while considering what it means to say that some kinds of actions are always wrong. It is quite true that considered as a moral action, it would be wrong to blame someone if they did not have any other option. But that situation would be a situation where no reasonable person would act otherwise. And you do not blame someone for doing something that all reasonable people would do. You blame them in a situation where reasonable people would do otherwise: there are reasons for doing something different, but they did not act on those reasons.

But it is not the case that blame or moral responsibility depends on whether or not there is a physically possible alternative, because to consider physical alternatives is simply to speak of the action as a being in itself, and not as a moral act at all.


Quantum Mechanics and Libertarian Free Will

In a passage quoted in the last post, Jerry Coyne claims that quantum indeterminacy is irrelevant to free will: “Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own ‘will.'”

Coyne seems to be thinking that since quantum indeterminism has fixed probabilities in any specific situation, the result for human behavior would necessarily be like our second imaginary situation in the last post. There might be a 20% chance that you would randomly do X, and an 80% chance that you would randomly do Y, and nothing can affect these probabilities. Consequently you cannot be morally responsible for doing X or for doing Y, nor should you be praised or blamed for them.

Wait, you might say. Coyne explicitly favors praise and blame in general. But why? If you would not praise or blame someone doing something randomly, why should you praise or blame someone doing something in a deterministic manner? As explained in the last post, the question is whether reasons have any influence on your behavior. Coyne is assuming that if your behavior is deterministic, it can still be influenced by reasons, but if it is indeterministic, it cannot be. But there is no reason for this to be the case. Your behavior can be influenced by reasons whether it is deterministic or not.

St. Thomas argues for libertarian free will on the grounds that there can be reasons for opposite actions:

Man does not choose of necessity. And this is because that which is possible not to be, is not of necessity. Now the reason why it is possible not to choose, or to choose, may be gathered from a twofold power in man. For man can will and not will, act and not act; again, he can will this or that, and do this or that. The reason of this is seated in the very power of the reason. For the will can tend to whatever the reason can apprehend as good. Now the reason can apprehend as good, not only this, viz. “to will” or “to act,” but also this, viz. “not to will” or “not to act.” Again, in all particular goods, the reason can consider an aspect of some good, and the lack of some good, which has the aspect of evil: and in this respect, it can apprehend any single one of such goods as to be chosen or to be avoided. The perfect good alone, which is Happiness, cannot be apprehended by the reason as an evil, or as lacking in any way. Consequently man wills Happiness of necessity, nor can he will not to be happy, or to be unhappy. Now since choice is not of the end, but of the means, as stated above (Article 3); it is not of the perfect good, which is Happiness, but of other particular goods. Therefore man chooses not of necessity, but freely.

Someone might object that if both are possible, there cannot be a reason why someone chooses one rather than the other. This is basically the claim in the third objection:

Further, if two things are absolutely equal, man is not moved to one more than to the other; thus if a hungry man, as Plato says (Cf. De Coelo ii, 13), be confronted on either side with two portions of food equally appetizing and at an equal distance, he is not moved towards one more than to the other; and he finds the reason of this in the immobility of the earth in the middle of the world. Now, if that which is equally (eligible) with something else cannot be chosen, much less can that be chosen which appears as less (eligible). Therefore if two or more things are available, of which one appears to be more (eligible), it is impossible to choose any of the others. Therefore that which appears to hold the first place is chosen of necessity. But every act of choosing is in regard to something that seems in some way better. Therefore every choice is made necessarily.

St. Thomas responds to this that it is a question of what the person considers:

If two things be proposed as equal under one aspect, nothing hinders us from considering in one of them some particular point of superiority, so that the will has a bent towards that one rather than towards the other.

Thus for example, someone might decide to become a doctor because it pays well, or they might decide to become a truck driver because they enjoy driving. Whether they consider “what would I enjoy?” or “what would pay well?” will determine which choice they make.

The reader might notice a flaw, or at least a loose thread, in St. Thomas’s argument. In our example, what determines whether you think about what pays well or what you would enjoy? This could be yet another choice. I could create a spreadsheet of possible jobs and think, “What should I put on it? Should I put the pay, or should I put what I enjoy?” But obviously the question about necessity will simply be pushed back, in this case. Is this choice itself determinate or indeterminate? And what determines what choice I make in this case? Here we are discussing an actual temporal series of thoughts, and it absolutely must have a first member, since human life has a beginning in time. Consequently there will have to be a point where, if there is the possibility of “doing A for reason B” and “doing C for reason D”, it cannot be any additional consideration that determines which one is done.

Now it is possible at this point that St. Thomas is mistaken. It might be that the hypothesis that both were “really” possible is mistaken, and something does determine one rather than the other with “necessity.” It is also possible that he is not mistaken. Either way, human reasons do not influence the determination, because reason B and/or reason D are the first reasons considered, by hypothesis (if they were not, we would simply push back the question).

At this point someone might consider this lack of the influence of reasons to imply that people are not morally responsible for doing A or for doing C. The problem with this is that if you do something without a reason (and without potentially being influenced by a reason), then indeed you would not be morally responsible. But the person doing A or C is not uninfluenced by reasons. They are influenced by reason B, or by reason D. Consequently, they are responsible for their specific action, because they do it for a reason, despite the fact that there is some other general issue that they are not responsible for.

What influence could quantum indeterminacy have here? It might be responsible for deciding between “doing A for reason B” and “doing C for reason D.” And as Coyne says, this would be “simple randomness,” with fixed probabilities in any particular situation. But none of this would prevent this from being a situation that would include libertarian free will, since libertarian free will is precisely nothing but the situation where there are two real possibilities: you might do one thing for one reason, or another thing for another reason. And that is what we would have here.

Does quantum mechanics have this influence in fact, or is this just a theoretical possibility? It very likely does. Some argue that it probably doesn’t, on the grounds that quantum mechanics does not typically seem to imply much indeterminacy for macroscopic objects. The problem with this argument is that the only way of knowing that quantum indeterminacy rarely leads to large scale differences is by using humanly designed items like clocks or computers. And these are specifically designed to be determinate: whenever our artifact is not sufficiently determinate and predictable, we change the design until we get something predictable. If we look at something in nature uninfluenced by human design, like a waterfall, its details are highly unpredictable to us. Which drop of water will be the most distant from this particular point one hour from now? There is no way to know.

But how much real indeterminacy is in the waterfall, or in the human brain, due to quantum indeterminacy? Most likely nobody knows, but it is basically a question of timescales. Do you get a great deal of indeterminacy after one hour, or after several days? One way or another, with the passage of enough time, you will get a degree of real indeterminacy as high as you like. The same thing will be equally true of human behavior. We often notice, in fact, that at short timescales there is less indeterminacy than we subjectively feel. For example, if someone hesitates to accept an invitation, in many situations, others will know that the person is very likely to decline. But the person feels very uncertain, as though there were a 50/50 chance of accepting or declining. The real probabilities might be 90/10 or even more slanted. Nonetheless, the question is one of timescales and not of whether or not there is any indeterminacy. There is; this is basically settled; it will apply to human behavior; and there is little reason to doubt that it applies at relatively short timescales compared to the timescales at which it applies to clocks, computers, and other things designed with predictability in mind.

In this sense, quantum indeterminacy strongly suggests that St. Thomas is basically correct about libertarian free will.

On the other hand, Coyne is also right about something here. It is not true that such “randomness” removes moral responsibility, since responsibility rests on the fact that people do things for reasons, and praise and blame are fitting responses to actions done for reasons; but Coyne correctly notices that the randomness does not add to the fact that someone is responsible. If there is no human reason for the fact that a person did A for reason B rather than C for reason D, this makes their actions less intelligible, and thus less subject to responsibility. In other words, the “libertarian” part of libertarian free will does not make the will more truly a will, but less truly. In this respect, Coyne is right. This however is unrelated to quantum mechanics or to any particular scientific account. The thoughtful person can understand this simply from general considerations about what it means to act for a reason.

Causality and Moral Responsibility

Consider two imaginary situations:

(1) In the first situation, people are such that when someone sees a red light, they immediately go off and kill someone. Nothing can be done to prevent this, and no intention or desire to do otherwise makes any difference.

In this situation, we do not blame people who kill after seeing a red light, since the killing cannot be avoided, but we do blame people who show red lights to others. Such people are arrested and convicted as murderers.

(2) In the second situation, people are such that when someone sees a red light, there is a 5% chance they will go off and immediately kill someone, and a 95% chance they will behave normally. Nothing can change this probability: it does not matter whether the person is wicked or virtuous or what their previous attitude to killing was.

In this situation, again, we do not blame people who end up killing someone, but we call them unlucky. We do however blame people who show others red lights, and they are arrested and convicted of second degree murder, or in some cases manslaughter.

Some people would conclude from this that moral responsibility is incoherent: whether the world is deterministic or not, moral responsibility is impossible. Jerry Coyne defends this position in numerous places, as for example here:

We’ve taken a break from the many discussions on this site about free will, but, cognizant of the risks, I want to bring it up again. I think nearly all of us agree that there’s no dualism involved in our decisions: they’re determined completely by the laws of physics. Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own “will.”

Coyne would perhaps say that “free will” embodies a contradiction much in the way that “square circle” does. “Will” implies a cause, and thus something deterministic. “Free” implies indeterminism, and thus no cause.

In many places Coyne asserts that this implies that moral responsibility does not exist, as for example here:

This four-minute video on free will and responsibility, narrated by polymath Raoul Martinez, was posted by the Royal Society for the Encouragement of the Arts, Manufactures, and Commerce (RSA). Martinez’s point is one I’ve made here many times, and will surely get pushback from: determinism rules human behavior, and our “choices” are all predetermined by our genes and environment. To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

I think that Coyne is very wrong about the meaning of free will, somewhat wrong about responsibility, and likely wrong about the consequences of his views for society (e.g. he believes that his view will lead to more humane treatment of prisoners; there is no particular reason to expect this).

The imaginary situations described in the initial paragraphs of this post do not imply that moral responsibility is impossible, but they do tell us something. In particular, they tell us that responsibility is not directly determined by determinism or its lack. And although Coyne says that “moral responsibility” implies indeterminism, surely even Coyne would not advocate blaming or punishing the person who had the 5% chance of going and killing someone. And the reason is clear: it would not “reinforce good behavior” or be “salubrious for society.” By the terms set out, it would make no difference, so blaming or punishing would be pointless.

Coyne is right that determinism does not imply that punishment is pointless. And he also recognizes that indeterminism does not of itself imply that anyone is responsible for anything. But he fails here to put two and two together: just as determinism implies neither that punishment is pointless nor that it has a point, so indeterminism likewise implies neither. The conclusion he should draw is not that moral responsibility is meaningless, but that it is independent of both determinism and indeterminism; that is, that both deterministic compatibilism and libertarian free will allow for moral responsibility.

So what is required for praise and blame to have a point? Elsewhere we discussed C.S. Lewis’s claim that something can have a reason or a cause, but not both. In a sense, the initial dilemma in this post can be understood as a similar argument. Either our behavior has deterministic causes, or it has indeterministic causes; therefore it does not have reasons; therefore moral responsibility does not exist.

On the other hand, if people do have reasons for their behavior, there can be good reasons for blaming people who do bad things, and for punishing them. Namely, since those people are themselves acting for reasons, they will be less likely in the future to do those things, and likewise other people, fearing punishment and blame, will be less likely to do them.

As I said against Lewis, reasons do not exclude causes, but require them. Consequently what is necessary for moral responsibility are causes that are consistent with having reasons; one can easily imagine causes that are not consistent with having reasons, as in the imaginary situations described, and such causes would indeed exclude responsibility.

Violations of Bell’s Inequality: Drawing Conclusions

In the post on violations of Bell’s inequality, represented there by Mark Alford’s twin analogy, I pointed out that things did not seem to go very well for Einstein’s hope for physics, but I did not draw any specific conclusions. Here I will consider the likely consequences, looking first at the relationship of the experiments to Einstein’s position on causality and determinism, and second at their relationship to his position on locality and action at a distance.

Einstein on Determinism

Einstein hoped for “facts” instead of probabilities. Everything should be utterly fixed by the laws, much like the position recently argued by Marvin Edwards in the comments here.

On the face of it, violations of Bell’s inequality rule this out, as represented by the argument that if the twins had pre-existing determinate plans, it would be impossible for them to give the same answer less than 1/3 of the time when they are asked different questions. Bell, however, pointed out that it is possible to formulate a deterministic theory which would give similar probabilities, at the cost of positing action at a distance (quoted here):

Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.
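The statistical claim here, that twins with pre-existing determinate plans must agree at least 1/3 of the time when asked different questions, can be checked by brute-force enumeration. This is a minimal sketch of the three-question, yes/no version of the analogy; the setup and names are illustrative, not taken from any source:

```python
from itertools import product, combinations

# A deterministic "plan" fixes a yes/no answer to each of the three
# questions. Since the twins always agree when asked the same question,
# they must share one plan; we check every possible shared plan.
min_agreement = 1.0
for plan in product(["yes", "no"], repeat=3):
    # Unordered pairs of *different* questions: (0,1), (0,2), (1,2).
    pairs = list(combinations(range(3), 2))
    agree = sum(plan[i] == plan[j] for i, j in pairs) / len(pairs)
    min_agreement = min(min_agreement, agree)

print(min_agreement)  # 0.3333...: no plan can agree less often than 1/3
```

The reason is just the pigeonhole principle: two of any three yes/no answers must coincide, so every plan agrees on at least one of the three pairs of different questions. Observed agreement below 1/3 is what rules such plans out.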

Nonetheless, I have set aside action at a distance to be discussed separately, and I would argue that we should accept the above surface appearance: the outcomes of quantum mechanical experiments are actually indeterministic. These probabilities represent something in the world, not merely something in our knowledge.

Why? In the first place, note that “reproduces exactly the quantum mechanical predictions” can be understood in two ways: the probabilities might be merely a matter of our ignorance, or they might be features of the world itself. A deterministic theory of that kind would say that because the details are unknown to us, we cannot know what is going to happen; but the details are there, and they in fact determine what is going to happen. There is still a difference on the object level between a world where the present fixes the future to a single possibility, and one in which the future is left open, as Aristotle supposed.

Of course there is no definitive proof here that we are actually in the situation with the open future, although the need for action at a distance in the alternative theory suggests that we are. Even apart from this, however, the general phenomena of quantum mechanics directly suggest that this is the situation. Even apart from violations of Bell’s inequality, quantum mechanics in general already looked exactly as we should have expected a world with an indeterminate future to look.

If this is the case, then Einstein was mistaken on this point, at least to this extent. But what about the deterministic aspect, which I mentioned at the end of this post, and which Schrödinger describes:

At all events it is an imagined entity that images the blurring of all variables at every moment just as clearly and faithfully as does the classical model its sharp numerical values. Its equation of motion too, the law of its time variation, so long as the system is left undisturbed, lags not one iota, in clarity and determinacy, behind the equations of motion of the classical model.

The answer is that this is deterministic not because the future, as we know it, is deterministic, but because it describes all of the possibilities at once. Thus in the case of the cat it includes both the cat living and the cat dying, which are two possible outcomes. It is “deterministic” only because once you have stated all of the alternatives, there is nothing left to say.

Why did Einstein want a deterministic theory? He openly admits that he does not have a convincing argument for it. It seems likely, however, that the fundamental motivation is the conviction that reality is intelligible. And an indeterministic world seems significantly less intelligible than a deterministic one. But this desire can in fact be satisfied by this second kind of “determinism”; thus Schrödinger calls it “one perfectly clear concept.”

In this respect, Einstein’s intuition was not mistaken. It is possible to give an intelligible account of the world, even a “deterministic” one, in this sense.

Einstein on Locality

Einstein also wanted to avoid “spooky action at a distance.” Admitting that the future is indeterminate, however, is not enough to avoid this conclusion. In Mark Alford’s twin analogy, it is not only pre-determined plans that fail, but also plans that involve randomness. Thus it first appears that the violations of Bell’s inequality absolutely require action at a distance.

If we follow my suggestion here, however, and consequently adopt Hugh Everett’s interpretation of quantum mechanics, then saying that there are multiple future possibilities implies the existence of multiple timelines. And if there are multiple timelines, violations of Bell’s inequality no longer necessarily imply action at a distance.

Why not? Consider the twin experiment with the assumption of indeterminacy and multiple timelines. Suppose that from the very beginning, there are two copies of each twin. The first copy of the first twin has the plan of responding to the three questions with “yes/yes/yes.” Likewise, the first copy of the second twin has the plan of responding to the three questions with, “yes/yes/yes.” In contrast, the second copy of each twin has the plan of responding with “no/no/no.”

Now we have four twins, but the experimenter sees only two. So which ones does he see? There is nothing impossible about the following “rule”: if the twins are asked different questions, the experimenter sees the first copy of one of the twins, and the second copy of the other twin. Meanwhile, if the twins are asked the same question, the experimenter sees either the first copy of each twin, or the second copy of each twin. It is easy to see that if this is the case, the experimenter will see the twins agree when they are asked the same question, and will see them disagree when they are asked different questions (thus agreeing less than 1/3 of the time in that situation).
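The “rule” can be simulated directly. In this sketch the copy-selection logic is my own illustration of the rule, not a claim about the mechanics of the Everett interpretation; copy 0 of each twin plans “yes/yes/yes” and copy 1 plans “no/no/no”:

```python
import random

PLANS = {0: ["yes", "yes", "yes"], 1: ["no", "no", "no"]}  # the two copies

def observed_answers(q1, q2):
    """Apply the 'rule' for which copies the experimenter meets."""
    c = random.randint(0, 1)
    if q1 == q2:
        copies = (c, c)       # same question: matching copies of each twin
    else:
        copies = (c, 1 - c)   # different questions: opposite copies
    return PLANS[copies[0]][q1], PLANS[copies[1]][q2]

same = [observed_answers(q, q) for q in random.choices(range(3), k=1000)]
diff = [observed_answers(q1, q2)
        for q1, q2 in (random.sample(range(3), 2) for _ in range(1000))]

print(all(a == b for a, b in same))         # True: always agree
print(sum(a == b for a, b in diff) / 1000)  # 0.0: well below the 1/3 bound
```

Note that nothing passes between the twins at answer time: the correlation comes entirely from which copies end up in the experimenter’s timeline.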

“Wait,” you will say. “If multiple timelines is just a way of describing a situation with indeterminism, and indeterminism is not enough to avoid action at a distance, how is it possible for multiple timelines to give a way out?”

From the beginning, the apparent “impossibility” of the outcome was a statistical impossibility, not a logical impossibility. Naturally this had to be the case, since if it were a logical impossibility, we could not have coherently described the actual outcomes. Thus we might imagine that David Hume would give this answer:

The twins are responding randomly to each question. By pure chance, they happened to agree the times they were asked the same question, and by pure chance they violated Bell’s inequality when they were asked different questions.

Since this was all a matter of pure chance, of course, if you do the experiment again tomorrow, it will turn out that all of the answers are random and they will agree and disagree 50% of the time on all questions.

And this answer is logically possible, but false. This account does not explain the correlation, but simply ignores it. In a similar way, the reason why indeterministic theories without action at a distance, but described as having a single timeline, cannot explain the results is that in order to explain the correlation, the outcomes of both sides need to be selected together, so to speak. But “without action at a distance” in this context simply means that they are not selected together. This makes the outcome statistically impossible.
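How badly the Humean answer fares can be made concrete. Assuming independent coin-flip answers with no coordination (the single-timeline, no-action-at-a-distance picture the imagined answer describes), agreement runs at about 50% regardless of the questions asked, and perfect agreement across even a modest run of same-question trials would be astronomically lucky:

```python
import random

random.seed(0)
N = 100_000

# Each twin answers every question by an independent coin flip,
# with no coordination between them.
rate = sum(random.choice("yn") == random.choice("yn") for _ in range(N)) / N
print(rate)  # close to 0.5, whether or not the questions matched

# Probability of agreeing by pure luck on all of 100 same-question trials:
print(0.5 ** 100)  # about 7.9e-31
```

Pure chance is logically consistent with any finite data set, but as an explanation it simply ignores the correlation that needs explaining.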

In our multiple timelines version, in contrast, our “rule” above in effect selected the outcomes together. In other words, the guideline we gave regarding which pairs of twins the experimenter would meet had the same effect as action at a distance.

How is all this an explanation? The point is that the particular way that timelines spread out when they come into contact with other things, in the version with multiple timelines, exactly corresponds to action at a distance, in the version without them. An indeterministic theory represented as having a single timeline and no action at a distance could be directly translated into a version with multiple timelines; but if we did that, this particular multiple timeline version would not have the rule that produces the correct outcomes. And on the other hand, if we start with the multiple timeline version that does have the rule, and translate it into a single timeline account, it will have action at a distance.

What does all this say about Einstein’s opinion about locality? Was he right, or was he wrong?

We might simply say that he was wrong, insofar as the actual situation can in fact be described as including action at a distance, even if it is not necessary to describe it in this way, since we can describe it with multiple timelines and without action at a distance. But to the degree that this suggests that Einstein made two mistakes, one about determinism and one about action at a distance, I think this is wrong. There was only one mistake, and it was the one about determinism. The fact is that as soon as you speak of indeterminism at all, it becomes possible to speak of the world as having multiple timelines. So the question at that point is whether this is the “natural” description of the situation, where the natural description more or less means the best way to understand things. If it is, then the possibility of “action at a distance” is not an additional mistake on Einstein’s part, but rather an artifact of describing the situation as though there were only a single timeline.

You might say that there cannot be a better or worse way to understand things if two accounts are objectively equivalent. But this is wrong. Thus for example in general relativity it is probably possible to give an account where the earth has no daily rotation, and the universe is spinning around it every 24 hours. And this account is objectively equivalent to the usual account where the earth is spinning; exactly the same situation is being described, and nothing different is being asserted. And yet this account is weird in many ways, and makes it very hard to understand the universe. The far better and “natural” description is that the earth is spinning. Note, however, that this is an overall result; just looking out the window, you might have thought that saying that the universe is spinning is more natural. (Notice, however, that an even more natural account would be that neither the earth nor the universe is moving; it is only later in the day that you begin to figure out that one of them is moving.)

In a similar way, a single timeline account is originally more natural in the way a Ptolemaic account is more natural when you look out the window. But I would argue that in a similar way, the multiple timeline account, without action at a distance, is ultimately the more natural one. The basic reason for this is that there is no Newtonian Absolute Time. The consequence is that if we speak of “future possibilities,” they cannot be future possibilities for the entire universe at once. They will be fairly localized future possibilities: e.g. there might be more than one possible text for the ending to this blog post, which has not yet been written, and those possibilities are originally possibilities for what happens here in this room, not for the rest of the universe. These future alternatives will naturally result in future possibilities for other parts of the world, but this will happen “slowly,” so to speak (namely if one wishes to speak of the speed of light as slow!) This fits well with the idea of multiple timelines, since there will have to be some process where these multiple timelines come into contact with the rest of the world, much as with our “rule” in the twin experiment. On the other hand, it does not fit so well with a single timeline account of future possibilities, since one is forced (by the terms of the account) to imagine that when a choice among possibilities is made, it is made for the entire universe at once, which appears to require Newton’s Absolute Time.

This suggests that Einstein was basically right about action at a distance, and wrong about determinism. But the intuition that motivated him to embrace both positions, namely that the universe should be intelligible, was sound.

Open Past

Suppose that Aristotle was right, and the future is open. What would things be like in detail?

There are many ways things could go, so for concreteness let’s assume that (in some local area) there are approximately 100 possibilities for the next second, and approximately 100 x 100, or 10,000 possibilities for the next two seconds.

Then the question arises: do some of the two-second outcomes have overlapping paths? In other words, suppose we take the first option in the first second. Are all of the outcomes we can reach different from all of the outcomes we could reach if we took the second option in the first second?

It is at least plausible that some overlapping paths can exist. For example, something might swerve to the left in the first second, and then to the right in the second second, ending up just where it would have been if it had swerved to the right in the first second and to the left in the second. Let’s suppose it turns out this way. Thus we have situation A and time A, and situation B and time B, with a first and second path, both of which lead from situation A at time A to situation B at time B.
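The branching arithmetic and the overlap can be put in a tiny toy model (everything here, the states, the moves, and the merging rule, is invented purely for illustration):

```python
# Toy model of an open future: states branch each second, and
# distinct one-second moves can merge into the same later state.
# Here a "state" is just a position on a line, and a "move" shifts it.
start = 0          # situation A at time A
moves = [-1, +1]   # swerve left or swerve right during one second

# Enumerate every two-second path and the state it ends in.
paths_to = {}
for first in moves:
    for second in moves:
        end = start + first + second
        paths_to.setdefault(end, []).append((first, second))

# The state 0 (situation B) is reachable by two distinct paths:
# left-then-right and right-then-left.
print(paths_to[0])  # [(-1, 1), (1, -1)]
```

Here the end state 0 plays the role of situation B: two genuinely different histories terminate in literally the same state.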

When we get to situation B, what does the world look like? In particular, if someone is in situation B and says, “let’s look at the world and figure out what just happened,” what does it look like? Consider three different accounts:

  1. It looks like situation B except also that it looks like we took the first path
  2. It looks like situation B except also that it looks like we took the second path
  3. It looks like situation B, and we can’t tell which path was taken

The problem is evident. These are three different situations. If things currently look different, the situation is different. So these cannot possibly all be descriptions of situation B. And in particular, only the third is a reasonable description of the situation we should expect. We have set up the situation so that there is no difference in our current situation, whether the first or second path was taken. So of course in situation B it will be impossible to know which path was taken.

But what does that look like, exactly? “We don’t know” is not a description of a situation, but a description of our state of knowledge. What is it about situation B that makes it impossible to tell which path was taken? What happens if you describe the situation as exactly as possible, and then explain why that “exact” description still does not determine which path was taken?

Consider again Schrödinger’s confusion about his cat. The reason why the notion of “blurriness” came up at all was not merely because the wave equation seems to describe something blurred, but also because the actual results of experiments suggest that something blurred took place. Thus for example in double-slit experiments, interference patterns suggest that something is going through both slits at once, while if detectors are added to determine what, if anything, is going through the slits, one seems to find that only one slit is used at a time, and the interference pattern goes away.

This fits the above description of situation A and situation B almost perfectly. In the double slit experiment, there are two paths that could be taken to arrive at the same outcome. But that “same outcome” is not one in which it looks like the first path was taken, nor one in which it looks like the second path was taken, but one in which the outcome’s relationship to the path appears to be confused. And on the other hand, if we can tell which path was taken, as we can when we add detectors, there is no such confusion, because the outcomes no longer overlap; the outcome where the first detector registers is not the same as the outcome where the second detector registers.
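The standard quantum arithmetic makes the same point (a minimal sketch with made-up amplitudes, not a model of any particular experiment): when the two paths are indistinguishable one adds their amplitudes before squaring, and when detectors distinguish them one adds the probabilities, so the interference appears and disappears accordingly.

```python
# Two paths to the same outcome, each carrying a complex amplitude.
# The numbers are invented purely for illustration.
a1 = 0.5 + 0.0j   # amplitude for the first path
a2 = 0.5 + 0.0j   # amplitude for the second path (in phase with the first)

# Overlapping, indistinguishable paths: add amplitudes, then square.
p_overlap = abs(a1 + a2) ** 2             # constructive interference

# Detectors make the paths distinguishable: add the probabilities.
p_detected = abs(a1) ** 2 + abs(a2) ** 2

# With opposite phases, the overlapping paths cancel entirely.
p_cancel = abs(a1 - a2) ** 2              # destructive interference

print(p_overlap, p_detected, p_cancel)    # 1.0 0.5 0.0
```

The overlapping case gives a probability that the merely added probabilities cannot reproduce, which is why the interference pattern is evidence that both paths are in play.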

In this sense, quantum theory is simply the situation where Aristotle was right about the indeterminacy of the future, with the minor addition that it turned out to be possible to get to the same future by more than one route.

Note, however, that this implies the worrisome outcome that I suggested in that post. Just as the future is indeterminate, so is the past. Just as the present has many possible future outcomes, there are many past paths that could have resulted in the present.

Schrödinger’s Cat

Erwin Schrödinger describes the context for his thought experiment with a cat:

The other alternative consists of granting reality only to the momentarily sharp determining parts – or in more general terms to each variable a sort of realization just corresponding to the quantum mechanical statistics of this variable at the relevant moment.

That it is in fact not impossible to express the degree and kind of blurring of all variables in one perfectly clear concept follows at once from the fact that Q.M. as a matter of fact has and uses such an instrument, the so-called wave function or psi-function, also called system vector. Much more is to be said about it further on. That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message. At all events it is an imagined entity that images the blurring of all variables at every moment just as clearly and faithfully as does the classical model its sharp numerical values. Its equation of motion too, the law of its time variation, so long as the system is left undisturbed, lags not one iota, in clarity and determinacy, behind the equations of motion of the classical model. So the latter could be straight-forwardly replaced by the psi-function, so long as the blurring is confined to atomic scale, not open to direct control. In fact the function has provided quite intuitive and convenient ideas, for instance the “cloud of negative electricity” around the nucleus, etc. But serious misgivings arise if one notices that the uncertainty affects macroscopically tangible and visible things, for which the term “blurring” seems simply wrong. The state of a radioactive nucleus is presumably blurred in such a degree and fashion that neither the instant of decay nor the direction, in which the emitted alpha-particle leaves the nucleus, is well-established. Inside the nucleus, blurring doesn’t bother us. The emerging particle is described, if one wants to explain intuitively, as a spherical wave that continuously emanates in all directions and that impinges continuously on a surrounding luminescent screen over its full expanse. 
The screen however does not show a more or less constant uniform glow, but rather lights up at one instant at one spot – or, to honor the truth, it lights up now here, now there, for it is impossible to do the experiment with only a single radioactive atom. If in place of the luminescent screen one uses a spatially extended detector, perhaps a gas that is ionised by the alpha-particles, one finds the ion pairs arranged along rectilinear columns, that project backwards on to the bit of radioactive matter from which the alpha-radiation comes (C.T.R. Wilson’s cloud chamber tracks, made visible by drops of moisture condensed on the ions).

One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.

We see here the two elements described at the end of this earlier post. The psi-function is deterministic, but there seems to be an element of randomness when someone comes to check on the cat.
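These two elements can be sketched numerically in a toy two-state system (the basis labels, the particular rotation, and the measurement rule are all my own illustration, not anything in Schrödinger’s text): the state vector evolves deterministically, while checking on the cat is modeled as a random draw weighted by the squared amplitudes.

```python
import math
import random

# Toy state vector over the basis ["live", "dead"]; starts fully "live".
state = [1.0 + 0j, 0.0 + 0j]

# Deterministic evolution (Everett's "Process 2"): a fixed rotation
# that mixes "live" and "dead" equally after one step.
c = 1 / math.sqrt(2)
unitary = [[c, -c],
           [c,  c]]
state = [unitary[0][0] * state[0] + unitary[0][1] * state[1],
         unitary[1][0] * state[0] + unitary[1][1] * state[1]]

# Checking on the cat ("Process 1"): a random draw weighted by
# squared amplitudes -- this is where the apparent randomness enters.
probs = [abs(amp) ** 2 for amp in state]   # approximately [0.5, 0.5]
outcome = random.choices(["live", "dead"], weights=probs)[0]
print(probs, outcome)
```

Everything up to the last two lines is as deterministic as a classical equation of motion; the randomness appears only at the moment of checking.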

Hugh Everett amusingly describes a similar experiment performed on human beings (but without killing anyone):

Isolated somewhere out in space is a room containing an observer, A, who is about to perform a measurement upon a system S. After performing his measurement he will record the result in his notebook. We assume that he knows the state function of S (perhaps as a result of previous measurement), and that it is not an eigenstate of the measurement he is about to perform. A, being an orthodox quantum theorist, then believes that the outcome of his measurement is undetermined and that the process is correctly described by Process 1 [namely a random determination caused by measurement].

In the meantime, however, there is another observer, B, outside the room, who is in possession of the state function of the entire room, including S, the measuring apparatus, and A, just prior to the measurement. B is only interested in what will be found in the notebook one week hence, so he computes the state function of the room for one week in the future according to Process 2 [namely the deterministic wave function]. One week passes, and we find B still in possession of the state function of the room, which this equally orthodox quantum theorist believes to be a complete description of the room and its contents. If B’s state function calculation tells beforehand exactly what is going to be in the notebook, then A is incorrect in his belief about the indeterminacy of the outcome of his measurement. We therefore assume that B’s state function contains non-zero amplitudes over several of the notebook entries.

At this point, B opens the door to the room and looks at the notebook (performs his observation.) Having observed the notebook entry, he turns to A and informs him in a patronizing manner that since his (B’s) wave function just prior to his entry into the room, which he knows to have been a complete description of the room and its contents, had non-zero amplitude over other than the present result of the measurement, the result must have been decided only when B entered the room, so that A, his notebook entry, and his memory about what occurred one week ago had no independent objective existence until the intervention by B. In short, B implies that A owes his present objective existence to B’s generous nature which compelled him to intervene on his behalf. However, to B’s consternation, A does not react with anything like the respect and gratitude he should exhibit towards B, and at the end of a somewhat heated reply, in which A conveys in a colorful manner his opinion of B and his beliefs, he rudely punctures B’s ego by observing that if B’s view is correct, then he has no reason to feel complacent, since the whole present situation may have no objective existence, but may depend upon the future actions of yet another observer.

Schrödinger’s problem was that the wave equation seems to describe something “blurred,” but if we assume that is because something blurred exists, it seems to contradict our experience which is of something quite distinct: a live cat or a dead cat, but not something in between.

Everett proposes that his interpretation of quantum mechanics is able to resolve this difficulty. After presenting other interpretations, he proposes his own (“Alternative 5”):

Alternative 5: To assume the universal validity of the quantum description, by the complete abandonment of Process 1 [again, this was the apparently random measurement process]. The general validity of pure wave mechanics, without any statistical assertions, is assumed for all physical systems, including observers and measuring apparata. Observation processes are to be described completely by the state function of the composite system which includes the observer and his object-system, and which at all times obeys the wave equation (Process 2).

It is evident that Alternative 5 is a theory of many advantages. It has the virtue of logical simplicity and it is complete in the sense that it is applicable to the entire universe. All processes are considered equally (there are no “measurement processes” which play any preferred role), and the principle of psycho-physical parallelism is fully maintained. Since the universal validity of the state function is asserted, one can regard the state functions themselves as the fundamental entities, and one can even consider the state function of the whole universe. In this sense this theory can be called the theory of the “universal wave function,” since all of physics is presumed to follow from this function alone. There remains, however, the question whether or not such a theory can be put into correspondence with our experience.

This present thesis is devoted to showing that this concept of a universal wave mechanics, together with the necessary correlation machinery for its interpretation, forms a logically self consistent description of a universe in which several observers are at work.

Ultimately, Everett’s response to Schrödinger is that the cat is indeed “blurred,” and that this never goes away. When someone checks on the cat, the person checking is also “blurred,” becoming a composite of someone seeing a dead cat and someone seeing a live cat. However, these are in effect two entirely separate worlds, one in which someone sees a live cat, and one in which someone sees a dead cat.

Everett mentions “the necessary correlation machinery for its interpretation,” because a mathematical theory of physics as such does not necessarily say that anyone should see anything in particular. So for example when Newton says that there is a gravitational attraction between masses inversely proportional to the square of their distance, what exactly should we expect to see, given that? Obviously there is no way to answer this without adding something, and ultimately we need to add something non-mathematical, namely something about the way our experiences work.

I will not pretend to judge whether or not Everett does a good job defending his position. There is an interesting point here, whether or not his defense is ultimately a good one. “Orthodox” quantum mechanics, as Everett calls it, only gives statistical predictions about the future, and as long as nothing is added to the theory, it implies that deterministic predictions are impossible. It follows that if the position in our last post, on an open future, was correct, it must be possible to explain the results of quantum mechanics in terms of many worlds or multiple timelines. And I do not merely mean that we can give the same predictions with a one-world account or with a many world account. I mean that there must be a many-world account such that its contents are metaphysically identical to the contents of a one-world account with an open future.

This would nonetheless leave undetermined the question of what sort of account would be most useful to us in practice.

Open Future

Let’s return for a moment to the question at the end of this post. I asked, “What happens if the future is indeterminate? Would not the eternalist position necessarily differ from the presentist one, in that case?”

Why necessarily different? The argument in that post was that eternalism and presentism are different descriptions of the same thing, and that we see the sameness by noting the sameness of relations between the elements of the description. But if the future is open, as Aristotle supposed, it is hard to see how we can maintain this. Aristotle says that the present is open to either having the sea battle tomorrow or not having it. With an eternalist view, the sea battle is “already there” or it is not. So in Aristotle’s view, the present has an open relationship to both possibilities. But the eternalist view seems to be truly open only to the possibility that will actually happen. We no longer have the same set of relationships.

Notice the problem. When I attempted to equate eternalism and presentism, I implicitly assumed that determinism is true. There were only three states of the universe, beginning, middle, and end. If determinism is false, things are different. There might be beginning, middle, and two potential ends. Perhaps there is a sea battle in one of the potential ends, and no sea battle in the other.

This suggests a solution to our conundrum, however. Even the presentist description in that post was inconsistent with an open future. If there is only one possible end, the future is not open, even if we insist that the unique possible end “currently doesn’t exist.” The problem then was not eternalism as such, but the fact that we started out with a determinist description of the universe. This strongly suggests that if my argument about eternalism and presentism was correct, we should be able to formulate eternalist and presentist descriptions of an open future which will be equivalent. But both will need to be different from the fixed “beginning-middle-end” described in that post.

We can simply take Aristotle’s account as the account of presentism with an open future. How can we give an eternalist account of the same thing? The basic requirement will be that the relationship between the present and the future needs to be the same in both accounts. Now in Aristotle’s account, the present has the same relationship to two different possibilities: both of them are equally possible. So to get a corresponding eternalist account, we need the present to be equally related to two futures that correspond to the two possibilities in the presentist account. I do not say “two possible futures,” but “two futures,” precisely because the account is eternalist.

The careful reader will already understand the account from the above, but let us be more explicit. The eternalist account that corresponds to the presentist account with an open future has multiple timelines, all of which “exist”, in the eternalist sense. The reader will no doubt be familiar with the idea of multiple timelines, at least from time travel fiction. In a similar way, the eternalist reworking of Aristotle’s position is that there is a timeline where the sea battle takes place, and another timeline where the sea battle does not take place. In this view, both of them “actually” happen. But even in this view, an observer in the middle location will have to say, “I do not, and cannot, know whether the sea battle will take place or not,” just as in Aristotle’s view. For the observer cannot traverse both timelines at once. From his point of view, he will take only one, but since his relationship to the two possibilities (or actualities) is the same, it is indeterminate which one it will be.
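The structure of this eternalist account can be displayed in a small sketch (the timeline names and stage labels are invented for illustration): both timelines exist, and the middle stage bears exactly the same relation to each of the two futures.

```python
# Toy eternalist picture of the sea battle: both timelines "exist".
timelines = {
    "A": ["beginning", "middle", "sea battle"],
    "B": ["beginning", "middle", "no sea battle"],
}

# What follows "middle" in each timeline: the relation of the present
# stage to a future is the same in both, so an observer at "middle"
# cannot single out which continuation is his.
futures_of_middle = {name: stages[stages.index("middle") + 1]
                     for name, stages in timelines.items()}
print(futures_of_middle)  # {'A': 'sea battle', 'B': 'no sea battle'}
```

Since “sea battle” is an entry of timeline A and “no sea battle” an entry of timeline B, the two claims are about different timelines and do not even appear to contradict one another.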

Even if one cannot prove this account of the equivalence wrong, the reader may worry. Time travel fiction frequently seems incoherent, and this suggests that any view with multiple timelines may also be incoherent. But this potential incoherence supports the equivalence rather than undermining it. For as we noted in the post on Aristotle, there is a definite appearance of incoherence in his position; it is not even clear how his view is logically possible. So it would not be surprising, but quite natural, if views intended to be equivalent to his position are also not clearly coherent. Nonetheless, the multiple timelines description does have one logical advantage over Aristotle’s position: “the sea battle will take place in timeline A” does not even appear to contradict “the sea battle will not take place in timeline B.”