Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and argue for a position that I have argued many times in the past, namely that belief is voluntary. The example is merely particularly good for proving the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail.)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I might or might not come together with one of those from A through E.

Let’s suppose there are three people, a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics and instead cares a lot about truth. Then we posit the following table of utilities:

                    Right Winger    Left Winger    Independent
(A)                      10              10            100
(B)                     -10             -10           -100
(C)                      10              10            100
(D)                     -10             -10           -100
(E)                       5               5             50
(F)                     100             100              0
(G)                    -100            -100              0
(H)                    -100            -100              0
(I)                     100             100              0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty, and 50% that he was innocent. Using these assumptions, we can calculate the following ultimate utilities:

                    Right Winger    Left Winger    Independent
Claim Guilt             -100            100              0
Claim Innocence          100           -100              0
Confess Ignorance          5              5             50

(I won’t go through this calculation in detail; it should be evident that given my simple assumptions of the probability and values, there will be no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance, or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
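
For anyone who wants to check the arithmetic, here is a minimal sketch in Python that reproduces the second table from the first, assuming the utilities and the 50% probability given above (the dictionary keys and function names are mine, chosen purely for illustration):

```python
# A sketch of the expected-utility arithmetic behind the second table,
# using the utilities from the first table and a 50% probability of guilt.

P_GUILTY = 0.5

# Utilities over outcomes (A)-(I) for each agent, as in the first table.
utils = {
    "right":       dict(A=10, B=-10, C=10, D=-10, E=5, F=100, G=-100, H=-100, I=100),
    "left":        dict(A=10, B=-10, C=10, D=-10, E=5, F=100, G=-100, H=-100, I=100),
    "independent": dict(A=100, B=-100, C=100, D=-100, E=50, F=0, G=0, H=0, I=0),
}

# Claiming guilt says something bad about an ally (H) for the right winger,
# but something bad about an enemy (F) for the left winger, and so on.
political_outcome = {
    "right":       {"guilt": "H", "innocence": "I", "ignorance": None},
    "left":        {"guilt": "F", "innocence": "G", "ignorance": None},
    "independent": {"guilt": None, "innocence": None, "ignorance": None},
}

def expected_utility(agent, claim):
    u = utils[agent]
    if claim == "guilt":        # right with probability 0.5 (A), wrong with 0.5 (B)
        value = P_GUILTY * u["A"] + (1 - P_GUILTY) * u["B"]
    elif claim == "innocence":  # right with probability 0.5 (C), wrong with 0.5 (D)
        value = (1 - P_GUILTY) * u["C"] + P_GUILTY * u["D"]
    else:                       # admitting ignorance is simply true (E)
        value = u["E"]
    extra = political_outcome[agent][claim]
    if extra is not None:
        value += u[extra]
    return value

for agent in utils:
    print(agent, {claim: expected_utility(agent, claim)
                  for claim in ("guilt", "innocence", "ignorance")})
# Matches the table above: right winger -100 / 100 / 5,
# left winger 100 / -100 / 5, independent 0 / 0 / 50.
```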

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and the statement is simply an external thing.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while having some qualms about it. Second, it is not completely immune from voluntary influences. In practice in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. And in a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. But if falsity is the reason you cannot accept it, how is that not because you regard falsity as really bad? In fact, lack of attractiveness is extremely relevant here. If people can believe in Xenu, they would find it equally possible to believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem like Kavka’s Toxin here. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true enough to say so, then immediately doing something else, is not choosing to believe it, but rather it is choosing to tell a lie. So just as one cannot intend to drink the toxin without expecting to actually drink it, so one cannot choose to believe something without expecting to continue to believe it for the foreseeable future. This is why one would not wish to accept such a statement about the moon, not only in order to prove something (especially since it would prove nothing; no one would admit that you had succeeded in believing it), but even if someone were to offer a very large incentive, say a million dollars if you managed to believe it. This would amount to offering to pay someone to give up their concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

Perfectly Random

Suppose you have a string of random binary digits such as the following:

00111100010101001100011011001100110110010010100111

This string is 50 digits long, and was the result of a single attempt using the linked generator.

However, something seems distinctly non-random about it: there are exactly 25 zeros and exactly 25 ones. Naturally, this will not always happen, but most of the time the proportion of zeros will be fairly close to half. And evidently this is necessary, since if the proportion was usually much different from half, then the selection could not have been random in the first place.
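
Here is a minimal sketch of this point, with Python’s random module standing in for the linked generator: it draws a 50-digit string and computes the exact binomial probability that the number of zeros lands within five of half.

```python
import random
from math import comb

# Draw a 50-digit random binary string (standing in for the linked generator)
# and count the zeros and ones.
bits = "".join(random.choice("01") for _ in range(50))
print(bits, bits.count("0"), bits.count("1"))

# Exact binomial probability that a 50-digit fair string has between 20 and 30
# zeros, i.e. a proportion within ten percentage points of one half.
p_near_half = sum(comb(50, k) for k in range(20, 31)) / 2**50
print(round(p_near_half, 3))  # about 0.88 -- "most of the time" close to half
```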

There are other things about this string that are definitely not random. It contains only zeros and ones, and no other digits, much less items like letters from the alphabet, or items like ‘%’ and ‘$’.

Why do we have these apparently non-random characteristics? Both sorts of characteristics, the approximate and typical proportion and the more rigid constraints, are necessary consequences of the way we obtained or defined the string.

It is easy to see that such characteristics are inevitable. Suppose someone wants to choose something random without any non-random characteristics. Let’s suppose they want to avoid the first sort of characteristic, which is perhaps the “easier” task. They can certainly make the proportion of zeros approximately 75% or anything else that they please. But this will still be a non-random characteristic.

They try again. Suppose they succeed in preventing the series of digits from converging to any specific proportion of zeros. There is essentially only one way to do this: much as in our discussion of the mathematical laws of nature, they will have to go back and forth between longer and longer runs of zeros and ones. But this is an extremely non-random characteristic. So they may have succeeded in avoiding one particular type of non-randomness, but only at the cost of adding something else very non-random.
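
Here is a rough sketch of that construction, with run lengths chosen purely for illustration: alternating runs of zeros and ones, each run ten times longer than the last, keep the running proportion of zeros swinging back and forth without ever converging.

```python
# Alternating runs of 0s and 1s, each run ten times longer than the last.
# The running proportion of zeros keeps swinging and never converges,
# but the sequence is anything but patternless.
def proportions_at_run_ends(n_runs=10, growth=10):
    zeros = total = 0
    length = 1
    props = []
    for i in range(n_runs):
        if i % 2 == 0:          # even-numbered runs are runs of 0s
            zeros += length
        total += length
        props.append(zeros / total)
        length *= growth        # the next run dwarfs everything before it
    return props

print([round(p, 2) for p in proportions_at_run_ends()])
# [1.0, 0.09, 0.91, 0.09, 0.91, 0.09, 0.91, 0.09, 0.91, 0.09]
```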

Again, consider the second kind of characteristic. Here things are even clearer: the only way to avoid the second kind of characteristic is not to attempt any task in the first place. The only way to win is not to play. Once we have said “your task is to do such and such,” we have already specified some non-random characteristics of the second kind; to avoid such characteristics is to avoid the task completely.

“Completely random,” in fact, is an incoherent idea. No such thing can exist anywhere, in the same way that “formless matter” cannot actually exist, but all matter is formed in one way or another.

The same thing applies to David Hume’s supposed problem of induction. I ended that post with the remark that for his argument to work, he must be “absolutely certain that the future will resemble the past in no way.” But this of course is impossible in the first place; the past and the future are both defined as periods of time, and so there is some resemblance in their very definition, in the same way that any material thing must have some form in its definition, and any “random” thing must have something non-random in its definition.

 

Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting:

I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

Robin Hanson disagreed, responding with this post:

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

  1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
  2. Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
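
As a quick check of the arithmetic in Hanson’s example (assuming nothing beyond compound interest):

```python
from math import log10

# A 2% annual return compounded over 12,000 years, then cut by the
# 1/1000 chance that the far-future recipients exist and receive it.
growth = 1.02 ** 12000
print(round(log10(growth), 1))          # about 103.2, i.e. roughly 10^103
print(round(log10(growth / 1000), 1))   # about 100.2, i.e. roughly a googol
```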

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view. Hanson is right, and Yudkowsky is wrong, for a still more idealistic reason than Yudkowsky’s reasons. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now etc., such that your mutually exclusive options include the possibility of saving 10 lives x years from now for all x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem does not arise from this initial assignment, but it arises when we consider what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now”, “you will choose 10 years from now,” “you will choose 100 years from now,” are all assigned probabilities, they cannot all be assigned equal probabilities, but you must be more likely to choose the options less distant in time, in general and overall. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
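
Here is a small sketch of the point: whatever probability distribution you place over “the year in which you will choose,” its cumulative probability must cross any threshold you like at some finite year. The two example distributions below are chosen purely for illustration.

```python
# Whatever probability distribution you put over "the year in which you act,"
# its cumulative probability must pass any threshold at some finite year.
def crossing_point(pmf, threshold=0.9999, limit=10**6):
    total = 0.0
    for n in range(1, limit):
        total += pmf(n)
        if total >= threshold:
            return n
    return None

geometric  = lambda n: 0.5 ** n                           # P(year n) = 1/2^n
heavy_tail = lambda n: (6 / 3.141592653589793**2) / n**2  # roughly 1/n^2, normalized

print(crossing_point(geometric))    # 14: 99.99% of the mass sits at or below year 14
print(crossing_point(heavy_tail))   # several thousand, but still finite
```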

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case, we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that the hypotheses do not change their complexity, but time passes, making things distant in time become things nearer in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.
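
Here is a minimal sketch of the dynamic inconsistency, using a ten-year “flat” window and made-up numbers in place of Yudkowsky’s 10^100 years: a schedule that weights everything inside the window equally and everything outside it less will reverse its own rankings as time passes, while a constant exponential discount will not.

```python
# A "flat, then smaller" schedule: weight 1.0 on anything within the next
# 10 years, weight 0.5 on anything later (numbers chosen for illustration).
def flat_then_small(years_from_now, window=10, late_weight=0.5):
    return 1.0 if years_from_now <= window else late_weight

def value(lives, years_from_now, discount=flat_then_small):
    return discount(years_from_now) * lives

# Today: saving 10 lives in year 10 beats saving 15 lives in year 11.
print(value(10, 10), value(15, 11))   # 10.0 vs 7.5

# One year later the same two options sit in years 9 and 10, both inside
# the window, and the ranking flips: that is the dynamic inconsistency.
print(value(10, 9), value(15, 10))    # 10.0 vs 15.0

# A constant exponential discount never flips a ranking as time passes.
d = 0.98
print(d**10 * 10 > d**11 * 15, d**9 * 10 > d**10 * 15)   # False False (same ranking)
```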

 

Quantum Mechanics and Libertarian Free Will

In a passage quoted in the last post, Jerry Coyne claims that quantum indeterminacy is irrelevant to free will: “Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own ‘will.'”

Coyne seems to be thinking that since quantum indeterminism has fixed probabilities in any specific situation, the result for human behavior would necessarily be like our second imaginary situation in the last post. There might be a 20% chance that you would randomly do X, and an 80% chance that you would randomly do Y, and nothing can affect these probabilities. Consequently you cannot be morally responsible for doing X or for doing Y, nor should you be praised or blamed for them.

Wait, you might say. Coyne explicitly favors praise and blame in general. But why? If you would not praise or blame someone doing something randomly, why should you praise or blame someone doing something in a deterministic manner? As explained in the last post, the question is whether reasons have any influence on your behavior. Coyne is assuming that if your behavior is deterministic, it can still be influenced by reasons, but if it is indeterministic, it cannot be. But there is no reason for this to be the case. Your behavior can be influenced by reasons whether it is deterministic or not.

St. Thomas argues for libertarian free will on the grounds that there can be reasons for opposite actions:

Man does not choose of necessity. And this is because that which is possible not to be, is not of necessity. Now the reason why it is possible not to choose, or to choose, may be gathered from a twofold power in man. For man can will and not will, act and not act; again, he can will this or that, and do this or that. The reason of this is seated in the very power of the reason. For the will can tend to whatever the reason can apprehend as good. Now the reason can apprehend as good, not only this, viz. “to will” or “to act,” but also this, viz. “not to will” or “not to act.” Again, in all particular goods, the reason can consider an aspect of some good, and the lack of some good, which has the aspect of evil: and in this respect, it can apprehend any single one of such goods as to be chosen or to be avoided. The perfect good alone, which is Happiness, cannot be apprehended by the reason as an evil, or as lacking in any way. Consequently man wills Happiness of necessity, nor can he will not to be happy, or to be unhappy. Now since choice is not of the end, but of the means, as stated above (Article 3); it is not of the perfect good, which is Happiness, but of other particular goods. Therefore man chooses not of necessity, but freely.

Someone might object that if both are possible, there cannot be a reason why someone chooses one rather than the other. This is basically the claim in the third objection:

Further, if two things are absolutely equal, man is not moved to one more than to the other; thus if a hungry man, as Plato says (Cf. De Coelo ii, 13), be confronted on either side with two portions of food equally appetizing and at an equal distance, he is not moved towards one more than to the other; and he finds the reason of this in the immobility of the earth in the middle of the world. Now, if that which is equally (eligible) with something else cannot be chosen, much less can that be chosen which appears as less (eligible). Therefore if two or more things are available, of which one appears to be more (eligible), it is impossible to choose any of the others. Therefore that which appears to hold the first place is chosen of necessity. But every act of choosing is in regard to something that seems in some way better. Therefore every choice is made necessarily.

St. Thomas responds to this that it is a question of what the person considers:

If two things be proposed as equal under one aspect, nothing hinders us from considering in one of them some particular point of superiority, so that the will has a bent towards that one rather than towards the other.

Thus for example, someone might decide to become a doctor because it pays well, or they might decide to become a truck driver because they enjoy driving. Whether they consider “what would I enjoy?” or “what would pay well?” will determine which choice they make.

The reader might notice a flaw, or at least a loose thread, in St. Thomas’s argument. In our example, what determines whether you think about what pays well or what you would enjoy? This could be yet another choice. I could create a spreadsheet of possible jobs and think, “What should I put on it? Should I put the pay, or should I put what I enjoy?” But obviously the question about necessity will simply be pushed back, in this case. Is this choice itself determinate or indeterminate? And what determines what choice I make in this case? Here we are discussing an actual temporal series of thoughts, and it absolutely must have a first member, since human life has a beginning in time. Consequently there will have to be a point where, if there is the possibility of “doing A for reason B” and “doing C for reason D”, it cannot be any additional consideration which determines which one is done.

Now it is possible at this point that St. Thomas is mistaken. It might be that the hypothesis that both were “really” possible is mistaken, and something does determine one rather than the other with “necessity.” It is also possible that he is not mistaken. Either way, human reasons do not influence the determination, because reason B and/or reason D are the first reasons considered, by hypothesis (if they were not, we would simply push back the question.)

At this point someone might consider this lack of the influence of reasons to imply that people are not morally responsible for doing A or for doing C. The problem with this is that if you do something without a reason (and without potentially being influenced by a reason), then indeed you would not be morally responsible. But the person doing A or C is not uninfluenced by reasons. They are influenced by reason B, or by reason D. Consequently, they are responsible for their specific action, because they do it for a reason, despite the fact that there is some other general issue that they are not responsible for.

What influence could quantum indeterminacy have here? It might be responsible for deciding between “doing A for reason B” and “doing C for reason D.” And as Coyne says, this would be “simple randomness,” with fixed probabilities in any particular situation. But none of this would prevent this from being a situation that would include libertarian free will, since libertarian free will is precisely nothing but the situation where there are two real possibilities: you might do one thing for one reason, or another thing for another reason. And that is what we would have here.

Does quantum mechanics have this influence in fact, or is this just a theoretical possibility? It very likely does. Some argue that it probably doesn’t, on the grounds that quantum mechanics does not typically seem to imply much indeterminacy for macroscopic objects. The problem with this argument is that the only way of knowing that quantum indeterminacy rarely leads to large scale differences is by using humanly designed items like clocks or computers. And these are specifically designed to be determinate: whenever our artifact is not sufficiently determinate and predictable, we change the design until we get something predictable. If we look at something in nature uninfluenced by human design, like a waterfall, its details are highly unpredictable to us. Which drop of water will be the most distant from this particular point one hour from now? There is no way to know.

But how much real indeterminacy is in the waterfall, or in the human brain, due to quantum indeterminacy? Most likely nobody knows, but it is basically a question of timescales. Do you get a great deal of indeterminacy after one hour, or after several days? One way or another, with the passage of enough time, you will get a degree of real indeterminacy as high as you like. The same thing will be equally true of human behavior. We often notice, in fact, that at short timescales there is less indeterminacy than we subjectively feel. For example, if someone hesitates to accept an invitation, in many situations, others will know that the person is very likely to decline. But the person feels very uncertain, as though there were a 50/50 chance of accepting or declining. The real probabilities might be 90/10 or even more slanted. Nonetheless, the question is one of timescales, not of whether there is any indeterminacy. There is; this is basically settled; it will apply to human behavior; and there is little reason to doubt that it applies at relatively short timescales compared to the timescales at which it applies to clocks, computers, and other things designed with predictability in mind.

In this sense, quantum indeterminacy strongly suggests that St. Thomas is basically correct about libertarian free will.

On the other hand, Coyne is also right about something here. Such “randomness” does not remove moral responsibility, or the fact that people do things for reasons, or the fittingness of praise and blame as responses to actions done for reasons; but Coyne correctly notices that it does not add to anyone’s responsibility either. If there is no human reason for the fact that a person did A for reason B rather than C for reason D, this makes their actions less intelligible, and thus less subject to responsibility. In other words, the “libertarian” part of libertarian free will does not make the will more truly a will, but less truly. In this respect, Coyne is right. This however is unrelated to quantum mechanics or to any particular scientific account. The thoughtful person can understand this simply from general considerations about what it means to act for a reason.

Causality and Moral Responsibility

Consider two imaginary situations:

(1) In the first situation, people are such that when someone sees a red light, they immediately go off and kill someone. Nothing can be done to prevent this, and no intention or desire to do otherwise makes any difference.

In this situation, killing someone after you have seen a red light is not blamed, since it cannot be avoided, but we blame people who show red lights to others. Such people are arrested and convicted as murderers.

(2) In the second situation, people are such that when someone sees a red light, there is a 5% chance they will go off and immediately kill someone, and a 95% chance they will behave normally. Nothing can change this probability: it does not matter whether the person is wicked or virtuous or what their previous attitude to killing was.

In this situation, again, we do not blame people who end up killing someone, but we call them unlucky. We do however blame people who show others red lights, and they are arrested and convicted of second degree murder, or in some cases manslaughter.

Some people would conclude from this that moral responsibility is incoherent: whether the world is deterministic or not, moral responsibility is impossible. Jerry Coyne defends this position in numerous places, as for example here:

We’ve taken a break from the many discussions on this site about free will, but, cognizant of the risks, I want to bring it up again. I think nearly all of us agree that there’s no dualism involved in our decisions: they’re determined completely by the laws of physics. Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own “will.”

Coyne would perhaps say that “free will” embodies a contradiction much in the way that “square circle” does. “Will” implies a cause, and thus something deterministic. “Free” implies indeterminism, and thus no cause.

In many places Coyne asserts that this implies that moral responsibility does not exist, as for example here:

This four-minute video on free will and responsibility, narrated by polymath Raoul Martinez, was posted by the Royal Society for the Encouragement of the Arts, Manufactures, and Commerce (RSA). Martinez’s point is one I’ve made here many times, and will surely get pushback from: determinism rules human behavior, and our “choices” are all predetermined by our genes and environment. To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

I think that Coyne is very wrong about the meaning of free will, somewhat wrong about responsibility, and likely wrong about the consequences of his views for society (e.g. he believes that his view will lead to more humane treatment of prisoners. There is no particular reason to expect this.)

The imaginary situations described in the initial paragraphs of this post do not imply that moral responsibility is impossible, but they do tell us something. In particular, they tell us that responsibility is not directly determined by determinism or its lack. And although Coyne says that “moral responsibility” implies indeterminism, surely even Coyne would not advocate blaming or punishing the person who had the 5% chance of going and killing someone. And the reason is clear: it would not “reinforce good behavior” or be “salubrious for society.” By the terms set out, it would make no difference, so blaming or punishing would be pointless.

Coyne is right that determinism does not imply that punishment is pointless. And he also recognizes that indeterminism does not of itself imply that anyone is responsible for anything. But he fails here to put two and two together: just as determinism implies neither that punishment is pointless nor that it has a point, so indeterminism likewise implies neither of the two. The conclusion he should draw is not that moral responsibility is meaningless, but that it is independent of both determinism and indeterminism; that is, that both deterministic compatibilism and libertarian free will allow for moral responsibility.

So what is required for praise and blame to have a point? Elsewhere we discussed C.S. Lewis’s claim that something can have a reason or a cause, but not both. In a sense, the initial dilemma in this post can be understood as a similar argument. Either our behavior has deterministic causes, or it has indeterministic causes; therefore it does not have reasons; therefore moral responsibility does not exist.

On the other hand, if people do have reasons for their behavior, there can be good reasons for blaming people who do bad things, and for punishing them. Namely, since those people are themselves acting for reasons, they will be less likely in the future to do those things, and likewise other people, fearing punishment and blame, will be less likely to do them.

As I said against Lewis, reasons do not exclude causes, but require them. Consequently what is necessary for moral responsibility are causes that are consistent with having reasons; one can easily imagine causes that are not consistent with having reasons, as in the imaginary situations described, and such causes would indeed exclude responsibility.

Schrödinger’s Cat

Erwin Schrödinger describes the context for his thought experiment with a cat:

The other alternative consists of granting reality only to the momentarily sharp determining parts – or in more general terms to each variable a sort of realization just corresponding to the quantum mechanical statistics of this variable at the relevant moment.

That it is in fact not impossible to express the degree and kind of blurring of all variables in one perfectly clear concept follows at once from the fact that Q.M. as a matter of fact has and uses such an instrument, the so-called wave function or psi-function, also called system vector. Much more is to be said about it further on. That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message. At all events it is an imagined entity that images the blurring of all variables at every moment just as clearly and faithfully as does the classical model its sharp numerical values. Its equation of motion too, the law of its time variation, so long as the system is left undisturbed, lags not one iota, in clarity and determinacy, behind the equations of motion of the classical model. So the latter could be straight-forwardly replaced by the psi-function, so long as the blurring is confined to atomic scale, not open to direct control. In fact the function has provided quite intuitive and convenient ideas, for instance the “cloud of negative electricity” around the nucleus, etc. But serious misgivings arise if one notices that the uncertainty affects macroscopically tangible and visible things, for which the term “blurring” seems simply wrong. The state of a radioactive nucleus is presumably blurred in such a degree and fashion that neither the instant of decay nor the direction, in which the emitted alpha-particle leaves the nucleus, is well-established. Inside the nucleus, blurring doesn’t bother us. The emerging particle is described, if one wants to explain intuitively, as a spherical wave that continuously emanates in all directions and that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform glow, but rather lights up at one instant at one spot – or, to honor the truth, it lights up now here, now there, for it is impossible to do the experiment with only a single radioactive atom. If in place of the luminescent screen one uses a spatially extended detector, perhaps a gas that is ionised by the alpha-particles, one finds the ion pairs arranged along rectilinear columns, that project backwards on to the bit of radioactive matter from which the alpha-radiation comes (C.T.R. Wilson’s cloud chamber tracks, made visible by drops of moisture condensed on the ions).

One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.

It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality. In itself it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks.

We see here the two elements described at the end of this earlier post. The psi-function is deterministic, but there seems to be an element of randomness when someone comes to check on the cat.

Hugh Everett amusingly describes a similar experiment performed on human beings (but without killing anyone):

Isolated somewhere out in space is a room containing an observer, A, who is about to perform a measurement upon a system S. After performing his measurement he will record the result in his notebook. We assume that he knows the state function of S (perhaps as a result of previous measurement), and that it is not an eigenstate of the measurement he is about to perform. A, being an orthodox quantum theorist, then believes that the outcome of his measurement is undetermined and that the process is correctly described by Process 1 [namely a random determination caused by measurement].

In the meantime, however, there is another observer, B, outside the room, who is in possession of the state function of the entire room, including S, the measuring apparatus, and A, just prior to the measurement. B is only interested in what will be found in the notebook one week hence, so he computes the state function of the room for one week in the future according to Process 2 [namely the deterministic  wave function]. One week passes, and we find B still in possession of the state function of the room, which this equally orthodox quantum theorist believes to be a complete description of the room and its contents. If B’s state function calculation tells beforehand exactly what is going to be in the notebook, then A is incorrect in his belief about the indeterminacy of the outcome of his measurement. We therefore assume that B’s state function contains non-zero amplitudes over several of the notebook entries.

At this point, B opens the door to the room and looks at the notebook (performs his observation.) Having observed the notebook entry, he turns to A and informs him in a patronizing manner that since his (B’s) wave function just prior to his entry into the room, which he knows to have been a complete description of the room and its contents, had non-zero amplitude over other than the present result of the measurement, the result must have been decided only when B entered the room, so that A, his notebook entry, and his memory about what occurred one week ago had no independent objective existence until the intervention by B. In short, B implies that A owes his present objective existence to B’s generous nature which compelled him to intervene on his behalf. However, to B’s consternation, A does not react with anything like the respect and gratitude he should exhibit towards B, and at the end of a somewhat heated reply, in which A conveys in a colorful manner his opinion of B and his beliefs, he rudely punctures B’s ego by observing that if B’s view is correct, then he has no reason to feel complacent, since the whole present situation may have no objective existence, but may depend upon the future actions of yet another observer.

Schrödinger’s problem was that the wave equation seems to describe something “blurred,” but if we assume that is because something blurred exists, it seems to contradict our experience which is of something quite distinct: a live cat or a dead cat, but not something in between.

Everett proposes that his interpretation of quantum mechanics is able to resolve this difficulty. After presenting other interpretations, he proposes his own (“Alternative 5”):

Alternative 5: To assume the universal validity of the quantum description, by the complete abandonment of Process 1 [again, this was the apparently random measurement process]. The general validity of pure wave mechanics, without any statistical assertions, is assumed for all physical systems, including observers and measuring apparata. Observation processes are to be described completely by the state function of the composite system which includes the observer and his object-system, and which at all times obeys the wave equation (Process 2).

It is evident that Alternative 5 is a theory of many advantages. It has the virtue of logical simplicity and it is complete in the sense that it is applicable to the entire universe. All processes are considered equally (there are no “measurement processes” which play any preferred role), and the principle of psycho-physical parallelism is fully maintained. Since the universal validity of the state function is asserted, one can regard the state functions themselves as the fundamental entities, and one can even consider the state function of the whole universe. In this sense this theory can be called the theory of the “universal wave function,” since all of physics is presumed to follow from this function alone. There remains, however, the question whether or not such a theory can be put into correspondence with our experience.

This present thesis is devoted to showing that this concept of a universal wave mechanics, together with the necessary correlation machinery for its interpretation, forms a logically self consistent description of a universe in which several observers are at work.

Ultimately, Everett’s response to Schrödinger is that the cat is indeed “blurred,” and that this never goes away. When someone checks on the cat, the person checking is also “blurred,” becoming a composite of someone seeing a dead cat and someone seeing a live cat. However, these are in effect two entirely separate worlds, one in which someone sees a live cat, and one in which someone sees a dead cat.

Everett mentions “the necessary correlation machinery for its interpretation,” because a mathematical theory of physics as such does not necessarily say that anyone should see anything in particular. So for example when Newton says that there is a gravitational attraction between masses inversely proportional to the square of their distance, what exactly should we expect to see, given that? Obviously there is no way to answer this without adding something, and ultimately we need to add something non-mathematical, namely something about the way our experiences work.

I will not pretend to judge whether or not Everett does a good job defending his position. There is an interesting point here, whether or not his defense is ultimately a good one. “Orthodox” quantum mechanics, as Everett calls it, only gives statistical predictions about the future, and as long as nothing is added to the theory, it implies that deterministic predictions are impossible. It follows that if the position in our last post, on an open future, was correct, it must be possible to explain the results of quantum mechanics in terms of many worlds or multiple timelines. And I do not merely mean that we can give the same predictions with a one-world account or with a many world account. I mean that there must be a many-world account such that its contents are metaphysically identical to the contents of a one-world account with an open future.

This would nonetheless leave undetermined the question of what sort of account would be most useful to us in practice.

Miracles and Anomalies: Or, Your Religion is False

In 2011 there was an apparent observation of neutrinos traveling faster than light. Wikipedia says of this, “Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.” In other words, most scientists did not take the result very seriously, even before any specific explanation was found. As I stated here, it is possible to push unreasonably far in this direction, in such a way that one will be reluctant to ever modify one’s current theories. But there is also something reasonable about this attitude.

Alexander Pruss explains why scientists tend to be skeptical of such anomalous results in this post on Bayesianism and anomaly:

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

  1. T is true and A is false
  2. T is false and A is true
  3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009999 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

“This answer isn’t optimistic,” because in the case of the neutrinos, this analysis would imply that scientists should have instantly become ten times more willing to consider the possibility that the theory of special relativity is false. This is surely not what happened.

Pruss therefore presents an alternative calculation:

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now setup ensures:

  1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

  2. P(E|∼A ∧ T)=0.5
  3. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

  4. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

  5. P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05
  6. P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T)=0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.
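
For anyone who wants to reproduce Pruss’s arithmetic, here is a minimal sketch using his stated values (his likelihoods, P(T) = 0.999, and P(A) = 0.9, which is what makes P(E|T) come out to 0.05):

```python
# Pruss's values: P(T) = 0.999, P(A) = 0.9, T and A independent, and his
# conservative likelihoods for the anomalous observation E.
P_T, P_A = 0.999, 0.9
P_E_given = {("A", "T"): 0.0, ("notA", "T"): 0.5,
             ("A", "notT"): 0.1, ("notA", "notT"): 0.5}

P_E_if_T    = P_E_given[("A", "T")] * P_A + P_E_given[("notA", "T")] * (1 - P_A)        # 0.05
P_E_if_notT = P_E_given[("A", "notT")] * P_A + P_E_given[("notA", "notT")] * (1 - P_A)  # 0.14

posterior_T = P_E_if_T * P_T / (P_E_if_T * P_T + P_E_if_notT * (1 - P_T))
print(round(posterior_T, 3))   # 0.997: only a small dip from 0.999

# The earlier "intuitive" calculation: condition only on "not both T and A".
not_both = P_T * (1 - P_A) + (1 - P_T) * P_A + (1 - P_T) * (1 - P_A)
print(round(P_T * (1 - P_A) / not_both, 2))   # 0.99: the much bigger drop
```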

To make the point without the mathematics (which in any case is only used to illustrate the point, since Pruss is choosing the specific values himself), if you have a theory which would make the anomaly probable, that theory would be strongly supported by the anomaly. But we already know that theories like that are false, because otherwise the anomaly would not be an anomaly. It would be normal and common. Thus all of the actually plausible theories still make the anomaly an improbable observation, and therefore these theories are only weakly supported by the observation of the anomaly. The result is that the new observation makes at most a minor difference to your previous opinion.

We can apply this analysis to the discussion of miracles. David Hume, in his discussion of miracles, seems to desire a conclusive proof against them which is unobtainable, and in this respect he is mistaken. But near the end of his discussion, he brings up the specific topic of religion and says that his argument applies to it in a special way:

Upon the whole, then, it appears, that no testimony for any kind of miracle has ever amounted to a probability, much less to a proof; and that, even supposing it amounted to a proof, it would be opposed by another proof; derived from the very nature of the fact, which it would endeavour to establish. It is experience only, which gives authority to human testimony; and it is the same experience, which assures us of the laws of nature. When, therefore, these two kinds of experience are contrary, we have nothing to do but subtract the one from the other, and embrace an opinion, either on one side or the other, with that assurance which arises from the remainder. But according to the principle here explained, this subtraction, with regard to all popular religions, amounts to an entire annihilation; and therefore we may establish it as a maxim, that no human testimony can have such force as to prove a miracle, and make it a just foundation for any such system of religion.

The idea seems to be something like this: contrary systems of religion put forth miracles in their support, so the supporting evidence for one religion is more or less balanced by the supporting evidence for the other. Likewise, the evidence is weakened even in itself by people’s propensity to lies and delusion in such matters (some of this discussion was quoted in the earlier post on Hume and miracles). But in addition to the fairly balanced evidence we have experience basically supporting the general idea that the miracles do not happen. This is not outweighed by anything in particular, and so it is the only thing that remains after the other evidence balances itself out of the equation. Hume goes on:

I beg the limitations here made may be remarked, when I say, that a miracle can never be proved, so as to be the foundation of a system of religion. For I own, that otherwise, there may possibly be miracles, or violations of the usual course of nature, of such a kind as to admit of proof from human testimony; though, perhaps, it will be impossible to find any such in all the records of history. Thus, suppose, all authors, in all languages, agree, that, from the first of January, 1600, there was a total darkness over the whole earth for eight days: suppose that the tradition of this extraordinary event is still strong and lively among the people: that all travellers, who return from foreign countries, bring us accounts of the same tradition, without the least variation or contradiction: it is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived. The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies, that any phenomenon, which seems to have a tendency towards that catastrophe, comes within the reach of human testimony, if that testimony be very extensive and uniform.

But suppose, that all the historians who treat of England, should agree, that, on the first of January, 1600, Queen Elizabeth died; that both before and after her death she was seen by her physicians and the whole court, as is usual with persons of her rank; that her successor was acknowledged and proclaimed by the parliament; and that, after being interred a month, she again appeared, resumed the throne, and governed England for three years: I must confess that I should be surprised at the concurrence of so many odd circumstances, but should not have the least inclination to believe so miraculous an event. I should not doubt of her pretended death, and of those other public circumstances that followed it: I should only assert it to have been pretended, and that it neither was, nor possibly could be real. You would in vain object to me the difficulty, and almost impossibility of deceiving the world in an affair of such consequence; the wisdom and solid judgment of that renowned queen; with the little or no advantage which she could reap from so poor an artifice: all this might astonish me; but I would still reply, that the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.

But should this miracle be ascribed to any new system of religion; men, in all ages, have been so much imposed on by ridiculous stories of that kind, that this very circumstance would be a full proof of a cheat, and sufficient, with all men of sense, not only to make them reject the fact, but even reject it without farther examination. Though the Being to whom the miracle is ascribed, be, in this case, Almighty, it does not, upon that account, become a whit more probable; since it is impossible for us to know the attributes or actions of such a Being, otherwise than from the experience which we have of his productions, in the usual course of nature. This still reduces us to past observation, and obliges us to compare the instances of the violation of truth in the testimony of men, with those of the violation of the laws of nature by miracles, in order to judge which of them is most likely and probable. As the violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact; this must diminish very much the authority of the former testimony, and make us form a general resolution, never to lend any attention to it, with whatever specious pretence it may be covered.

Notice how “unfair” this seems to religion, so to speak. What is the difference between the eight days of darkness, which Hume would accept, under those conditions, and the resurrection of the queen of England, which he would not? Hume’s reaction to the two situations is more consistent than first appears. Hume would accept the historical accounts about England in the same way that he would accept the accounts about the eight days of darkness. The difference is in how he would explain the accounts. He says of the darkness, “It is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived.” Likewise, he would accept the historical accounts as certain insofar as they say that a burial ceremony took place, the queen was absent from public life, and so on. But he would not accept that the queen was dead and came back to life. Why? The “search for the causes” seems to explain this. It is plausible to Hume that causes of eight days of darkness might be found, but not plausible to him that causes of a resurrection might be found. He hints at this in the words, “The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies,” while in contrast a resurrection would be “so signal a violation of the laws of nature.”

It is clear that Hume excludes certain miracles, such as resurrection, from the possibility of being established by the evidence of testimony. But he makes the additional point that even if he did not exclude them, he would not find it reasonable to establish a “system of religion” on such testimony, given that “violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact.”

It is hard to argue with the claim that “violations of truth” are especially common in testimony about miracles. But does any of this justify Hume’s negative attitude toward miracles as establishing “systems of religion,” or is it all just prejudice? There may well be a good deal of prejudice involved in his opinions. Nonetheless, Alexander Pruss’s discussion of anomaly allows us to formalize Hume’s idea as a genuine insight as well.

One way to look at truth in religion is to treat it as a way of life or as membership in a community. In this way, asking whether miracles can establish a system of religion is just asking whether a person can be moved to a way of life, or to join a community, through such things. And clearly this is possible, and often happens. But another way to consider truth in religion is to look at a doctrinal system as a set of claims about how the world is. Looked at in this way, a doctrinal system presents a proposed larger context for our place in the world, one that we would be unaware of without the religion. This implies that one should have a prior probability (that is, prior to considering the arguments in its favor) strongly against the system considered as such, for reasons very much like the reasons we should have a prior probability strongly against Ron Conte’s predictions.

We can thus apply Alexander Pruss’s framework. Let us take Mormonism as the “system of religion” in question. Then, taking it as a set of claims about the world, our initial probability would be that it is very unlikely that the world is set up this way. Then let us take a purported miracle establishing this system: Joseph Smith finds his golden plates. In principle, if this cashed out in a certain way, it could actually establish his system. But it doesn’t cash out that way. We know very little about the plates, the circumstances of their discovery (if there was one), and their actual content. Instead, what we are left with is an anomaly: something unusual happened, and it might be describable as “finding golden plates,” but that’s pretty much all we know.

Then we have the theory T, which has a high prior probability: Mormonism is false. We have the observation E: Joseph Smith discovered his golden plates (in one sense or another). And we have the auxiliary hypotheses, which imply that he could not have discovered the plates if Mormonism is false. The Bayesian updates in Pruss’s scheme imply that our conclusion is this: Mormonism is almost certainly false, and there is almost certainly an error in the auxiliary hypotheses implying that he could not have discovered the plates if it were false.
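To make the shape of this update concrete, here is a minimal numerical sketch in Python. The specific probabilities are illustrative assumptions of my own (they come from neither Hume nor Pruss); the point is only that when the prior against the system is far more extreme than our confidence in the auxiliary hypotheses, conditioning on the anomaly leaves the system almost certainly false and discredits the auxiliaries instead.

```python
# A minimal sketch of the update described above. The numbers are made up
# for illustration; only their relative sizes matter.
#
# T = "the religious system (here, Mormonism) is false"  (very high prior)
# A = the auxiliary hypotheses ("the plates could not be found if T is true")
# E = the anomalous observation ("Joseph Smith found golden plates, in some sense")

p_T = 1 - 1e-6   # prior that the system is false
p_A = 0.99       # prior that the auxiliary hypotheses hold

# Likelihood of E under each combination of T and A. If T and A both hold,
# E should not occur at all; if T holds but A fails, an anomaly loosely
# describable as "finding golden plates" is not very surprising; if T is
# false, E is quite likely.
likelihood = {
    (True, True): 0.0,
    (True, False): 0.1,
    (False, True): 0.5,
    (False, False): 0.5,
}

def prior(t, a):
    """Prior probability of a (T, A) combination, assuming independence."""
    return (p_T if t else 1 - p_T) * (p_A if a else 1 - p_A)

p_E = sum(prior(t, a) * likelihood[(t, a)] for t in (True, False) for a in (True, False))
p_T_given_E = sum(prior(True, a) * likelihood[(True, a)] for a in (True, False)) / p_E
p_A_given_E = sum(prior(t, True) * likelihood[(t, True)] for t in (True, False)) / p_E

print(f"P(T | E) = {p_T_given_E:.4f}")  # about 0.9995: the system is still almost certainly false
print(f"P(A | E) = {p_A_given_E:.4f}")  # about 0.0005: the auxiliary hypotheses take the hit
```

Changing the exact figures changes the decimals but not the pattern, so long as the prior against the system dwarfs the prior against the auxiliary hypotheses.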

Thus Hume’s attitude is roughly justified: he should not change his opinion about religious systems in any significant way based on testimony about miracles.

To make you feel better, this does not prove that your religion is false. It just nearly proves that. In particular, it does not take into account an update based on the fact that “many people accept this set of claims.” This is a different fact, and it is not an anomaly. If you update on this fact and end up with a non-trivial probability that your set of claims is true, testimony about miracles might well strengthen this into conviction.

I will respond to one particular objection, however. Some will take this argument to be stubborn and wicked, because it seems to imply that people shouldn’t be “convinced even if someone rises from the dead.” And this does in fact follow, more or less. An anomalous occurrence will in most cases have a perfectly ordinary explanation in terms of things that are already part of our ordinary understanding of the world, without our having to add some larger context. For example, suppose you heard your fan (the appliance, not a person) talking to you. You might suppose that you were hallucinating. But suppose it turns out that you are definitely not hallucinating. Should you conclude that there is some special source from outside the normal world that is communicating with you? No: the fan scenario can happen, and it turns out to have a perfectly everyday explanation. We might agree with Hume that it would be much more implausible for a resurrection to have an everyday explanation. Nonetheless, even if we end up concluding that there is some larger context, and that the miracle has no such everyday explanation, there is no good reason for it to be such and such a specific system of doctrine. Consider again Ron Conte’s predictions for the future. The things that happen between now and 2040, and even the things that happen in the 2400s, are likely to be perfectly ordinary (although the things in the 2400s might differ from current events in fairly radical ways). But even if they are not, and even if apocalyptic, miraculous occurrences are common in those days, this does not raise the probability of Conte’s specific predictions above a trivial level. In the same way, the anomalous occurrences involved in the accounts of miracles will not lend any significant probability to a religious system.

The objection here is that this seems unfair to God, so to speak. What if God wanted to reveal something to the world? What could he do, besides work miracles? I won’t propose a specific answer to this, because I am not God. But I will illustrate the situation with a little story to show that there is nothing unfair to God about it.

Suppose human beings created an artificial intelligence and raised it in a simulated environment. Wanting things to work themselves out “naturally,” so to speak, because it would be less work, and because it would probably be necessary to the learning process, they institute “natural laws” in the simulated world which are followed without exception. Once the AI is “grown up,” they decide to start communicating with it. In the AI’s world, this will surely show up as some kind of miracle: something will happen that is utterly unpredictable to it, and completely inconsistent with the natural laws as it knows them.

Will the AI be forced by the reasoning of this post to ignore the communication? Well, that depends on what exactly occurs and how. At the end of his post, Pruss discusses situations where anomalous occurrences should change your mind:

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: We would put almost no weight in someone finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely doing it (it could be the professor testing the equipment, though), but because this is ground well-gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence, how do we allow anomalies to have a rightful place in undermining theories? The answer is: To undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.
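A small numerical sketch, along the same lines as the one above, may help make the quoted point concrete. Again the probabilities are my own illustrative assumptions: the only thing that differs between the two cases is how likely the anomaly would be if the theory were false, which is exactly what separates a well-explored regime from a new one.

```python
# Illustrative only: T = a well-confirmed theory, A = auxiliary hypotheses
# (equipment, observing conditions), E = an anomaly that contradicts T given A.
# We assume T and A are independent a priori and that P(E | T, A) = 0.

def posterior_T(p_T, p_A, p_E_if_notT_and_A, p_E_if_T_and_notA=0.1, p_E_if_notT_and_notA=0.1):
    """P(T | E) under the assumptions above."""
    blame_A = p_T * (1 - p_A) * p_E_if_T_and_notA
    total = (blame_A
             + (1 - p_T) * p_A * p_E_if_notT_and_A
             + (1 - p_T) * (1 - p_A) * p_E_if_notT_and_notA)
    return blame_A / total

p_T, p_A = 0.999, 0.99

# Well-explored regime (the undergraduate lab): even if T were false, its
# predictions would almost certainly hold here, so P(E | not T, A) is tiny.
print(posterior_T(p_T, p_A, p_E_if_notT_and_A=0.001))  # ~0.998: the anomaly is blamed on A

# New regime (the center of the galaxy): if T were false, a deviation is quite
# plausible here, so P(E | not T, A) is large.
print(posterior_T(p_T, p_A, p_E_if_notT_and_A=0.5))    # ~0.67: the anomaly counts against T itself
```

In the first case the anomaly is absorbed almost entirely by the auxiliary hypotheses; in the second it takes a real bite out of the theory itself, which is why it matters so much what kind of situation the AI finds itself in.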

If the AI finds itself in an entirely new situation, for example if, rather than hearing an obscure voice from a fan, it can consistently talk with the newly discovered occupant of its world, it will have no trouble realizing that its situation has changed, and no difficulty concluding that it is receiving communication from its author. This does, sort of, give one particular method that could be used to communicate a revelation. But there might well be many others.

Our objector will continue. This is still not fair. Now you are saying that God could give a revelation but that if he did, the world would be very different from the actual world. But what if he wanted to give a revelation in the actual world, without it being any different from the way it is? How could he convince you in that case?

Let me respond with an analogy. What if the sky were actually red like the sky of Mars, but looked blue, just as it does now? What would convince you that it was red? On the objector’s logic, the fact that there is no way to convince you that it is red in our actual situation means that you are unfairly prejudiced against the redness of the sky.

In other words: yes, I am unwilling to be convinced that the sky is red except in situations where it is actually red, and those situations are quite different from our actual situation. And likewise, I am unwilling to be convinced of a revelation except in situations where there actually is a revelation, and those situations are quite different from our actual one.