Was Kavanaugh Guilty?

No, I am not going to answer the question. This post will illustrate and argue for a position that I have argued many times in the past, namely that belief is voluntary. The example is simply a particularly good one for making the point. I will also be using a framework something like Bryan Caplan’s in his discussion of rational irrationality:

Two forces lie at the heart of economic models of choice: preferences and prices. A consumer’s preferences determine the shape of his demand curve for oranges; the market price he faces determines where along that demand curve he resides. What makes this insight deep is its generality. Economists use it to analyze everything from having babies to robbing banks.

Irrationality is a glaring exception. Recognizing irrationality is typically equated with rejecting economics. A “logic of the irrational” sounds self-contradictory. This chapter’s central message is that this reaction is premature. Economics can handle irrationality the same way it handles everything: preferences and prices. As I have already pointed out:

  • People have preferences over beliefs: A nationalist enjoys the belief that foreign-made products are overpriced junk; a surgeon takes pride in the belief that he operates well while drunk.
  • False beliefs range in material cost from free to enormous: Acting on his beliefs would lead the nationalist to overpay for inferior goods, and the surgeon to destroy his career.

Snapping these two building blocks together leads to a simple model of irrational conviction. If agents care about both material wealth and irrational beliefs, then as the price of casting reason aside rises, agents consume less irrationality. I might like to hold comforting beliefs across the board, but it costs too much. Living in a Pollyanna dreamworld would stop me from coping with my problems, like that dead tree in my backyard that looks like it is going to fall on my house.
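Caplan’s two building blocks can be put in a few lines of code. This is only a toy sketch with numbers of my own invention (the function name and the benefit and cost figures are illustrative, not anything from Caplan): the agent keeps a comforting false belief exactly when its psychological payoff exceeds its material price.

```python
# Toy model of rational irrationality: preferences over beliefs plus a
# material price for holding them. All numbers are illustrative.

def holds_false_belief(psychic_benefit: float, material_cost: float) -> bool:
    """Keep the comforting belief iff it is worth its material price."""
    return psychic_benefit > material_cost

# The nationalist: overpaying slightly for domestic goods is a price he
# is willing to pay, so the belief survives.
print(holds_false_belief(psychic_benefit=50, material_cost=10))      # True

# The drunk surgeon: the belief would destroy his career, so the price
# is too high and the belief is abandoned.
print(holds_false_belief(psychic_benefit=50, material_cost=10_000))  # False
```

As the price of casting reason aside rises, the same preferences produce less irrationality, which is the whole content of the model.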

Let us assume that people are considering whether to believe that Brett Kavanaugh was guilty of sexual assault. For ease of visualization, let us suppose that they have utility functions defined over the following outcomes:

(A) Believe Kavanaugh was guilty, and turn out to be right

(B) Believe Kavanaugh was guilty, and turn out to be wrong

(C) Believe Kavanaugh was innocent, and turn out to be right

(D) Believe Kavanaugh was innocent, and turn out to be wrong

(E) Admit that you do not know whether he was guilty or not (this will be presumed to be a true statement, but I will count it as less valuable than a true statement that includes more detail.)

(F) Say something bad about your political enemies

(G) Say something good about your political enemies

(H) Say something bad about your political allies

(I) Say something good about your political allies

Note that options A through E are mutually exclusive, while one or more of options F through I might or might not come together with one of those from A through E.

Let’s suppose there are three people, a right winger who cares a lot about politics and little about truth, a left winger who cares a lot about politics and little about truth, and an independent who does not care about politics and instead cares a lot about truth. Then we posit the following table of utilities:

                     Right Winger   Left Winger   Independent
(A)                       10             10           100
(B)                      -10            -10          -100
(C)                       10             10           100
(D)                      -10            -10          -100
(E)                        5              5            50
(F)                      100            100             0
(G)                     -100           -100             0
(H)                     -100           -100             0
(I)                      100            100             0

The columns for the right and left wingers are the same, but the totals will be calculated differently because saying something good about Kavanaugh, for the right winger, is saying something good about an ally, while for the left winger, it is saying something good about an enemy, and there is a similar contrast if something bad is said.

Now there are really only three options we need to consider, namely “Believe Kavanaugh was guilty,” “Believe Kavanaugh was innocent,” and “Admit that you do not know.” In addition, in order to calculate expected utility according to the above table, we need a probability that Kavanaugh was guilty. In order not to offend readers who have already chosen an option, I will assume a probability of 50% that he was guilty, and 50% that he was innocent. Using these assumptions, we can calculate the following ultimate utilities:

                     Right Winger   Left Winger   Independent
Claim Guilt              -100           100             0
Claim Innocence           100          -100             0
Confess Ignorance           5             5            50

(I won’t go through this calculation in detail; it should be evident that given my simple assumptions of the probability and values, there will be no value for anyone in affirming guilt or innocence as such, but only in admitting ignorance, or in making a political point.) Given these values, obviously the left winger will choose to believe that Kavanaugh was guilty, the right winger will choose to believe that he was innocent, and the independent will admit to being ignorant.
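The calculation can also be checked mechanically. The sketch below reproduces the second table from the first under the stated assumptions (a 50% probability of guilt, with political payoffs assigned according to whether Kavanaugh counts as an ally or an enemy); the variable names are mine, not part of the argument.

```python
# Expected utilities for the three choices, given the utility table above
# and a 50% probability of guilt. For the right winger, claiming guilt
# attacks an ally (-100) and claiming innocence praises an ally (+100);
# for the left winger it is the reverse; the independent gets no
# political payoff at all.

P_GUILT = 0.5

agents = {
    "Right Winger": dict(right=10,  wrong=-10,  unsure=5,  pol_guilt=-100, pol_innocent=100),
    "Left Winger":  dict(right=10,  wrong=-10,  unsure=5,  pol_guilt=100,  pol_innocent=-100),
    "Independent":  dict(right=100, wrong=-100, unsure=50, pol_guilt=0,    pol_innocent=0),
}

def expected_utilities(a, p=P_GUILT):
    return {
        "Claim Guilt":       p * a["right"] + (1 - p) * a["wrong"] + a["pol_guilt"],
        "Claim Innocence":   (1 - p) * a["right"] + p * a["wrong"] + a["pol_innocent"],
        "Confess Ignorance": a["unsure"],
    }

for name, a in agents.items():
    eu = expected_utilities(a)
    print(name, "->", max(eu, key=eu.get))
# Right Winger -> Claim Innocence
# Left Winger -> Claim Guilt
# Independent -> Confess Ignorance
```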

This account obviously makes complete sense of people’s actual positions on the question, and it does that by assuming that people voluntarily choose to believe a position in the same way they choose to do other things. On the other hand, if you assume that belief is an involuntary evaluation of a state of affairs, how could the actual distribution of opinion possibly be explained?

As this is a point I have discussed many times in the past, I won’t try to respond to all possible objections. However, I will bring up two of them. In the example, I had to assume that people calculated using a probability of 50% for Kavanaugh’s guilt or innocence. So it could be objected that their “real” belief is that there is a 50% chance he was guilty, and the statement is simply an external thing.

This initial 50% is something like a prior probability, and corresponds to a general leaning towards or away from a position. As I admitted in discussion with Angra Mainyu, that inclination is largely involuntary. However, first, this is not what we call a “belief” in ordinary usage, since we frequently say that someone has a belief while having some qualms about it. Second, it is not completely immune from voluntary influences. In practice in a situation like this, it will represent something like everything the person knows about the subject and predicate apart from this particular claim. And much of what the person knows will already be in subject/predicate form, and the person will have arrived at it through a similar voluntary process.

Another objection is that at least in the case of something obviously true or obviously false, there cannot possibly be anything voluntary about it. No one can choose to believe that the moon is made of green cheese, for example.

I have responded to this in the past by pointing out that most of us also cannot choose to go and kill ourselves, right now, despite the fact that doing so would be voluntary. And in a similar way, there is nothing attractive about believing that the moon is made of green cheese, and so no one can do it. At least two objections will be made to this response:

1) I can’t go kill myself right now, but I know that this is because it would be bad. But I cannot believe that the moon is made of green cheese because it is false, not because it is bad.

2) It does not seem that much harm would be done by choosing to believe this about the moon, and then changing your mind after a few seconds. So if it is voluntary, why not prove it by doing so? Obviously you cannot do so.

Regarding the first point, it is true that believing the moon is made of cheese would be bad because it is false. And in fact, if falsity is the reason you cannot accept it, how is that not because you regard falsity as really bad? In fact, lack of attractiveness is extremely relevant here. If people can believe in Xenu, they could equally well believe that the moon was made of cheese, if that were the teaching of their religion. In that situation, the falsity of the claim would not be much of an obstacle at all.

Regarding the second point, there is a problem like Kavka’s Toxin here. Choosing to believe something, roughly speaking, means choosing to treat it as a fact, which implies a certain commitment. Choosing to act like it is true enough to say so, then immediately doing something else, is not choosing to believe it, but rather choosing to tell a lie. So just as one cannot intend to drink the toxin without expecting to actually drink it, one cannot choose to believe something without expecting to continue to believe it for the foreseeable future. This is why one would not wish to accept such a statement about the moon, not only for the sake of proving a point (especially since it would prove nothing; no one would admit that you had succeeded in believing it), but even if someone were to offer a very large incentive, say a million dollars if you managed to believe it. This would amount to offering to pay someone to give up their concern for truth entirely, and permanently.

Additionally, in the case of some very strange claims, it might be true that people do not know how to believe them, in the sense that they do not know what “acting as though this were the case” would even mean. This no more affects the general voluntariness of belief than the fact that some people cannot do backflips affects the fact that such bodily motions are in themselves voluntary.

C.S. Lewis on Punishment

C.S. Lewis discusses a certain theory of punishment:

In England we have lately had a controversy about Capital Punishment. … My subject is not Capital Punishment in particular, but that theory of punishment in general which the controversy showed to be almost universal among my fellow-countrymen. It may be called the Humanitarian theory. Those who hold it think that it is mild and merciful. In this I believe that they are seriously mistaken. I believe that the “Humanity” which it claims is a dangerous illusion and disguises the possibility of cruelty and injustice without end. I urge a return to the traditional or Retributive theory not solely, not even primarily, in the interests of society, but in the interests of the criminal.

According to the Humanitarian theory, to punish a man because he deserves it, and as much as he deserves, is mere revenge, and, therefore, barbarous and immoral. It is maintained that the only legitimate motives for punishing are the desire to deter others by example or to mend the criminal. When this theory is combined, as frequently happens, with the belief that all crime is more or less pathological, the idea of mending tails off into that of healing or curing and punishment becomes therapeutic. Thus it appears at first sight that we have passed from the harsh and self-righteous notion of giving the wicked their deserts to the charitable and enlightened one of tending the psychologically sick. What could be more amiable? One little point which is taken for granted in this theory needs, however, to be made explicit. The things done to the criminal, even if they are called cures, will be just as compulsory as they were in the old days when we called them punishments. If a tendency to steal can be cured by psychotherapy, the thief will no doubt be forced to undergo treatment. Otherwise, society cannot continue.

My contention is that this doctrine, merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being.

The reason is this. The Humanitarian theory removes from Punishment the concept of Desert. But the concept of Desert is the only connecting link between punishment and justice. It is only as deserved or undeserved that a sentence can be just or unjust. I do not here contend that the question “Is it deserved?” is the only one we can reasonably ask about a punishment. We may very properly ask whether it is likely to deter others and to reform the criminal. But neither of these two last questions is a question about justice. There is no sense in talking about a “just deterrent” or a “just cure”. We demand of a deterrent not whether it is just but whether it will deter. We demand of a cure not whether it is just but whether it succeeds. Thus when we cease to consider what the criminal deserves and consider only what will cure him or deter others, we have tacitly removed him from the sphere of justice altogether; instead of a person, a subject of rights, we now have a mere object, a patient, a “case”.

Later in the essay, he gives some examples of how the Humanitarian theory will make things worse, as in the following case:

The immediate starting point of this article was a letter I read in one of our Leftist weeklies. The author was pleading that a certain sin, now treated by our laws as a crime, should henceforward be treated as a disease. And he complained that under the present system the offender, after a term in gaol, was simply let out to return to his original environment where he would probably relapse. What he complained of was not the shutting up but the letting out. On his remedial view of punishment the offender should, of course, be detained until he was cured. And of course the official straighteners are the only people who can say when that is. The first result of the Humanitarian theory is, therefore, to substitute for a definite sentence (reflecting to some extent the community’s moral judgment on the degree of ill-desert involved) an indefinite sentence terminable only by the word of those experts–and they are not experts in moral theology nor even in the Law of Nature–who inflict it. Which of us, if he stood in the dock, would not prefer to be tried by the old system?

This post will make three points:

(1) The “Humanitarian” theory is basically correct about the purpose of punishment.

(2) C.S. Lewis is right that there are good reasons to talk about justice and about what someone deserves or does not deserve. Such considerations are, as he supposes, essential to a system of justice. Lewis is also right to suppose that many supporters of the Humanitarian theory, despite being factually correct about the purpose of punishment, are mistaken in opposing such talk as cruel and immoral.

(3) Once the Humanitarian theory is corrected in such a way as to incorporate the notion of “just deserts”, Lewis’s objections fail.

Consider the first point, the purpose of punishment. There was already some discussion of this in a previous post. In a sense, everyone already knows that Humanitarians are right about the basic purpose of punishment, including C.S. Lewis. Lewis points out the obvious fact himself: whatever you call them and however you explain them, punishments for crime are compulsory in a society because “otherwise, society cannot continue.” But why cannot society continue without punishment? What supposedly would happen if you did not have any punishments? What would actually happen if a government credibly declared that it would never again punish anything?

What would actually happen, of course, is that this would amount to a declaration that the government was dissolving itself, and someone else would take over and establish new crimes and new punishments, either at the same level of generality as the original government, or at more local levels (e.g. perhaps each town would become a city-state). In any case each of the new governments would still have punishments, so you would not have succeeded in abolishing punishment.

What happens in the imaginary situation where you do succeed, where no one else takes over? This presumably would be a Hobbesian “state of nature,” which is not a society at all. In other words, the situation simply does not count as a society at all, unless certain rules are followed pretty consistently. And those rules will not be followed consistently without punishments. So it is easy to see why punishment exists: to make sure that those rules are followed, generally speaking. Since rules are meant to make some things happen and prevent other things, punishment is simply to make sure that the rules actually function as rules. But this is exactly what the Humanitarian theory says is the purpose of punishment: to make others less likely to break the rules, and to make the one who has already broken the rules less likely to break them in the future.

Thus C.S. Lewis himself is implicitly recognizing that the Humanitarians are basically right about the purpose of punishment, in acknowledging that punishment is necessary for the very existence of society.

Let’s go on to the second point, the idea of just deserts. C.S. Lewis is right that many proponents of the Humanitarian view believe either that the idea is absurd, or that if there is such a thing as deserving something, no one can deserve something bad, or that even if people can deserve things, this is not really a relevant consideration for a justice system. For example, it appears that Kelsey Piper, blogging at The Unit of Caring, believes something along these lines; she has a pretty reasonable post responding to criticisms of the theory analogous to those of C.S. Lewis.

I will approach this by saying a few things about what a law is in general. St. Thomas defines law: “It is nothing else than an ordinance of reason for the common good, made by him who has care of the community, and promulgated.” But let’s drop the careful formulation and the conditions, as necessary as they may be. St. Thomas’s definition is simply a more detailed account of what everyone knows: a law is a rule that people invent for the benefit of a community.

Is there such a thing as an unjust law? In St. Thomas’s account, in a sense yes, and in a sense no. “For the common good” means that the law is beneficial. In that sense, if the law is “unjust,” it is harmful, and thus it is not for the common good. And in that sense it does not satisfy the definition of a law, and so is not a law at all. But obviously ordinary people will call it a law anyway, and in that way it is an unjust law, because it is unsuited to the purpose of a law.

Now here’s the thing. An apparent rule is not really a rule at all unless it tends to make something happen. In the case that we are talking about, namely human law, that generally means that laws require penalties for being broken in order to be laws at all. It is true that in a society with an extremely strong respect for law, it might occasionally be possible to make a law without establishing any specific penalty, and still have that law followed. The community would still need to leave itself the option of establishing a penalty; otherwise it would just be advice rather than a law.

This causes a slight problem. The purpose of a law is to make sure that certain things are done and others avoided, and the reason for penalties is to back up this purpose. But when someone breaks the law, the law has already failed. The very thing the law was meant to prevent has already happened. And what now? Should the person be punished? Why? To prevent the law from being broken? It has already been broken. So we cannot prevent it from being broken. And the thing is, punishment is something bad. So to inflict the punishment now, after the crime has already been committed, seems like just stacking one bad thing on top of another.

At this point the “Retributive” theory of justice will chime in. “We should still inflict the punishment because it is just, and the criminal deserves it.”

This is the appeal of the Humanitarian’s condemnation of the retributive theory. The Retributive theory, the Humanitarian will say, is just asserting that something bad, namely the punishment, in this situation, is something good, by bringing in the idea of “justice.” But this is a contradiction: something bad is bad by definition, and cannot be good.

The reader is perhaps beginning to understand the placement of the previous post. A law is established, with a penalty for being broken, in order to make certain things happen. This is like intending to drink the toxin. But if someone breaks the law, what is the point of inflicting the punishment? And the next morning, what is the point of drinking the toxin in the afternoon, when the money is already received or not? There is a difference of course, because in this case the dilemma only comes up because the law has been broken. We could make the cases more analogous, however, by stipulating in the case of Kavka’s toxin that the rich billionaire offers this deal: “The million will be found in your account, with a probability of 99.99%, if and only if you intend to drink the toxin only if the million is not found in your account (which will happen only in the unlucky 0.01% of cases), and you do not need to drink or intend to drink in the situation where the million is found in your account.” In this situation, the person might well reason thus:

If the morning comes and the million is not in my account, why on earth would I drink the toxin? This deal is super unfair.

Nonetheless, as in the original deal, there is one and only one way to get the million: namely, by planning to drink the toxin in that situation, and by planning not to reconsider, no matter what. As in the case of law, the probability factor that I added means that it is possible not to get the million, although you probably will. But the person who formed this intention will go through with it and drink the toxin, unless they reconsider; and they had the definite intention of not reconsidering.
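The arithmetic behind this reasoning is easy to check. In the sketch below the disutility of a day of painful illness is a stand-in figure of my own (neither the post nor Kavka assigns it a number); the point is only that forming the intention dominates refusing to form it, since the unlucky case is so improbable.

```python
# Expected value of committing to the modified toxin deal, with an
# illustrative (made-up) dollar cost for a day of painful illness.

P_MILLION = 0.9999        # chance the million shows up, given the intention
PRIZE = 1_000_000
SICKNESS_COST = 5_000     # hypothetical stand-in for a day of illness

# If you genuinely intend to drink in the unlucky case: you almost always
# get the million and drink nothing; rarely you get nothing and drink.
ev_commit = P_MILLION * PRIZE + (1 - P_MILLION) * (-SICKNESS_COST)

# If you refuse to form the intention, nothing happens either way.
ev_refuse = 0

print(ev_commit > ev_refuse)  # True
```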

The situations are now more analogous, but there is still an additional difference, one that makes it even easier to decide to follow the law than to drink the toxin. The only reason to commit to drinking the toxin was to get the million, and in our current situation that has already failed to happen. The law, likewise, had the purpose of preventing the criminal from performing a certain action, and that purpose has already failed. But the law also has the purpose of preventing them from doing it in the future, and of preventing others from doing it. So there are additional motivations for carrying out the law.

We can leave the additional difference to the side for now, however. The point would be essentially valid even if you made a law to prevent one particular act, and that act ended up being done. The retributivist would say, “Ok, so applying the punishment at this point will not prevent the thing it was meant to prevent. But it is just, and the criminal deserves it, and we should still inflict it.” And they are right: the whole idea of establishing the rule included the idea that the punishment would actually be carried out, in this situation. There was a rule against reconsidering the rule, just as the fellow in the situation with the toxin planned not to reconsider their plan.

What is meant, then, when it is said that a punishment is “just” and that the criminal “deserves it,” is simply that it is what is required by the rules we have established, and that those rules are reasonable ones.

Someone will object here. It seems that this cannot be true, because some punishments are wicked and unjust even though there were rules establishing them. And it seems that this is because people simply do not deserve those things: so there must be such a thing as “what they deserve,” in itself and independent of any rules. But this is where we must return to the point made above about just and unjust laws. One hears, for example, of cases in which people were sentenced to death for petty theft. We can agree that this is unjust in itself: but this is precisely because the rule, “someone who steals food should be killed,” is not a reasonable rule which will benefit the community. You might have something good in mind for it, namely to prevent stealing, but if you carry out the penalty on even one occasion, you have done more harm than all the stealing put together. The Humanitarians are right that the thing inflicted in a punishment is bad, and remains bad. It does not become something good in that situation. And this is precisely why it needs some real proportion to the crime.
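The proportionality point can be illustrated with a deliberately crude tally. All of the harm figures here are invented for illustration; the only claim is the comparison: even if executing one thief deterred almost all theft, the rule would still increase total harm.

```python
# A crude harm tally for the "death for petty theft" rule. The harm
# units are invented; only the comparison between the two rules matters.

HARM_PER_THEFT = 1
HARM_PER_EXECUTION = 10_000   # assumed vastly worse than any petty theft

def total_harm(thefts: int, executions: int) -> int:
    return thefts * HARM_PER_THEFT + executions * HARM_PER_EXECUTION

no_penalty = total_harm(thefts=500, executions=0)    # rampant theft
death_penalty = total_harm(thefts=5, executions=1)   # near-total deterrence

print(no_penalty, death_penalty)  # 500 10005
```

Even a single execution swamps all the theft it prevents, which is exactly the sense in which such a rule fails to be “for the common good.”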

We can analyze the situation in two ways: from the point of view of the State, considered as though it were a kind of person, and from the point of view of the person who carries out the law. The State makes a kind of promise to inflict a punishment for some crimes, in such a way as to minimize the total harm of both the crimes and their punishment. Additionally, to some extent it promises not to reconsider this in a situation where a crime is actually committed. “To some extent” here is of course essential: such rules are not and should not be absolutely rigid. If the crime is actually committed, the State is in a situation like that of our person who finds himself without the million, having committed to drink the toxin in that situation: the normal result will be that the State inflicts the punishment, and the person drinks the toxin, without any additional consideration of motivations or reasons.

From the point of view of the individual, he carries out the sentence “because it is just,” i.e. because it is required by reasonable rules which we have established for the good of the community. And that, i.e. carrying out reasonable laws, is a good thing, even though the material content includes something bad. The moral object of the executioner is the fulfillment of justice, not the killing of a person.

We have perhaps already pointed the way to the last point, namely that with the incorporation of the idea of justice, C.S. Lewis’s criticisms fail. Lewis argues that if the purpose of punishment is medicinal, then it is in principle unlimited: but this is not true even of medicine. No one would take medicine which would cause more harm than the disease, nor would it be acceptable to compel someone else to take such medicine.

More importantly, Lewis’s criticisms play off the problems caused by believing that one needs to consider at every point, “will the consequences of this particular punishment or action be good or not?” This is not necessary, because this is not the way law works, despite the fact that its general purpose is the one supposed. Law only works because to some extent it promises not to reconsider, like our fellow in the case of Kavka’s toxin. Just as you would be wrong to focus on whether “drinking the toxin right now will harm me and not benefit me,” so the State would be wrong to focus too much on the particular consequences of carrying out the law right now, as opposed to the general consequences of the general law.

Thus for example Lewis supposes rulers considering the matter in an entirely utilitarian way:

But that is not the worst. If the justification of exemplary punishment is not to be based on desert but solely on its efficacy as a deterrent, it is not absolutely necessary that the man we punish should even have committed the crime. The deterrent effect demands that the public should draw the moral, “If we do such an act we shall suffer like that man.” The punishment of a man actually guilty whom the public think innocent will not have the desired effect; the punishment of a man actually innocent will, provided the public think him guilty. But every modern State has powers which make it easy to fake a trial. When a victim is urgently needed for exemplary purposes and a guilty victim cannot be found, all the purposes of deterrence will be equally served by the punishment (call it “cure” if you prefer) of an innocent victim, provided that the public can be cheated into thinking him guilty. It is no use to ask me why I assume that our rulers will be so wicked.

As said, this is not the way law works. The question will be about which laws are reasonable and beneficial in general, not about whether such and such particular actions are beneficial in particular cases. Consider a proposed law formulated with such an idea in mind:

When the ruling officials believe that it is urgently necessary to deter people from committing a crime, and no one can be found who has actually committed it, the rulers are authorized to deceive the public into believing that an innocent man has committed the crime, and to punish that innocent man.

It should not be necessary to make a long argument that, as a general rule, this does not serve the good of a community, regardless of what might happen in particular cases. In this way it is quite right to say that this is unjust in itself. This does not, however, establish that “what someone deserves” has any concrete content which is not established by law.

As a sort of footnote to this post, we might note that “deserts” are sometimes extended to natural consequences in much the way “law” is extended to laws of nature, mathematics, or logic. For example, Bryan Caplan distinguishes “deserving” and “undeserving” poor:

I propose to use the same standard to identify the “deserving” and “undeserving” poor.  The deserving poor are those who can’t take – and couldn’t have taken – reasonable steps to avoid poverty. The undeserving poor are those who can take – or could have taken – reasonable steps to avoid poverty.  Reasonable steps like: Work full-time, even if the best job you can get isn’t fun; spend your money on food and shelter before you get cigarettes or cable t.v.; use contraception if you can’t afford a child.  A simple test of “reasonableness”: If you wouldn’t accept an excuse from a friend, you shouldn’t accept it from anyone.

This is rather different from the sense discussed in this post, but you could view it as an extension of it. It is a rule (of mathematics, really) that “if you spend all of your money you will not have any left,” and we probably do not need to spend much effort trying to change this situation, considered in general, even if we might want to change it for an individual.

Kavka’s Toxin

Gregory Kavka discusses a thought experiment:

You are feeling extremely lucky. You have just been approached by an eccentric billionaire who has offered you the following deal. He places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. (Your spouse, a crack biochemist, confirms the properties of the toxin.) The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. (This is confirmed by your daughter, a lawyer, after she examines the legal and financial documents that the billionaire has signed.) All you have to do is sign the agreement and then intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin. (The presence or absence of the intention is to be determined by the latest ‘mind-reading’ brain scanner and computing device designed by the great Doctor X. As a cognitive scientist, materialist, and former student of Doctor X, you have no doubt that the machine will correctly detect the presence or absence of the relevant intention.)

Confronted with this offer, you gleefully sign the contract, thinking ‘what an easy way to become a millionaire’. Not long afterwards, however, you begin to worry. You had been thinking that you could avoid drinking the toxin and just pocket the million. But you realize that if you are thinking in those terms when midnight rolls around, you will not be intending to drink the toxin tomorrow. So maybe you will actually have to drink the stuff to collect the money. It will not be pleasant, but it is sure worth a day of suffering to become a millionaire.

However, as occurs to you immediately, it cannot really be necessary to drink the toxin to pocket the money. That money will either be or not be in your bank account by 10 a.m. tomorrow, you will know then whether it is there or not, and your drinking or not drinking the toxin hours later cannot affect the completed financial transaction. So instead of planning to drink the toxin, you decide to intend today to drink it and then change your mind after midnight. But if that is your plan, then it is obvious that you do not intend to drink the toxin. (At most you intend to intend to drink it.) For having such an intention is incompatible with planning to change your mind tomorrow morning.

The discussion goes on from here for some time, but the resolution of the puzzle is easier than Kavka realizes. There is only a problem because it is implicitly assumed that the belief that you will or will not drink the toxin is something different from the intention to drink it. But in the case of voluntary actions, these are one and the same. The reason you cannot intend to drink the toxin without thinking that you will end up drinking it is simply that the intention to drink the toxin is the belief that you will end up drinking it. If the brain scanner works correctly, it registers that you intend to drink the toxin if you in fact think you will end up drinking it, and it registers that you do not intend this if you in fact think you will not drink it.

Is there a problem on the practical level? That is, is it possible for someone to get the million, or is it impossible because everyone in such a situation would expect that they would reconsider tomorrow morning, and therefore they will not believe that they will end up drinking it?

Possibly, at least for some people. It is entirely possible in some situations that beliefs about what you will in fact do, apparently based simply on the facts, entirely prevent certain decisions and intentions. Thus if someone has tried dozens of times in the past to give up smoking, and consistently failed, it will become more and more difficult to intend to give up smoking, and may very well become impossible.

However, Kavka gives a theoretical argument that this should be impossible in the case of his thought experiment:

Thus, we can explain your difficulty in earning a fortune: you cannot intend to act as you have no reason to act, at least when you have substantial reason not to act. And you have (or will have when the time comes) no reason to drink the toxin, and a very good reason not to, for it will make you quite sick for a day.

Again, it may well be that this reasoning would cause an individual to fail to obtain the million. But it is not necessary for this to happen. For the person does have a reason to intend to drink the toxin in the first place: namely, in order to obtain the million. And tomorrow morning their decision, i.e. their belief that they will drink the toxin, will be an efficient cause of them actually drinking the toxin, unless they reconsider. Thus if a person expects to reconsider, they may well fail to obtain the million. But someone wanting to obtain the million will also therefore plan not to reconsider. And tomorrow morning their belief that they will not reconsider will be an efficient cause of them not reconsidering, unless they reconsider their plan not to reconsider. And so on.

Thus, someone can only obtain the million if they plan to drink the toxin, they plan not to reconsider this plan, and so on. And someone with this plan can obtain the million. And maybe they will end up drinking the toxin and maybe they won’t; but the evening before, they believe that they factually will drink it. Without that belief, they fail to obtain the million. And they may well in fact drink it, simply by carrying out the original plan: going about their day without thinking about it, and simply drinking that afternoon, without any additional consideration of reasons to drink or not drink.

There is also a way to obtain the million and avoid drinking, but it cannot happen on purpose. This can happen only in one way: namely, by being lucky. You plan on every level not to reconsider, and expect this to happen, but luckily you end up being mistaken, and you do reconsider, despite expecting not to. In this case you both obtain the million, and avoid the drink.

 

Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting:

I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

Robin Hanson disagreed, responding with this post:

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

  1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
  2. Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
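As an aside, Hanson’s arithmetic can be checked directly. A minimal sketch, using only the figures from the quoted passage (the 2% rate, the 12,000-year horizon, and the 1/1000 chance), confirms that the return exceeds a googol even after the probability discount:

```python
import math

# Check Hanson's arithmetic: a 2% annual return compounded over
# 12,000 years, multiplied by a 1/1000 chance that the far-future
# recipients exist to receive it, still exceeds a googol (10^100).
# All numbers are taken from the quoted passage.
years = 12_000
log10_growth = years * math.log10(1.02)                 # log10 of the total multiplier
log10_discounted = log10_growth + math.log10(1 / 1000)  # apply the 1/1000 chance

print(f"log10 of total return: {log10_growth:.1f}")          # ≈ 103.2
print(f"after the 1/1000 discount: {log10_discounted:.1f}")  # ≈ 100.2
```

Working in base-10 logarithms avoids computing the astronomically large multiplier itself.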

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view: Hanson is right, and Yudkowsky is wrong, for a reason still more idealistic than Yudkowsky’s. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now etc., such that your mutually exclusive options include the possibility of saving 10 lives x years from now for all x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem does not arise from this initial assignment, but it arises when we consider what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now,” “you will choose 10 years from now,” and “you will choose 100 years from now” are all assigned probabilities, they cannot all be assigned equal probabilities; you must be more likely, in general and overall, to choose the options less distant in time. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
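The point can be illustrated numerically. A minimal sketch, using a geometric distribution as one arbitrary example of a proper distribution over the positive integers (the 1% per-year parameter is purely illustrative, not anything implied by the argument), finds the n by which 99.99% of the probability is exhausted:

```python
# Any proper probability distribution over the positive integers must
# concentrate nearly all of its mass on a finite initial segment, since
# the cumulative probabilities converge to 1. Illustration with a
# geometric distribution; the 1% per-year parameter is an arbitrary
# choice for this sketch.
p = 0.01                            # hypothetical chance of choosing each successive year
n = 0
while 1 - (1 - p) ** n < 0.9999:    # P(choose within the first n years)
    n += 1

print(n)  # 917: 99.99% of the probability falls within the first 917 years
```

Whatever distribution is chosen, some such finite n exists, which is the bias towards earlier times described above.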

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case, we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that hypotheses do not change their complexity, but time passes, making things distant in time become things nearer in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.
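The dynamic inconsistency can be seen in a small sketch. All the particular numbers here (a flat weight for ten years followed by a drop to half, the two rescue options, the 0.95 exponential factor) are hypothetical illustrations, not anything from the text; the point is that the flat-then-drop schedule reverses its ranking as time passes, while an exponential discount never does:

```python
# A non-exponential discount schedule produces preference reversals
# as the evaluation date moves forward; an exponential schedule does not,
# because shifting the date rescales both options by the same factor.

def flat_then_drop(delay):
    # Hypothetical schedule: full weight up to a 10-year horizon, half after.
    return 1.0 if delay <= 10 else 0.5

def exponential(delay, d=0.95):
    # Hypothetical constant discount factor per year.
    return d ** delay

def prefers_A(weight, now):
    # Option A: save 10 lives at calendar year 9.
    # Option B: save 11 lives at calendar year 12.
    value_A = 10 * weight(9 - now)
    value_B = 11 * weight(12 - now)
    return value_A > value_B

# Flat-then-drop: at year 0 option A wins, but by year 5 option B wins.
print(prefers_A(flat_then_drop, now=0))  # True  (A: 10.0 vs B: 5.5)
print(prefers_A(flat_then_drop, now=5))  # False (A: 10.0 vs B: 11.0)

# Exponential: the ranking is the same at both evaluation dates.
print(prefers_A(exponential, now=0), prefers_A(exponential, now=5))  # True True
```

This is the standard result that only exponential discounting yields time-consistent preferences.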

 

“Moral” Responsibility

In a passage quoted here, Jerry Coyne objected to the “moral” in “moral responsibility”:

To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

Suppose someone completely insane happens to kill another person, under the mistaken belief that they are doing something completely different. In such a case, “an identifiable person did this or that good or bad action,” and yet we do not say they are responsible, much less blame such a person; rather we may subject them to physical restraints, but we no more blame them than we blame the weather for the deaths that it occasionally inflicts on people. In other words, Coyne’s definition does not even work for “responsibility,” let alone moral responsibility.

Moral action has a specific meaning: something that is done, and not merely an action in itself, but in comparison with the good proposed by human reason. Consequently we have moral action only when we have something voluntarily done by a human being for a reason, or (if without a reason) with the voluntary omission of the consideration of reasons. In exactly the same situations we have moral responsibility: namely, someone voluntarily did something good, or someone voluntarily did something bad.

Praise and blame are added precisely because people are acting for reasons, and given that people tend to like praise and dislike blame, these elements, if rightly applied, will make good things better, and thus more likely to be pursued, and bad things worse, and thus more likely to be avoided. As an aside, this also suggests occasions when it is a bad idea to blame someone for something bad: namely, when blame is not likely to reduce the bad activity, or is likely to reduce it only very little, since in this case you are simply making things worse, period.

Stop, Coyne and others will say. Even if we agree with the point about praise and blame, we do not agree about moral responsibility, unless determinism is false. And nothing in the above paragraphs even refers to determinism or its opposite, and thus the above cannot be a full account of moral responsibility.

The above is, in fact, a basically complete account of moral responsibility. Although determinism is false, as was said in the linked post, its falsity has nothing to do with the matter one way or another.

The confusion about this results from a confusion between an action as a being in itself, and an action as moral, namely as considered by reason. This distinction was discussed here while considering what it means to say that some kinds of actions are always wrong. It is quite true that considered as a moral action, it would be wrong to blame someone if they did not have any other option. But that situation would be a situation where no reasonable person would act otherwise. And you do not blame someone for doing something that all reasonable people would do. You blame them in a situation where reasonable people would do otherwise: there are reasons for doing something different, but they did not act on those reasons.

But it is not the case that blame or moral responsibility depends on whether or not there is a physically possible alternative, because to consider physical alternatives is simply to speak of the action as a being in itself, and not as a moral act at all.

 

Quantum Mechanics and Libertarian Free Will

In a passage quoted in the last post, Jerry Coyne claims that quantum indeterminacy is irrelevant to free will: “Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own ‘will.'”

Coyne seems to be thinking that since quantum indeterminism has fixed probabilities in any specific situation, the result for human behavior would necessarily be like our second imaginary situation in the last post. There might be a 20% chance that you would randomly do X, and an 80% chance that you would randomly do Y, and nothing can affect these probabilities. Consequently you cannot be morally responsible for doing X or for doing Y, nor should you be praised or blamed for them.

Wait, you might say. Coyne explicitly favors praise and blame in general. But why? If you would not praise or blame someone doing something randomly, why should you praise or blame someone doing something in a deterministic manner? As explained in the last post, the question is whether reasons have any influence on your behavior. Coyne is assuming that if your behavior is deterministic, it can still be influenced by reasons, but if it is indeterministic, it cannot be. But there is no reason for this to be the case. Your behavior can be influenced by reasons whether it is deterministic or not.

St. Thomas argues for libertarian free will on the grounds that there can be reasons for opposite actions:

Man does not choose of necessity. And this is because that which is possible not to be, is not of necessity. Now the reason why it is possible not to choose, or to choose, may be gathered from a twofold power in man. For man can will and not will, act and not act; again, he can will this or that, and do this or that. The reason of this is seated in the very power of the reason. For the will can tend to whatever the reason can apprehend as good. Now the reason can apprehend as good, not only this, viz. “to will” or “to act,” but also this, viz. “not to will” or “not to act.” Again, in all particular goods, the reason can consider an aspect of some good, and the lack of some good, which has the aspect of evil: and in this respect, it can apprehend any single one of such goods as to be chosen or to be avoided. The perfect good alone, which is Happiness, cannot be apprehended by the reason as an evil, or as lacking in any way. Consequently man wills Happiness of necessity, nor can he will not to be happy, or to be unhappy. Now since choice is not of the end, but of the means, as stated above (Article 3); it is not of the perfect good, which is Happiness, but of other particular goods. Therefore man chooses not of necessity, but freely.

Someone might object that if both are possible, there cannot be a reason why someone chooses one rather than the other. This is basically the claim in the third objection:

Further, if two things are absolutely equal, man is not moved to one more than to the other; thus if a hungry man, as Plato says (Cf. De Coelo ii, 13), be confronted on either side with two portions of food equally appetizing and at an equal distance, he is not moved towards one more than to the other; and he finds the reason of this in the immobility of the earth in the middle of the world. Now, if that which is equally (eligible) with something else cannot be chosen, much less can that be chosen which appears as less (eligible). Therefore if two or more things are available, of which one appears to be more (eligible), it is impossible to choose any of the others. Therefore that which appears to hold the first place is chosen of necessity. But every act of choosing is in regard to something that seems in some way better. Therefore every choice is made necessarily.

St. Thomas responds to this that it is a question of what the person considers:

If two things be proposed as equal under one aspect, nothing hinders us from considering in one of them some particular point of superiority, so that the will has a bent towards that one rather than towards the other.

Thus for example, someone might decide to become a doctor because it pays well, or they might decide to become a truck driver because they enjoy driving. Whether they consider “what would I enjoy?” or “what would pay well?” will determine which choice they make.

The reader might notice a flaw, or at least a loose thread, in St. Thomas’s argument. In our example, what determines whether you think about what pays well or what you would enjoy? This could be yet another choice. I could create a spreadsheet of possible jobs and think, “What should I put on it? Should I put the pay? Or should I put what I enjoy?” But obviously the question about necessity will simply be pushed back, in this case. Is this choice itself determinate or indeterminate? And what determines what choice I make in this case? Here we are discussing an actual temporal series of thoughts, and it absolutely must have a first member, since human life has a beginning in time. Consequently there will have to be a point where, if there is the possibility of “doing A for reason B” and “doing C for reason D,” it cannot be any additional consideration that determines which one is done.

Now it is possible at this point that St. Thomas is mistaken. It might be that the hypothesis that both were “really” possible is mistaken, and something does determine one rather than the other with “necessity.” It is also possible that he is not mistaken. Either way, human reasons do not influence the determination, because reason B and/or reason D are the first reasons considered, by hypothesis (if they were not, we would simply push back the question.)

At this point someone might consider this lack of the influence of reasons to imply that people are not morally responsible for doing A or for doing C. The problem with this is that if you do something without a reason (and without potentially being influenced by a reason), then indeed you would not be morally responsible. But the person doing A or C is not uninfluenced by reasons. They are influenced by reason B, or by reason D. Consequently, they are responsible for their specific action, because they do it for a reason, despite the fact that there is some other general issue that they are not responsible for.

What influence could quantum indeterminacy have here? It might be responsible for deciding between “doing A for reason B” and “doing C for reason D.” And as Coyne says, this would be “simple randomness,” with fixed probabilities in any particular situation. But none of this would prevent this from being a situation that would include libertarian free will, since libertarian free will is precisely nothing but the situation where there are two real possibilities: you might do one thing for one reason, or another thing for another reason. And that is what we would have here.

Does quantum mechanics have this influence in fact, or is this just a theoretical possibility? It very likely does. Some argue that it probably doesn’t, on the grounds that quantum mechanics does not typically seem to imply much indeterminacy for macroscopic objects. The problem with this argument is that the only way of knowing that quantum indeterminacy rarely leads to large scale differences is by using humanly designed items like clocks or computers. And these are specifically designed to be determinate: whenever our artifact is not sufficiently determinate and predictable, we change the design until we get something predictable. If we look at something in nature uninfluenced by human design, like a waterfall, its details are highly unpredictable to us. Which drop of water will be the most distant from this particular point one hour from now? There is no way to know.

But how much real indeterminacy is in the waterfall, or in the human brain, due to quantum indeterminacy? Most likely nobody knows, but it is basically a question of timescales. Do you get a great deal of indeterminacy after one hour, or after several days? One way or another, with the passage of enough time, you will get a degree of real indeterminacy as high as you like. The same thing will be equally true of human behavior. We often notice, in fact, that at short timescales there is less indeterminacy than we subjectively feel. For example, if someone hesitates to accept an invitation, in many situations, others will know that the person is very likely to decline. But the person feels very uncertain, as though there were a 50/50 chance of accepting or declining. The real probabilities might be 90/10 or even more slanted. Nonetheless, the question is one of timescales, not of whether or not there is any indeterminacy. That there is some is basically settled; it will apply to human behavior, and there is little reason to doubt that it applies at relatively short timescales compared to the timescales at which it applies to clocks, computers, and other things designed with predictability in mind.

In this sense, quantum indeterminacy strongly suggests that St. Thomas is basically correct about libertarian free will.

On the other hand, Coyne is also right about something here. While such “randomness” does not remove moral responsibility, nor the fact that people do things for reasons, nor the fittingness of praise and blame as responses to actions done for reasons, Coyne correctly notices that it does not add to anyone’s responsibility. If there is no human reason for the fact that a person did A for reason B rather than C for reason D, this makes their actions less intelligible, and thus less subject to responsibility. In other words, the “libertarian” part of libertarian free will does not make the will more truly a will, but less truly. In this respect, Coyne is right. This, however, is unrelated to quantum mechanics or to any particular scientific account. The thoughtful person can understand this simply from general considerations about what it means to act for a reason.

Causality and Moral Responsibility

Consider two imaginary situations:

(1) In the first situation, people are such that when someone sees a red light, they immediately go off and kill someone. Nothing can be done to prevent this, and no intention or desire to do otherwise makes any difference.

In this situation, killing someone after you have seen a red light is not blamed, since it cannot be avoided, but we blame people who show red lights to others. Such people are arrested and convicted as murderers.

(2) In the second situation, people are such that when someone sees a red light, there is a 5% chance they will go off and immediately kill someone, and a 95% chance they will behave normally. Nothing can change this probability: it does not matter whether the person is wicked or virtuous or what their previous attitude to killing was.

In this situation, again, we do not blame people who end up killing someone, but we call them unlucky. We do however blame people who show others red lights, and they are arrested and convicted of second degree murder, or in some cases manslaughter.

Some people would conclude from this that moral responsibility is incoherent: whether the world is deterministic or not, moral responsibility is impossible. Jerry Coyne defends this position in numerous places, as for example here:

We’ve taken a break from the many discussions on this site about free will, but, cognizant of the risks, I want to bring it up again. I think nearly all of us agree that there’s no dualism involved in our decisions: they’re determined completely by the laws of physics. Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own “will.”

Coyne would perhaps say that “free will” embodies a contradiction much in the way that “square circle” does. “Will” implies a cause, and thus something deterministic. “Free” implies indeterminism, and thus no cause.

In many places Coyne asserts that this implies that moral responsibility does not exist, as for example here:

This four-minute video on free will and responsibility, narrated by polymath Raoul Martinez, was posted by the Royal Society for the Encouragement of the Arts, Manufactures, and Commerce (RSA). Martinez’s point is one I’ve made here many times, and will surely get pushback from: determinism rules human behavior, and our “choices” are all predetermined by our genes and environment. To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

I think that Coyne is very wrong about the meaning of free will, somewhat wrong about responsibility, and likely wrong about the consequences of his views for society (e.g. he believes that his view will lead to more humane treatment of prisoners. There is no particular reason to expect this.)

The imaginary situations described in the initial paragraphs of this post do not imply that moral responsibility is impossible, but they do tell us something. In particular, they tell us that responsibility is not directly determined by determinism or its lack. And although Coyne says that “moral responsibility” implies indeterminism, surely even Coyne would not advocate blaming or punishing the person who had the 5% chance of going and killing someone. And the reason is clear: it would not “reinforce good behavior” or be “salubrious for society.” By the terms set out, it would make no difference, so blaming or punishing would be pointless.

Coyne is right that determinism does not imply that punishment is pointless. And he also recognizes that indeterminism does not of itself imply that anyone is responsible for anything. But he fails here to put two and two together: just as determinism implies neither that punishment is pointless nor that it has a point, so indeterminism likewise implies neither. The conclusion he should draw is not that moral responsibility is meaningless, but that it is independent of both determinism and indeterminism; that is, that both deterministic compatibilism and libertarian free will allow for moral responsibility.

So what is required for praise and blame to have a point? Elsewhere we discussed C.S. Lewis’s claim that something can have a reason or a cause, but not both. In a sense, the initial dilemma in this post can be understood as a similar argument. Either our behavior has deterministic causes, or it has indeterministic causes; therefore it does not have reasons; therefore moral responsibility does not exist.

On the other hand, if people do have reasons for their behavior, there can be good reasons for blaming people who do bad things, and for punishing them. Namely, since those people are themselves acting for reasons, they will be less likely in the future to do those things, and likewise other people, fearing punishment and blame, will be less likely to do them.

As I said against Lewis, reasons do not exclude causes, but require them. Consequently what is necessary for moral responsibility are causes that are consistent with having reasons; one can easily imagine causes that are not consistent with having reasons, as in the imaginary situations described, and such causes would indeed exclude responsibility.