C.S. Lewis on Punishment

C.S. Lewis discusses a certain theory of punishment:

In England we have lately had a controversy about Capital Punishment. … My subject is not Capital Punishment in particular, but that theory of punishment in general which the controversy showed to be almost universal among my fellow-countrymen. It may be called the Humanitarian theory. Those who hold it think that it is mild and merciful. In this I believe that they are seriously mistaken. I believe that the “Humanity” which it claims is a dangerous illusion and disguises the possibility of cruelty and injustice without end. I urge a return to the traditional or Retributive theory not solely, not even primarily, in the interests of society, but in the interests of the criminal.

According to the Humanitarian theory, to punish a man because he deserves it, and as much as he deserves, is mere revenge, and, therefore, barbarous and immoral. It is maintained that the only legitimate motives for punishing are the desire to deter others by example or to mend the criminal. When this theory is combined, as frequently happens, with the belief that all crime is more or less pathological, the idea of mending tails off into that of healing or curing and punishment becomes therapeutic. Thus it appears at first sight that we have passed from the harsh and self-righteous notion of giving the wicked their deserts to the charitable and enlightened one of tending the psychologically sick. What could be more amiable? One little point which is taken for granted in this theory needs, however, to be made explicit. The things done to the criminal, even if they are called cures, will be just as compulsory as they were in the old days when we called them punishments. If a tendency to steal can be cured by psychotherapy, the thief will no doubt be forced to undergo treatment. Otherwise, society cannot continue.

My contention is that this doctrine, merciful though it appears, really means that each one of us, from the moment he breaks the law, is deprived of the rights of a human being.

The reason is this. The Humanitarian theory removes from Punishment the concept of Desert. But the concept of Desert is the only connecting link between punishment and justice. It is only as deserved or undeserved that a sentence can be just or unjust. I do not here contend that the question “Is it deserved?” is the only one we can reasonably ask about a punishment. We may very properly ask whether it is likely to deter others and to reform the criminal. But neither of these two last questions is a question about justice. There is no sense in talking about a “just deterrent” or a “just cure”. We demand of a deterrent not whether it is just but whether it will deter. We demand of a cure not whether it is just but whether it succeeds. Thus when we cease to consider what the criminal deserves and consider only what will cure him or deter others, we have tacitly removed him from the sphere of justice altogether; instead of a person, a subject of rights, we now have a mere object, a patient, a “case”.

Later in the essay, he gives some examples of how the Humanitarian theory will make things worse, as in the following case:

The immediate starting point of this article was a letter I read in one of our Leftist weeklies. The author was pleading that a certain sin, now treated by our laws as a crime, should henceforward be treated as a disease. And he complained that under the present system the offender, after a term in gaol, was simply let out to return to his original environment where he would probably relapse. What he complained of was not the shutting up but the letting out. On his remedial view of punishment the offender should, of course, be detained until he was cured. And of course the official straighteners are the only people who can say when that is. The first result of the Humanitarian theory is, therefore, to substitute for a definite sentence (reflecting to some extent the community’s moral judgment on the degree of ill-desert involved) an indefinite sentence terminable only by the word of those experts–and they are not experts in moral theology nor even in the Law of Nature–who inflict it. Which of us, if he stood in the dock, would not prefer to be tried by the old system?

This post will make three points:

(1) The “Humanitarian” theory is basically correct about the purpose of punishment.

(2) C.S. Lewis is right that there are good reasons to talk about justice and about what someone deserves or does not deserve. Such considerations are, as he supposes, essential to a system of justice. Lewis is also right to suppose that many supporters of the Humanitarian theory, despite being factually correct about the purpose of punishment, are mistaken in opposing such talk as cruel and immoral.

(3) Once the Humanitarian theory is corrected in such a way as to incorporate the notion of “just deserts”, Lewis’s objections fail.

Consider the first point, the purpose of punishment. There was already some discussion of this in a previous post. In a sense, everyone already knows that Humanitarians are right about the basic purpose of punishment, including C.S. Lewis. Lewis points out the obvious fact himself: whatever you call them and however you explain them, punishments for crime are compulsory in a society because “otherwise, society cannot continue.” But why cannot society continue without punishment? What supposedly would happen if you did not have any punishments? What would actually happen if a government credibly declared that it would never again punish anything?

What would actually happen, of course, is that this would amount to a declaration that the government was dissolving itself, and someone else would take over and establish new crimes and new punishments, either at the same level of generality as the original government, or at more local levels (e.g., perhaps each town would become a city-state). In any case each of the new governments would still have punishments, so you would not have succeeded in abolishing punishment.

What happens in the imaginary situation where you do succeed, where no one else takes over? This presumably would be a Hobbesian “state of nature,” which is not a society at all. In other words, the situation simply does not count as a society at all, unless certain rules are followed pretty consistently. And those rules will not be followed consistently without punishments. So it is easy to see why punishment exists: to make sure that those rules are followed, generally speaking. Since rules are meant to make some things happen and prevent other things, punishment is simply to make sure that the rules actually function as rules. But this is exactly what the Humanitarian theory says is the purpose of punishment: to make others less likely to break the rules, and to make the one who has already broken the rules less likely to break them in the future.

Thus C.S. Lewis himself is implicitly recognizing that the Humanitarians are basically right about the purpose of punishment, in acknowledging that punishment is necessary for the very existence of society.

Let’s go on to the second point, the idea of just deserts. C.S. Lewis is right that many proponents of the Humanitarian view either believe that the idea is absurd, or that if there is such a thing as deserving something, no one can deserve something bad, or that if people can deserve things, this is not really a relevant consideration for a justice system. For example, it appears that Kelsey Piper, blogging at The Unit of Caring, believes something along these lines; here she has a pretty reasonable post responding to criticisms of the theory analogous to those of C.S. Lewis.

I will approach this by saying a few things about what a law is in general. St. Thomas defines law: “It is nothing else than an ordinance of reason for the common good, made by him who has care of the community, and promulgated.” But let’s drop the careful formulation and the conditions, as necessary as they may be. St. Thomas’s definition is simply a more detailed account of what everyone knows: a law is a rule that people invent for the benefit of a community.

Is there such a thing as an unjust law? In St. Thomas’s account, in a sense yes, and in a sense no. “For the common good” means that the law is beneficial. In that sense, if the law is “unjust,” it is harmful, and thus it is not for the common good. And in that sense it does not satisfy the definition of a law, and so is not a law at all. But obviously ordinary people will call it a law anyway, and in that way it is an unjust law, because it is unsuited to the purpose of a law.

Now here’s the thing. An apparent rule is not really a rule at all unless it tends to make something happen. In the case that we are talking about, namely human law, that generally means that laws require penalties for being broken in order to be laws at all. It is true that in a society with an extremely strong respect for law, it might occasionally be possible to make a law without establishing any specific penalty, and still have that law followed. The community would still need to leave itself the option of establishing a penalty; otherwise it would just be advice rather than a law.

This causes a slight problem. The purpose of a law is to make sure that certain things are done and others avoided, and the reason for penalties is to back up this purpose. But when someone breaks the law, the law has already failed. The very thing the law was meant to prevent has already happened. And what now? Should the person be punished? Why? To prevent the law from being broken? It has already been broken. So we cannot prevent it from being broken. And the thing is, punishment is something bad. So to inflict the punishment now, after the crime has already been committed, seems like just stacking one bad thing on top of another.

At this point the “Retributive” theory of justice will chime in. “We should still inflict the punishment because it is just, and the criminal deserves it.”

This is the appeal of the Humanitarian’s condemnation of the retributive theory. The Retributive theory, the Humanitarian will say, simply asserts that something bad, namely the punishment, is in this situation something good, by bringing in the idea of “justice.” But this is a contradiction: something bad is bad by definition, and cannot be good.

The reader is perhaps beginning to understand the placement of the previous post. A law is established, with a penalty for being broken, in order to make certain things happen. This is like intending to drink the toxin. But if someone breaks the law, what is the point of inflicting the punishment? And the next morning, what is the point of drinking the toxin in the afternoon, when the money is already received or not? There is a difference of course, because in this case the dilemma only comes up because the law has been broken. We could make the cases more analogous, however, by stipulating in the case of Kavka’s toxin that the billionaire offers this deal: “The million will be found in your account, with a probability of 99.99%, if and only if you intend to drink the toxin only if the million is not found in your account (which will happen only in the unlucky 0.01% of cases), and you do not need to drink or intend to drink in the situation where the million is found in your account.” In this situation, the person might well reason thus:

If the morning comes and the million is not in my account, why on earth would I drink the toxin? This deal is super unfair.

Nonetheless, as in the original deal, there is one and only one way to get the million: namely, by planning to drink the toxin in that situation, and by planning not to reconsider, no matter what. As in the case of law, the probability factor that I added means that it is possible not to get the million, although you probably will. But the person who formed this intention will go through with it and drink the toxin, unless they reconsider; and they had the definite intention of not reconsidering.
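The committed person's position can be made concrete with a back-of-the-envelope expected-value sketch. The 99.99% probability is the one stipulated in the deal above; the utility figures (a million units for the money, ten thousand for the unpleasantness of drinking the toxin) are hypothetical numbers chosen purely for illustration:

```python
# Expected-value sketch of the modified Kavka deal. The probability is
# from the stipulated deal; the utility figures are hypothetical.
P_MILLION = 0.9999

def expected_value_commit(million=1_000_000, toxin_cost=10_000):
    # Committing: with p = 0.9999 the million appears and no toxin is
    # drunk; in the unlucky 0.01% of cases there is no million and the
    # committed person drinks the toxin anyway.
    return P_MILLION * million + (1 - P_MILLION) * (-toxin_cost)

def expected_value_refuse():
    # Never forming the intention: no million, no toxin.
    return 0.0

print(expected_value_commit())  # ≈ 999,899: committing dominates
print(expected_value_refuse())
```

On any such assignment of numbers, forming the intention wins overwhelmingly in expectation, even though the intention occasionally binds you to an act that is pointless at the moment it must be performed.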

The situations are now more analogous, but there is still an additional difference, one that makes it even easier to decide to follow the law than to drink the toxin. The only reason to commit to drinking the toxin was to get the million, which, in our current situation, has already failed. In the case of the law, likewise, one purpose was to prevent the criminal from performing a certain action, and that purpose has already failed. But the law also has the purpose of preventing them from doing it in the future, and of preventing others from doing it. So there are additional motivations for carrying out the law.

We can leave the additional difference to the side for now, however. The point would be essentially valid even if you made a law to prevent one particular act, and that act ended up being done. The retributivist would say, “Ok, so applying the punishment at this point will not prevent the thing it was meant to prevent. But it is just, and the criminal deserves it, and we should still inflict it.” And they are right: the whole idea of establishing the rule included the idea that the punishment would actually be carried out, in this situation. There was a rule against reconsidering the rule, just as the fellow in the situation with the toxin planned not to reconsider their plan.

What is meant when it is said that a punishment is “just,” and that the criminal “deserves it,” then is simply that it is what is required by the rules we have established, and that those rules are reasonable ones.

Someone will object here. It seems that this cannot be true, because some punishments are wicked and unjust even though there were rules establishing them. And it seems that this is because people simply do not deserve those things: so there must be such a thing as “what they deserve,” in itself and independent of any rules. But this is where we must return to the point made above about just and unjust laws. One hears, for example, of cases in which people were sentenced to death for petty theft. We can agree that this is unjust in itself: but this is precisely because the rule, “someone who steals food should be killed,” is not a reasonable rule which will benefit the community. You might have something good in mind for it, namely to prevent stealing, but if you carry out the penalty on even one occasion, you have done more harm than all the stealing put together. The Humanitarians are right that the thing inflicted in a punishment is bad, and remains bad. It does not become something good in that situation. And this is precisely why it needs some real proportion to the crime.

We can analyze the situation in two ways, from the point of view of the State, considered as though a kind of person, and from the point of view of the person who carries out the law. The State makes a kind of promise to inflict a punishment for some crimes, in such a way as to minimize the total harm of both the crimes and their punishment. Additionally, to some extent it promises not to reconsider this in a situation where a crime is actually committed. “To some extent” here is of course essential: such rules are not and should not be absolutely rigid. If the crime is actually committed, the State is in a situation like our person who finds himself without the million and having committed to drink the toxin in that situation: the normal result of the situation will be that the State inflicts the punishment, and the person drinks the toxin, without any additional consideration of motivations or reasons.

From the point of view of the individual, he carries out the sentence “because it is just,” i.e. because it is required by reasonable rules which we have established for the good of the community. And that, i.e. carrying out reasonable laws, is a good thing, even though the material content includes something bad. The moral object of the executioner is the fulfillment of justice, not the killing of a person.

We have perhaps already pointed the way to the last point, namely that with the incorporation of the idea of justice, C.S. Lewis’s criticisms fail. Lewis argues that if the purpose of punishment is medicinal, then it is in principle unlimited: but this is not true even of medicine. No one would take medicine which would cause more harm than the disease, nor would it be acceptable to compel someone else to take such medicine.

More importantly, Lewis’s criticisms play off the problems that are caused by believing that one needs to consider at every point, “will the consequences of this particular punishment or action be good or not?” This is not necessary because this is not the way law works, despite the fact that the general purpose is the one supposed. Law only works because to some extent it promises not to reconsider, like our fellow in the case of Kavka’s toxin. Just as you are wrong to focus on whether “drinking the toxin right now will harm me and not benefit me”, so the State would be wrong to focus too much on the particular consequences of carrying out the law right now, as opposed to the general consequences of the general law.

Thus for example Lewis supposes rulers considering the matter in an entirely utilitarian way:

But that is not the worst. If the justification of exemplary punishment is not to be based on desert but solely on its efficacy as a deterrent, it is not absolutely necessary that the man we punish should even have committed the crime. The deterrent effect demands that the public should draw the moral, “If we do such an act we shall suffer like that man.” The punishment of a man actually guilty whom the public think innocent will not have the desired effect; the punishment of a man actually innocent will, provided the public think him guilty. But every modern State has powers which make it easy to fake a trial. When a victim is urgently needed for exemplary purposes and a guilty victim cannot be found, all the purposes of deterrence will be equally served by the punishment (call it “cure” if you prefer) of an innocent victim, provided that the public can be cheated into thinking him guilty. It is no use to ask me why I assume that our rulers will be so wicked.

As said, this is not the way law works. The question will be about which laws are reasonable and beneficial in general, not about whether such and such particular actions are beneficial in particular cases. Consider a proposed law formulated with such an idea in mind:

When the ruling officials believe that it is urgently necessary to deter people from committing a crime, and no one can be found who has actually committed it, the rulers are authorized to deceive the public into believing that an innocent man has committed the crime, and to punish that innocent man.

It should not be necessary to make a long argument that as a general rule, this does not serve the good of a community, regardless of what might happen in particular cases. In this way it is quite right to say that this is unjust in itself. This does not, however, establish that “what someone deserves” has any concrete content which is not established by law.

As a sort of footnote to this post, we might note that “deserts” are sometimes extended to natural consequences in much the way “law” is extended to laws of nature, mathematics, or logic. For example, Bryan Caplan distinguishes “deserving” and “undeserving” poor:

I propose to use the same standard to identify the “deserving” and “undeserving” poor.  The deserving poor are those who can’t take – and couldn’t have taken – reasonable steps to avoid poverty. The undeserving poor are those who can take – or could have taken – reasonable steps to avoid poverty.  Reasonable steps like: Work full-time, even if the best job you can get isn’t fun; spend your money on food and shelter before you get cigarettes or cable t.v.; use contraception if you can’t afford a child.  A simple test of “reasonableness”: If you wouldn’t accept an excuse from a friend, you shouldn’t accept it from anyone.

This is rather different from the sense discussed in this post, but you could view it as an extension of it. It is a rule (of mathematics, really) that “if you spend all of your money you will not have any left,” and we probably do not need to spend much effort trying to change this situation, considered in general, even if we might want to change it for an individual.

Tautologies Not Trivial

In mathematics and logic, one sometimes speaks of a “trivial truth” or “trivial theorem”, referring to a tautology. Thus for example in this Quora question, Daniil Kozhemiachenko gives this example:

The fact that all groups of order 2 are isomorphic to one another and commutative entails that there are no non-Abelian groups of order 2.

This statement is a tautology because “Abelian group” here just means one that is commutative: the statement is like the customary example of asserting that “all bachelors are unmarried.”
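The group-theoretic fact itself can be checked mechanically. This minimal sketch brute-forces every candidate multiplication table on a two-element set with a fixed identity, keeps the tables satisfying the group axioms, and confirms that there is exactly one group of order 2 and that it is commutative:

```python
from itertools import product

# Elements: 0 is the identity, 1 is the other element. With the
# identity fixed, the only free entry in the table is 1 * 1.
elements = (0, 1)

def is_group(op):
    # Identity: 0 * x == x * 0 == x for all x.
    if any(op[(0, x)] != x or op[(x, 0)] != x for x in elements):
        return False
    # Inverses: every x has some y with x * y == 0.
    if any(all(op[(x, y)] != 0 for y in elements) for x in elements):
        return False
    # Associativity, checked exhaustively.
    return all(op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
               for a, b, c in product(elements, repeat=3))

groups = []
for val in elements:  # try both possible values of 1 * 1
    op = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): val}
    if is_group(op):
        groups.append(op)

print(len(groups))  # 1: a single group of order 2
print(all(op[(a, b)] == op[(b, a)]
          for op in groups for a in elements for b in elements))  # True
```

The candidate table with 1 * 1 == 1 fails the inverse axiom, so only the table of Z/2 survives, and it is trivially commutative.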

Some extend this usage of “trivial” to refer to all statements that are true in virtue of the meaning of the terms, sometimes called “analytic.” The effect of this is to say that all statements that are logically necessary are trivial truths. An example of this usage can be seen in this paper by Carin Robinson. Robinson says at the end of the summary:

Firstly, I do not ask us to abandon any of the linguistic practises discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.

While the word “trivial” does have a corresponding Latin form that means ordinary or commonplace, the English word seems to be taken mainly from the “trivium” of grammar, rhetoric, and logic. This would seem to make some sense of calling logical necessities “trivial,” in the sense that they pertain to logic. Still, even here something is missing, since Robinson wants to include the truths of mathematics as trivial, and classically these did not pertain to the aforesaid trivium.

Nonetheless, overall Robinson’s intention, and presumably that of others who speak this way, is to suggest that such things are trivial in the English sense of “unimportant.” That is, they may be important tools, but they are not important for understanding. This is clear at least in our example: Robinson calls them trivial because “there are no known/knowable facts about logic.” Logical necessities tell us nothing about reality, and therefore they provide us with no knowledge. They are true by the meaning of the words, and therefore they cannot be true by reason of facts about reality.

Things that are logically necessary are not trivial in this sense. They are important, both in a practical way and directly for understanding the world.

Consider the failure of the Mars Climate Orbiter:

On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km (140 mi) on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 km (93 to 106 mi). Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Post-failure calculations showed that the spacecraft was on a trajectory that would have taken the orbiter within 57 kilometers of the surface, where the spacecraft likely skipped violently on the uppermost atmosphere and was either destroyed in the atmosphere or re-entered heliocentric space.[1]

The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton seconds – to update the predicted position of the spacecraft.

It is presumably an analytic truth that the units defined in one way are unequal to the units defined in the other. But it was ignoring this analytic truth that was the primary cause of the space probe’s failure. So it is evident that analytic truths can be extremely important for practical purposes.
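The size of that particular mistake is easy to quantify. A minimal sketch, using the standard conversion factor of 4.448222 newton-seconds per pound-force second (the example impulse value is hypothetical):

```python
# One pound-force second is 4.448222 newton-seconds (standard
# conversion factor). Ground software reported impulse in lbf·s;
# the trajectory software read the same raw number as if it were N·s.
LBF_S_TO_N_S = 4.448222

def misread_impulse(impulse_lbf_s):
    """Return (true impulse in N·s, value as actually read in N·s)."""
    true_impulse = impulse_lbf_s * LBF_S_TO_N_S
    misread_as = impulse_lbf_s  # the number was taken at face value
    return true_impulse, misread_as

true_val, misread = misread_impulse(100.0)  # hypothetical firing
print(true_val / misread)  # ~4.45: thruster effects underestimated
```

Every modeled firing thus under-counted the thrusters' true effect by a factor of about 4.45, and the accumulated error produced the fatally low trajectory.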

Such truths can also be important for understanding reality. In fact, they are typically more important for understanding than other truths. The argument against this is that if something is necessary in virtue of the meaning of the words, it cannot be telling us something about reality. But this argument is wrong for one simple reason: words and meaning themselves are both elements of reality, and so they do tell us something about reality, even when the truth is fully determinate given the meaning.

If one accepts the mistaken argument, in fact, sometimes one is led even further. Logically necessary truths cannot tell us anything important for understanding reality, since they are simply facts about the meaning of words. On the other hand, anything which is not logically necessary is in some sense accidental: it might have been otherwise. But accidental things that might have been otherwise cannot help us to understand reality in any deep way: it tells us nothing deep about reality to note that there is a tree outside my window at this moment, when this merely happens to be the case, and could easily have been otherwise. Therefore, since neither logically necessary things, nor logically contingent things, can help us to understand reality in any deep or important way, such understanding must be impossible.

It is fairly rare to make such an argument explicitly, but it is a common implication of many arguments that are actually made or suggested, or it at least influences the way people feel about arguments and understanding. For example, consider this comment on an earlier post. Timocrates suggests that (1) if you have a first cause, it would have to be a brute fact, since it doesn’t have any other cause, and (2) describing reality can’t tell us any reasons but is “simply another description of how things are.” The suggestion behind these objections is that the very idea of understanding is incoherent. As I said there in response, it is true that every true statement is in some sense “just a description of how things are,” but that was what a true statement was meant to be in any case. It surely was not meant to be a description of how things are not.

That “analytic” or “tautologous” statements can indeed provide a non-trivial understanding of reality can also easily be seen by example. Some examples from this blog:

Good and being. The convertibility of being and goodness is “analytic,” in the sense that carefully thinking about the meaning of desire and the good reveals that a universe where existence as such was bad, or even failed to be good, is logically impossible. In particular, it would require a universe where there is no tendency to exist, and this is impossible given that it is posited that something exists.

Natural selection. One of the most important elements of Darwin’s theory of evolution is the following logically necessary statement: the things that have survived are more likely to be the things that were more likely to survive, and less likely to be the things that were less likely to survive.

Limits of discursive knowledge. Knowledge that uses distinct thoughts and concepts is necessarily limited by issues relating to self-reference. It is clear that this is both logically necessary, and tells us important things about our understanding and its limits.

Knowledge and being. Kant rightly recognized a sense in which it is logically impossible to “know things as they are in themselves,” as explained in this post. But as I said elsewhere, the logically impossible assertion that knowledge demands an identity between the mode of knowing and the mode of being is the basis for virtually every sort of philosophical error. So a grasp on the opposite “tautology” is extremely useful for understanding.
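Returning to the natural-selection example in the list above: tautologous or not, the statement does visible work even in a toy simulation. The two survival probabilities (0.9 and 0.1) and the population sizes are arbitrary illustrative assumptions:

```python
import random

random.seed(0)  # reproducible illustration

# Two hypothetical types: "hardy" individuals survive with probability
# 0.9, "fragile" ones with probability 0.1. Run one round of survival.
population = [0.9] * 1000 + [0.1] * 1000
survivors = [p for p in population if random.random() < p]

hardy = sum(1 for p in survivors if p == 0.9)
fragile = len(survivors) - hardy
print(hardy > fragile)  # True: the survivors skew heavily hardy
```

That the survivors skew toward the survivable is guaranteed in expectation by the meaning of the terms; yet this is precisely the engine of the whole explanatory theory.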


“Moral” Responsibility

In a passage quoted here, Jerry Coyne objected to the “moral” in “moral responsibility”:

To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

Suppose someone completely insane happens to kill another person, under the mistaken belief that they are doing something completely different. In such a case, “an identifiable person did this or that good or bad action,” and yet we do not say they are responsible, much less blame such a person; rather we may subject them to physical restraints, but we no more blame them than we blame the weather for the deaths that it occasionally inflicts on people. In other words, Coyne’s definition does not even work for “responsibility,” let alone moral responsibility.

Moral action has a specific meaning: something that is done, considered not merely as an action in itself, but in comparison with the good proposed by human reason. Consequently we have moral action only when we have something voluntarily done by a human being for a reason, or (if without a reason) with the voluntary omission of the consideration of reasons. In exactly the same situations we have moral responsibility: namely, someone voluntarily did something good, or someone voluntarily did something bad.

Praise and blame are added precisely because people are acting for reasons, and given that people tend to like praise and dislike blame, these elements, if rightly applied, will make good things better, and thus more likely to be pursued, and bad things worse, and thus more likely to be avoided. As an aside, this also suggests occasions when it is a bad idea to blame someone for something bad; namely, when blame is unlikely to reduce the bad activity, or likely to reduce it only very little, since in this case you are simply making things worse, period.

Stop, Coyne and others will say. Even if we agree with the point about praise and blame, we do not agree about moral responsibility, unless determinism is false. And nothing in the above paragraphs even refers to determinism or its opposite, and thus the above cannot be a full account of moral responsibility.

The above is, in fact, a basically complete account of moral responsibility. Although determinism is false, as was said in the linked post, its falsity has nothing to do with the matter one way or another.

The confusion about this results from a confusion between an action as a being in itself, and an action as moral, namely as considered by reason. This distinction was discussed here while considering what it means to say that some kinds of actions are always wrong. It is quite true that considered as a moral action, it would be wrong to blame someone if they did not have any other option. But that situation would be a situation where no reasonable person would act otherwise. And you do not blame someone for doing something that all reasonable people would do. You blame them in a situation where reasonable people would do otherwise: there are reasons for doing something different, but they did not act on those reasons.

But it is not the case that blame or moral responsibility depends on whether or not there is a physically possible alternative, because to consider physical alternatives is simply to speak of the action as a being in itself, and not as a moral act at all.


Causality and Moral Responsibility

Consider two imaginary situations:

(1) In the first situation, people are such that when someone sees a red light, they immediately go off and kill someone. Nothing can be done to prevent this, and no intention or desire to do otherwise makes any difference.

In this situation, people are not blamed for killing someone after seeing a red light, since the killing cannot be avoided, but we blame people who show red lights to others. Such people are arrested and convicted as murderers.

(2) In the second situation, people are such that when someone sees a red light, there is a 5% chance they will go off and immediately kill someone, and a 95% chance they will behave normally. Nothing can change this probability: it does not matter whether the person is wicked or virtuous or what their previous attitude to killing was.

In this situation, again, we do not blame people who end up killing someone, but we call them unlucky. We do, however, blame people who show others red lights, and they are arrested and convicted of second-degree murder, or in some cases manslaughter.

Some people would conclude from this that moral responsibility is incoherent: whether the world is deterministic or not, moral responsibility is impossible. Jerry Coyne defends this position in numerous places, as for example here:

We’ve taken a break from the many discussions on this site about free will, but, cognizant of the risks, I want to bring it up again. I think nearly all of us agree that there’s no dualism involved in our decisions: they’re determined completely by the laws of physics. Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own “will.”

Coyne would perhaps say that “free will” embodies a contradiction much in the way that “square circle” does. “Will” implies a cause, and thus something deterministic. “Free” implies indeterminism, and thus no cause.

In many places Coyne asserts that this implies that moral responsibility does not exist, as for example here:

This four-minute video on free will and responsibility, narrated by polymath Raoul Martinez, was posted by the Royal Society for the Encouragement of the Arts, Manufactures, and Commerce (RSA). Martinez’s point is one I’ve made here many times, and will surely get pushback from: determinism rules human behavior, and our “choices” are all predetermined by our genes and environment. To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

I think that Coyne is very wrong about the meaning of free will, somewhat wrong about responsibility, and likely wrong about the consequences of his views for society (e.g. he believes that his view will lead to more humane treatment of prisoners, but there is no particular reason to expect this).

The imaginary situations described in the initial paragraphs of this post do not imply that moral responsibility is impossible, but they do tell us something. In particular, they tell us that responsibility is not directly determined by determinism or its lack. And although Coyne says that “moral responsibility” implies indeterminism, surely even Coyne would not advocate blaming or punishing the person who had the 5% chance of going and killing someone. And the reason is clear: it would not “reinforce good behavior” or be “salubrious for society.” By the terms set out, it would make no difference, so blaming or punishing would be pointless.

Coyne is right that determinism does not imply that punishment is pointless. And he also recognizes that indeterminism does not of itself imply that anyone is responsible for anything. But he fails to put two and two together: determinism implies neither that punishment is pointless nor that it has a point, and indeterminism likewise implies neither. The conclusion he should draw is not that moral responsibility is meaningless, but that it is independent of both determinism and indeterminism; that is, that both deterministic compatibilism and libertarian free will allow for moral responsibility.

So what is required for praise and blame to have a point? Elsewhere we discussed C.S. Lewis’s claim that something can have a reason or a cause, but not both. In a sense, the initial dilemma in this post can be understood as a similar argument. Either our behavior has deterministic causes, or it has indeterministic causes; therefore it does not have reasons; therefore moral responsibility does not exist.

On the other hand, if people do have reasons for their behavior, there can be good reasons for blaming people who do bad things, and for punishing them. Namely, since those people are themselves acting for reasons, they will be less likely in the future to do those things, and likewise other people, fearing punishment and blame, will be less likely to do them.

As I said against Lewis, reasons do not exclude causes, but require them. Consequently what is necessary for moral responsibility are causes that are consistent with having reasons; one can easily imagine causes that are not consistent with having reasons, as in the imaginary situations described, and such causes would indeed exclude responsibility.
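The contrast between the imaginary situations and ordinary action can be put in a toy simulation. This is a minimal sketch with illustrative numbers of my own choosing (the 5% chance from situation (2), and an assumed halving of the probability each time blame is applied); it is not part of the argument, only a picture of it. A "red-light" agent acts badly with a fixed probability that blame cannot touch, while a reason-responsive agent's probability falls when blamed. Only in the second case does blame make any difference, which is exactly the condition under which blaming has a point:

```python
import random

random.seed(0)

def simulate(p_bad, responds_to_blame, trials=100_000):
    """Count bad acts over many occasions. For a reason-responsive agent,
    each act of blame halves the future probability of acting badly;
    for the red-light agent, blame changes nothing. Numbers are
    illustrative assumptions, not claims about real behavior."""
    bad_acts = 0
    p = p_bad
    for _ in range(trials):
        if random.random() < p:
            bad_acts += 1
            if responds_to_blame:
                # Blame gives the agent a reason to act otherwise next time.
                p = p * 0.5
    return bad_acts

# Situation (2): a fixed 5% chance that no blame or punishment can alter.
unresponsive = simulate(0.05, responds_to_blame=False)
# Ordinary agent: blame, functioning as a reason, lowers future probability.
responsive = simulate(0.05, responds_to_blame=True)

print("red-light agent:", unresponsive, "bad acts")
print("reason-responsive agent:", responsive, "bad acts")
```

The red-light agent commits on the order of 5,000 bad acts regardless of policy, while the responsive agent's bad acts dwindle rapidly; the difference between the two columns is the whole "point" of blame on the account given above.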

Employer and Employee Model: Truth

In the remote past, I suggested that I would someday follow up on this post. In the current post, I begin to keep that promise.

We can ask about the relationship of the various members of our company with the search for truth.

The CEO, as the predictive engine, has a fairly strong interest in truth, but only insofar as truth is frequently necessary in order to get predictive accuracy. Consequently our CEO will usually insist on the truth when it affects our expectations regarding daily life, but it will care less when we consider things remote from the senses. Additionally, the CEO is highly interested in predicting the behavior of the Employee, and it is not uncommon for falsehood to be better than truth for this purpose.

To put this in another way, the CEO’s interest in truth is instrumental: it is sometimes useful for the CEO’s true goal, predictive accuracy, but not always, and in some cases it can even be detrimental.

As I said here, the Employee is, roughly speaking, the human person as we usually think of one, and consequently the Employee has the same interest in truth that we do. I personally consider truth to be an ultimate end, and this is probably the opinion of most people, to a greater or lesser degree. In other words, most people consider truth a good thing, even apart from instrumental considerations. Nonetheless, all of us care about various things besides truth, and therefore we also occasionally trade truth for other things.

The Vice President has perhaps the least interest in truth. We could say that they too have some instrumental concern about truth. Thus for example the VP desires food, and this instrumentally requires true ideas about where food is to be found. Nonetheless, as I said in the original post, the VP is the least rational and coherent, and may easily fail to notice such a need. Thus the VP might desire the status resulting from winning an argument, so to speak, but also desire the similar status that results from ridiculing the person holding an opposing view. The frequent result is that a person believes the falsehood that ridiculing an opponent generally increases the chance that they will change their mind (e.g. see John Loftus’s attempt to justify ridicule.)

Given this account, we can raise several disturbing questions.

First, although we have said the Employee values truth in itself, can this really be true, rather than simply a mistaken belief on the part of the Employee? As I suggested in the original account, the Employee is in some way a consequence of the CEO and the VP. Consequently, if neither of these places intrinsic value on truth, how is it possible that the Employee does?

Second, even if the Employee sincerely places an intrinsic value on truth, how is this not a misplaced value? Again, if the Employee is something like a result of the others, what is good for the Employee should be what is good for the others, and thus if truth is not intrinsically good for the others, it should not be intrinsically good for the Employee.

In response to the first question, the Employee can indeed believe in the intrinsic value of truth, and of many other things to which the CEO and VP do not assign intrinsic value. This happens because, in the model as we are considering it, there is a real division of labor, even if the Employee arises historically in a secondary manner. As I said in the other post, the Employee’s beliefs are our beliefs, and the Employee can believe anything that we believe. Furthermore, the Employee can really act on such beliefs about the goodness of truth or other things, even when the CEO and VP do not have the same values. The reason for this is the same as the reason that the CEO will often go along with the desires of the VP, even though the CEO places intrinsic value only on predictive accuracy. The linked post explains, in effect, why the CEO goes along with sex, even though only the VP really wants it. In a similar way, if the Employee believes that sex outside of marriage is immoral, the CEO often goes along with avoiding such sex, even though the CEO cares about predictive accuracy, not about sex or its avoidance. Of course, in this particular case, there is a good chance of conflict between the Employee and the VP, and the CEO dislikes conflict, since conflict makes it harder to predict what the person overall will end up doing. And since the VP very rarely changes its mind in this case, the CEO will often end up encouraging the Employee to change their mind about the morality of such sex: thus one of the most frequent reasons why people abandon their religion is that it says that sex in some situations is wrong, while they still desire sex in those situations.

In response to the second question, the Employee is not wrong to suppose that truth is intrinsically valuable. The argument against this would be that the human good is based on human flourishing, and (it is claimed) we do not need truth for such flourishing, since the CEO and VP do not care about truth in itself. The problem with this is that such flourishing requires that the Employee care about truth, and even the CEO needs the Employee to care in this way, for the sake of its own goal of predictive accuracy. Consider a real-life company: the employer does not necessarily care about whether the employee is being paid, considered in itself, but only insofar as it is instrumentally useful for convincing the employee to work for the employer. But the employer does care about whether the employee cares about being paid: if the employee does not care about being paid, they will not work for the employer.

Concern for truth in itself, apart from predictive accuracy, affects us when we consider things that cannot possibly affect our future experience: thus in previous cases I have discussed the likelihood that there are stars and planets outside the boundaries of the visible universe. This is probably true; but if I did not care about truth in itself, I might as well say that the universe is surrounded by purple elephants. I do not expect any experience to verify or falsify the claim, so why not make it? But now notice the problem for the CEO: the CEO needs to predict what the Employee is going to do, including what they will say and believe. This will instantly become extremely difficult if the Employee decides that they can say and believe whatever they like, without regard for truth, whenever the claim will not affect their experiences. So for its own goal of predictive accuracy, the CEO needs the Employee to value truth in itself, just as an ordinary employer needs their employee to value their salary.

In real life this situation can cause problems. The employer needs their employee to care about being paid, but if they care too much, they may constantly be asking for raises, or they may quit and go work for someone who will pay more. The employer does not necessarily like these situations. In a similar way, the CEO in our company may worry if the Employee insists too much on absolute truth, because as discussed elsewhere, it can lead to other situations with unpredictable behavior from the Employee, or to situations where there is a great deal of uncertainty about how society will respond to the Employee’s behavior.

Overall, this post perhaps does not say much in substance that we have not said elsewhere, but it will perhaps provide an additional perspective on these matters.

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the followup posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what is responsible for the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. Once they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real-life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today?”, but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point:

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.
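The point about consistency can be made concrete. A preference cycle (A over B, B over C, C over A) cannot be represented by any utility function, since that would require u(A) > u(B) > u(C) > u(A). A brute-force check, a minimal sketch in which the items and preference pairs are arbitrary labels of my own, confirms that no ranking works for the cycle while an ordinary transitive set of preferences is representable:

```python
from itertools import permutations

def representable(prefs):
    """True if some ranking of the items respects every stated preference,
    i.e. if a utility function could represent them. Each pair (x, y)
    in prefs means "x is preferred to y"."""
    items = sorted({x for pair in prefs for x in pair})
    for ranking in permutations(items):
        # Earlier in the ranking means higher utility.
        utility = {item: -i for i, item in enumerate(ranking)}
        if all(utility[x] > utility[y] for x, y in prefs):
            return True
    return False

# The Vice President's cycle: A over B, B over C, C over A.
print(representable([("A", "B"), ("B", "C"), ("C", "A")]))
# A transitive set of preferences, by contrast, is representable.
print(representable([("A", "B"), ("B", "C")]))
```

The first check fails and the second succeeds: any agent whose preferences cycle, like the Vice President, simply has no consistent utility function behind them, which is the sense in which “objectively good” implies at least consistency.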

Consider this real life example from a relationship between an employer and employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real-life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: it would have objected either to being fat or to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

Explaining Causality

A reader asks about a previous post:

a) Per Hume and his defenders, we can’t really observe causation. All we can see is event A in spacetime, then event B in spacetime. We have no reason to posit that event A and event B are, say, chairs or dogs; we can stick with a sea of observed events, and claim that the world is “nothing more” but a huge set of random 4D events. While I can see that giving such an account restores formal causation, it doesn’t salvage efficient causation, and doesn’t even help final causation. How could you move there from our “normal” view?

b) You mention that the opinion “laws are observed patterns” is not a dominant view; though, even though I’d like to sit with the majority, I can’t go further than a). I can’t build an argument for this, and fail to see how Aristotle put his four causes correctly. I always end up gnawing on an objection, like “causation is only in the mind” or similar. Help?

It is not my view that the world is a huge set of random 4D events. This is perhaps the view of Atheism and the City, but it is a mistaken one. The blogger is not mistaken in thinking that there are problems with presentism, but they cannot be solved by adopting an eternalist view. Rather, these two positions constitute a Kantian dichotomy, and as usual, both positions are false. For now, however, I will leave this to the consideration of the reader. It is not necessary to establish this to respond to the questions above.

Consider the idea that “we can’t really observe causation.” As I noted here, it does not make sense to say that we cannot observe causation unless we already understand what causation is. If the word were meaningless to us, we would have no argument that we don’t observe it; it is only because we do understand the idea of causation that we can even suggest that it might be difficult to observe. And if we do have the idea, we got the idea from somewhere, and that could only have been… from observation, of course, since we don’t have anything else to get ideas from.

Let us untie the knot. I explained causality in general in this way:

“Cause” and “effect” simply signify that the cause is the origin of the effect, and that the effect is from the cause, together with the idea that when we understand the cause, we understand the explanation for the effect. Thus “cause” adds to “origin” a certain relationship with the understanding; this is why Aristotle says that we do not think we understand a thing until we know its cause, or “why” it is. We do not understand a thing until we know its explanation.

Note that there is something “in the mind” about causality. Saying to oneself, “Aha! So that’s why that happened!” is a mental event. And we can also see how it is possible to observe causality: we can observe that one thing is from another, i.e. that a ball breaks a window, and we can also observe that knowing this provides us a somewhat satisfactory answer to the question, “Why is the window broken?”, namely, “Because it was hit by a ball.”

Someone (e.g. Atheism and the City) might object that we also cannot observe one thing coming from another. We just observe the two things, and they are, as Hume says, “loose and separate.” Once again, however, we would have no idea of “from” unless we got it from observing things. In the same early post quoted above, I explained the idea of origin, i.e. that one thing is from another:

Something first is said to be the beginning, principle, or origin of the second, and the second is said to be from the first. This simply signifies the relationship already described in the last post, together with an emphasis on the fact that the first comes before the second by “consequence of being”, in the way described.

“The relationship already described in the last post” is that of before and after. In other words, wherever we have any kind of order at all, we have one thing from another. And we observe order, even when we simply see one thing after another, and thus we also observe things coming from other things.

What about efficient causality? If we adopt the explanation above, asserting the existence of efficient causality is nothing more or less than asserting that things sometimes make other things happen, like balls breaking windows, and that knowing about this is a way for us to understand the effects (e.g. broken windows).

Similarly, denying the existence of efficient causality means either denying that anything ever makes anything else happen, or denying that knowing about this makes us understand anything, even in a minor way. Atheism and the City seems to want to deny that anything ever makes anything else happen:

Most importantly, my view technically is not that causality doesn’t exist, it’s that causality doesn’t exist in the way we typically think it does. That is, my view of causality is completely different from the general every day notion of causality most people have. The naive assumption one often gets when hearing my view is that I’m saying cause and effect relationships don’t exist at all, such that if you threw a brick at glass window it wouldn’t shatter, or if you jumped in front of a speeding train you wouldn’t get smashed to death by it. That’s not what my view says at all.

On my view of causality, if you threw a brick at a glass window it would shatter, if you jumped in front of a speeding train you’d be smashed to death by it. The difference between my view of causality vs the typical view is that on my view causes do not bring their effects into existence in the sense of true ontological becoming.

I am going to leave aside the discussion of “true ontological becoming,” because it is a distraction from the real issue. Does Atheism and the City deny that things ever make other things happen? It appears so, but consider that “things sometimes make other things happen” is just a more general description of the very same situations as descriptions like, “Balls sometimes break windows.” So if you want to deny that things make other things happen, you should also deny that balls break windows. Now our blogger perhaps wants to say, “I don’t deny that balls break windows in the everyday sense, but they don’t break them in a true ontological sense.” Again, I will simply point in the right direction here. Asserting the existence of efficient causes does not describe a supposedly “truly true” ontology; it is simply a more general description of a situation where balls sometimes break windows.

We can make a useful comparison here between understanding causality, and understanding desire and the good. The knowledge of desire begins with a fairly direct experience, that of feeling the desire, often even as physical sensation. In the same way, we have a direct experience of “understanding something,” namely the feeling of going, “Ah, got it! That’s why this is, this is how it is.” And just as we explain the fact of our desire by saying that the good is responsible for it, we explain the fact of our understanding by saying that the apprehension of causes is responsible. And just as being and good are convertible, so that goodness is not some extra “ontological” thing, so also cause and origin are convertible. But something has to have a certain relationship with us to be good for us; eating food is good for us while eating rocks is not. In a similar way, origins need to have a specific relationship with us in order to provide an understanding of causality, as I said in the post where these questions came up.

Does this mean that “causation is only in the mind”? Not really, any more than the analogous account implies that goodness is only in the mind. An aspect of goodness is in the mind, namely insofar as we distinguish it from being in general, but the thing itself is real, namely the very being of things. And likewise an aspect of causality is in the mind, namely the fact that it explains something to us, but the thing itself is real, namely the relationships of origin in things.