Perfectly Random

Suppose you have a string of random binary digits such as the following:

00111100010101001100011011001100110110010010100111

This string is 50 digits long, and was the result of a single attempt using the linked generator.

However, something seems distinctly non-random about it: there are exactly 25 zeros and exactly 25 ones. Naturally, this will not always happen, but most of the time the proportion of zeros will be fairly close to half. And evidently this is necessary, since if the proportion were usually much different from half, then the selection could not have been random in the first place.
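As a quick illustration, here is a minimal sketch in Python (my own, not part of the original post; the figures are only approximate) that simulates many strings of this kind and checks how often the proportion of ones comes out exactly or roughly half:

```python
import random

# Sketch: generate many 50-digit random binary strings and look at the
# proportion of ones in each.  This is just an illustration of the
# "fairly close to half" observation.
random.seed(0)
trials = 10_000
length = 50

proportions = []
for _ in range(trials):
    bits = [random.randint(0, 1) for _ in range(length)]
    proportions.append(sum(bits) / length)

exactly_half = sum(1 for p in proportions if p == 0.5)
near_half = sum(1 for p in proportions if 0.4 <= p <= 0.6)

print(f"exactly 25 ones:        {exactly_half / trials:.1%}")  # around 11%
print(f"between 20 and 30 ones: {near_half / trials:.1%}")     # around 88%
```

Exactly 25 ones turns out to be the single most likely count, but it still happens only about one time in nine; staying between 20 and 30 ones, on the other hand, happens almost nine times out of ten.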

There are other things about this string that are definitely not random. It contains only zeros and ones, and no other digits, much less items like letters from the alphabet, or items like ‘%’ and ‘$’.

Why do we have these apparently non-random characteristics? Both sorts of characteristics, the approximate and typical proportion on the one hand, and the more rigid constraints on the other, are necessary consequences of the way we obtained or defined this string.

It is easy to see that such characteristics are inevitable. Suppose someone wants to choose something random without any non-random characteristics. Let’s suppose they want to avoid the first sort of characteristic, which is perhaps the “easier” task. They can certainly make the proportion of zeros approximately 75% or anything else that they please. But this will still be a non-random characteristic.

They try again. Suppose they succeed in preventing the series of digits from converging to any specific probability. There is essentially only one way to accomplish this: much as in our discussion of the mathematical laws of nature, they must go back and forth between longer and longer runs of zeros and ones. But this is an extremely non-random characteristic. So they may have succeeded in avoiding one particular type of non-randomness, but only at the cost of adding something else very non-random.
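Here is a minimal sketch of that construction (my own illustration): write down ever-longer runs of zeros and ones and watch the running proportion of ones, which keeps swinging instead of settling on a limit.

```python
# Sketch of the construction described above: alternate ever-longer runs of
# zeros and ones, with each run three times the length of the previous one,
# and report the running proportion of ones at the end of each run.

def proportions_at_run_ends(n_runs: int):
    ones = total = 0
    run_length, digit = 1, 0
    results = []
    for _ in range(n_runs):
        ones += digit * run_length
        total += run_length
        results.append(ones / total)
        digit = 1 - digit        # switch between runs of 0s and runs of 1s
        run_length *= 3          # each run dwarfs everything before it
    return results

print([round(p, 3) for p in proportions_at_run_ends(10)])
# [0.0, 0.75, 0.231, 0.75, 0.248, 0.75, 0.25, 0.75, 0.25, 0.75]
# The running proportion keeps swinging between roughly 1/4 and 3/4, so it
# never converges -- but the sequence is obviously anything but random.
```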

Again, consider the second kind of characteristic. Here things are even clearer: the only way to avoid the second kind of characteristic is not to attempt any task in the first place. The only way to win is not to play. Once we have said “your task is to do such and such,” we have already specified some non-random characteristics of the second kind; to avoid such characteristics is to avoid the task completely.

“Completely random,” in fact, is an incoherent idea. No such thing can exist anywhere, in the same way that “formless matter” cannot actually exist, but all matter is formed in one way or another.

The same thing applies to David Hume’s supposed problem of induction. I ended that post with the remark that for his argument to work, he must be “absolutely certain that the future will resemble the past in no way.” But this of course is impossible in the first place; the past and the future are both defined as periods of time, and so there is some resemblance in their very definition, in the same way that any material thing must have some form in its definition, and any “random” thing must have something non-random in its definition.

 

Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting:

I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me – just as much as if you were to discriminate between blacks and whites.

Robin Hanson disagreed, responding with this post:

But doesn’t discounting at market rates of return suggest we should do almost nothing to help far future folk, and isn’t that crazy?  No, it suggests:

  1. Usually the best way to help far future folk is to invest now to give them resources they can spend as they wish.
  2. Almost no one now in fact cares much about far future folk, or they would have bid up the price (i.e., market return) to much higher levels.

Very distant future times are ridiculously easy to help via investment.  A 2% annual return adds up to a googol (10^100) return over 12,000 years, even if there is only a 1/1000 chance they will exist or receive it.

So if you are not incredibly eager to invest this way to help them, how can you claim to care the tiniest bit about them?  How can you think anyone on Earth so cares?  And if no one cares the tiniest bit, how can you say it is “moral” to care about them, not just somewhat, but almost equally to people now?  Surely if you are representing a group, instead of spending your own wealth, you shouldn’t assume they care much.
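A rough check of the arithmetic in the quoted example (my own sketch; the figures are approximate):

```python
import math

# Rough check of the figures in the quoted passage: a 2% annual return
# compounded for 12,000 years, then multiplied by a 1/1000 chance that the
# far-future recipients exist and receive it.
log10_growth = 12_000 * math.log10(1.02)                # about 103.2
log10_with_odds = log10_growth + math.log10(1 / 1000)   # about 100.2

print(round(log10_growth, 1), round(log10_with_odds, 1))
# Even after the 1/1000 discount, the payoff is still around 10^100, a googol.
```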

Yudkowsky’s argument is idealistic, while Hanson is attempting to be realistic. I will look at this from a different point of view. Hanson is right, and Yudkowsky is wrong, for a reason still more idealistic than Yudkowsky’s. In particular, a temporal discount rate is logically and mathematically necessary in order to have consistent preferences.

Suppose you have the chance to save 10 lives a year from now, or 2 years from now, or 3 years from now, and so on, such that your mutually exclusive options include saving 10 lives x years from now for every positive integer x.

At first, it would seem to be consistent for you to say that all of these possibilities have equal value by some measure of utility.

The problem arises not from this initial assignment, but from what happens when you act in this situation. Your revealed preferences in that situation will indicate that you prefer things nearer in time to things more distant, for the following reason.

It is impossible to choose a random integer without a bias towards low numbers, for the same reasons we argued here that it is impossible to assign probabilities to hypotheses without, in general, assigning simpler hypotheses higher probabilities. In a similar way, if “you will choose 2 years from now”, “you will choose 10 years from now,” “you will choose 100 years from now,” and so on are all assigned probabilities, they cannot all be assigned equal probabilities, since the probabilities must sum to 1; in general and overall, you must be more likely to choose the options less distant in time. There will be some number n such that there is a 99.99% chance that you will choose some number of years less than n, and a probability of 0.01% that you will choose n or more years, indicating that you have a very strong preference for saving lives sooner rather than later.
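To see the kind of constraint involved, here is a toy sketch (my own, with an arbitrarily chosen geometric distribution standing in for any particular assignment): whatever distribution you use over “x years from now” for all positive x, the probabilities must sum to 1, so 99.99% of the mass must fall below some finite cutoff.

```python
# Toy illustration: assign probabilities to "you will choose x years from
# now" for x = 1, 2, 3, ...  A geometric distribution with a 5% chance per
# year is used purely as an example; the particular shape is arbitrary, but
# some bias toward earlier years is unavoidable, since the probabilities
# must sum to 1 and so must shrink toward zero for distant years.
p = 0.05

cumulative = 0.0
n = 0
while cumulative < 0.9999:
    n += 1
    cumulative += p * (1 - p) ** (n - 1)   # P(choose exactly n years from now)

print(n)  # 180: a 99.99% chance of choosing within about 180 years
```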

Someone might respond that this does not necessarily affect the specific value assignments, in the same way that in some particular case we can consistently think that some particular complex hypothesis is more probable than some particular simple hypothesis. The problem with this is that the hypotheses do not change their complexity, but time passes, turning things distant in time into things near in time. Thus, for example, if Yudkowsky responds, “Fine. We assign equal value to saving lives for each year from 1 to 10^100, and smaller values to the times after that,” this will necessarily lead to dynamic inconsistency. The only way to avoid this inconsistency is to apply a discount rate to all periods of time, including ones in the near, medium, and long term future.
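Here is a small sketch of the dynamic inconsistency in question (the numbers and the “flat until a horizon, then discounted” scheme are my own illustration, not anyone’s actual proposal): a valuation that is flat out to some horizon and lower beyond it can reverse its ranking of two fixed calendar dates simply because time has passed, while a constant exponential discount never does.

```python
# Toy illustration of the dynamic inconsistency (numbers are mine and purely
# illustrative).  Scheme A gives full weight to anything within a 100-year
# horizon and half weight beyond it; scheme B applies a constant 2% per-year
# exponential discount.  The two options have fixed calendar dates: save 10
# lives in year 100, or save 15 lives in year 101.

HORIZON = 100
DELTA = 0.98

def value_flat(lives, calendar_year, current_year):
    years_away = calendar_year - current_year
    return lives * (1.0 if years_away <= HORIZON else 0.5)

def value_exponential(lives, calendar_year, current_year):
    years_away = calendar_year - current_year
    return lives * DELTA ** years_away

options = ("10 lives in year 100", "15 lives in year 101")
for now in (0, 5):
    flat = (value_flat(10, 100, now), value_flat(15, 101, now))
    expo = (value_exponential(10, 100, now), value_exponential(15, 101, now))
    print(now, "flat scheme prefers:", options[flat.index(max(flat))],
          "| exponential prefers:", options[expo.index(max(expo))])

# At year 0 the flat scheme prefers the year-100 option (10 > 7.5), but by
# year 5 both dates fall inside the horizon and it switches to the year-101
# option (15 > 10), with no new information.  The exponential scheme ranks
# the options the same way at both times, because the ratio of its weights
# for the two dates never changes.
```

The point is not that a 100-year horizon is anyone’s actual proposal, but that any schedule which treats a long stretch of future years as exactly equal and later years as worth less produces this kind of reversal, while a discount applied uniformly to every period does not.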

 

“Moral” Responsibility

In a passage quoted here, Jerry Coyne objected to the “moral” in “moral responsibility”:

To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

Suppose someone completely insane happens to kill another person, under the mistaken belief that they are doing something completely different. In such a case, “an identifiable person did this or that good or bad action,” and yet we do not say they are responsible, much less blame such a person; rather we may subject them to physical restraints, but we no more blame them than we blame the weather for the deaths that it occasionally inflicts on people. In other words, Coyne’s definition does not even work for “responsibility,” let alone moral responsibility.

Moral action has a specific meaning: an action considered not merely in itself, but in comparison with the good proposed by human reason. Consequently we have moral action only when we have something voluntarily done by a human being for a reason, or (if without a reason) with the voluntary omission of the consideration of reasons. In exactly the same situations we have moral responsibility: namely, someone voluntarily did something good, or someone voluntarily did something bad.

Praise and blame are added precisely because people are acting for reasons, and given that people tend to like praise and dislike blame, these elements, if rightly applied, will make good things better, and thus more likely to be pursued, and bad things worse, and thus more likely to be avoided. As an aside, this also suggests occasions when it is a bad idea to blame someone for something bad: namely, when blame is unlikely to reduce the bad activity, or likely to reduce it only very little, since in that case you are simply making things worse, period.

Stop, Coyne and others will say. Even if we agree with the point about praise and blame, we do not agree about moral responsibility, unless determinism is false. And nothing in the above paragraphs even refers to determinism or its opposite, and thus the above cannot be a full account of moral responsibility.

The above is, in fact, a basically complete account of moral responsibility. Although determinism is false, as was said in the linked post, its falsity has nothing to do with the matter one way or another.

The confusion about this results from a confusion between an action as a being in itself, and an action as moral, namely as considered by reason. This distinction was discussed here while considering what it means to say that some kinds of actions are always wrong. It is quite true that considered as a moral action, it would be wrong to blame someone if they did not have any other option. But that situation would be a situation where no reasonable person would act otherwise. And you do not blame someone for doing something that all reasonable people would do. You blame them in a situation where reasonable people would do otherwise: there are reasons for doing something different, but they did not act on those reasons.

But it is not the case that blame or moral responsibility depends on whether or not there is a physically possible alternative, because to consider physical alternatives is simply to speak of the action as a being in itself, and not as a moral act at all.

 

Quantum Mechanics and Libertarian Free Will

In a passage quoted in the last post, Jerry Coyne claims that quantum indeterminacy is irrelevant to free will: “Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own ‘will.'”

Coyne seems to be thinking that since quantum indeterminism has fixed probabilities in any specific situation, the result for human behavior would necessarily be like our second imaginary situation in the last post. There might be a 20% chance that you would randomly do X, and an 80% chance that you would randomly do Y, and nothing can affect these probabilities. Consequently you cannot be morally responsible for doing X or for doing Y, nor should you be praised or blamed for them.

Wait, you might say. Coyne explicitly favors praise and blame in general. But why? If you would not praise or blame someone doing something randomly, why should you praise or blame someone doing something in a deterministic manner? As explained in the last post, the question is whether reasons have any influence on your behavior. Coyne is assuming that if your behavior is deterministic, it can still be influenced by reasons, but if it is indeterministic, it cannot be. But there is no reason for this to be the case. Your behavior can be influenced by reasons whether it is deterministic or not.

St. Thomas argues for libertarian free will on the grounds that there can be reasons for opposite actions:

Man does not choose of necessity. And this is because that which is possible not to be, is not of necessity. Now the reason why it is possible not to choose, or to choose, may be gathered from a twofold power in man. For man can will and not will, act and not act; again, he can will this or that, and do this or that. The reason of this is seated in the very power of the reason. For the will can tend to whatever the reason can apprehend as good. Now the reason can apprehend as good, not only this, viz. “to will” or “to act,” but also this, viz. “not to will” or “not to act.” Again, in all particular goods, the reason can consider an aspect of some good, and the lack of some good, which has the aspect of evil: and in this respect, it can apprehend any single one of such goods as to be chosen or to be avoided. The perfect good alone, which is Happiness, cannot be apprehended by the reason as an evil, or as lacking in any way. Consequently man wills Happiness of necessity, nor can he will not to be happy, or to be unhappy. Now since choice is not of the end, but of the means, as stated above (Article 3); it is not of the perfect good, which is Happiness, but of other particular goods. Therefore man chooses not of necessity, but freely.

Someone might object that if both are possible, there cannot be a reason why someone chooses one rather than the other. This is basically the claim in the third objection:

Further, if two things are absolutely equal, man is not moved to one more than to the other; thus if a hungry man, as Plato says (Cf. De Coelo ii, 13), be confronted on either side with two portions of food equally appetizing and at an equal distance, he is not moved towards one more than to the other; and he finds the reason of this in the immobility of the earth in the middle of the world. Now, if that which is equally (eligible) with something else cannot be chosen, much less can that be chosen which appears as less (eligible). Therefore if two or more things are available, of which one appears to be more (eligible), it is impossible to choose any of the others. Therefore that which appears to hold the first place is chosen of necessity. But every act of choosing is in regard to something that seems in some way better. Therefore every choice is made necessarily.

St. Thomas responds to this that it is a question of what the person considers:

If two things be proposed as equal under one aspect, nothing hinders us from considering in one of them some particular point of superiority, so that the will has a bent towards that one rather than towards the other.

Thus for example, someone might decide to become a doctor because it pays well, or they might decide to become a truck driver because they enjoy driving. Whether they consider “what would I enjoy?” or “what would pay well?” will determine which choice they make.

The reader might notice a flaw, or at least a loose thread, in St. Thomas’s argument. In our example, what determines whether you think about what pays well or what you would enjoy? This could be yet another choice. I could create a spreadsheet of possible jobs and think, “What should I put on it? Should I put the pay, or should I put what I enjoy?” But obviously the question about necessity will simply be pushed back in this case. Is this choice itself determinate or indeterminate? And what determines what choice I make in this case? Here we are discussing an actual temporal series of thoughts, and it absolutely must have a first member, since human life has a beginning in time. Consequently there will have to be a point where, if there is the possibility of “doing A for reason B” and “doing C for reason D”, it cannot be any additional consideration which determines which one is done.

Now it is possible at this point that St. Thomas is mistaken. It might be that the hypothesis that both were “really” possible is mistaken, and something does determine one rather than the other with “necessity.” It is also possible that he is not mistaken. Either way, human reasons do not influence the determination, because reason B and/or reason D are the first reasons considered, by hypothesis (if they were not, we would simply push the question back).

At this point someone might consider this lack of the influence of reasons to imply that people are not morally responsible for doing A or for doing C. The problem with this is that if you do something without a reason (and without potentially being influenced by a reason), then indeed you would not be morally responsible. But the person doing A or C is not uninfluenced by reasons. They are influenced by reason B, or by reason D. Consequently, they are responsible for their specific action, because they do it for a reason, despite the fact that there is some other general issue that they are not responsible for.

What influence could quantum indeterminacy have here? It might be responsible for deciding between “doing A for reason B” and “doing C for reason D.” And as Coyne says, this would be “simple randomness,” with fixed probabilities in any particular situation. But none of this would prevent this from being a situation that would include libertarian free will, since libertarian free will is precisely nothing but the situation where there are two real possibilities: you might do one thing for one reason, or another thing for another reason. And that is what we would have here.

Does quantum mechanics have this influence in fact, or is this just a theoretical possibility? It very likely does. Some argue that it probably doesn’t, on the grounds that quantum mechanics does not typically seem to imply much indeterminacy for macroscopic objects. The problem with this argument is that the only way of knowing that quantum indeterminacy rarely leads to large scale differences is by using humanly designed items like clocks or computers. And these are specifically designed to be determinate: whenever our artifact is not sufficiently determinate and predictable, we change the design until we get something predictable. If we look at something in nature uninfluenced by human design, like a waterfall, its details are highly unpredictable to us. Which drop of water will be the most distant from this particular point one hour from now? There is no way to know.

But how much real indeterminacy is in the waterfall, or in the human brain, due to quantum indeterminacy? Most likely nobody knows, but it is basically a question of timescales. Do you get a great deal of indeterminacy after one hour, or only after several days? One way or another, with the passage of enough time, you will get a degree of real indeterminacy as high as you like. The same thing will be equally true of human behavior. We often notice, in fact, that at short timescales there is less indeterminacy than we subjectively feel. For example, if someone hesitates to accept an invitation, in many situations others will know that the person is very likely to decline. But the person feels very uncertain, as though there were a 50/50 chance of accepting or declining, when the real probabilities might be 90/10 or even more slanted. Nonetheless, the question is one of timescales, not of whether there is any indeterminacy at all. There is; this is basically settled; it will apply to human behavior; and there is little reason to doubt that it applies at timescales much shorter than those at which it applies to clocks, computers, and other things designed with predictability in mind.

In this sense, quantum indeterminacy strongly suggests that St. Thomas is basically correct about libertarian free will.

On the other hand, Coyne is also right about something here. While it is not true that such “randomness” removes moral responsibility, or removes the fact that people do things for reasons, or the fact that praise and blame are fitting responses to actions done for reasons, Coyne correctly notices that it does not add to someone’s responsibility either. If there is no human reason for the fact that a person did A for reason B rather than C for reason D, this makes their actions less intelligible, and thus less subject to responsibility. In other words, the “libertarian” part of libertarian free will does not make the will more truly a will, but less truly. In this respect, Coyne is right. This however is unrelated to quantum mechanics or to any particular scientific account. The thoughtful person can understand this simply from general considerations about what it means to act for a reason.

Causality and Moral Responsibility

Consider two imaginary situations:

(1) In the first situation, people are such that when someone sees a red light, they immediately go off and kill someone. Nothing can be done to prevent this, and no intention or desire to do otherwise makes any difference.

In this situation, killing someone after you have seen a red light is not blamed, since it cannot be avoided, but we blame people who show red lights to others. Such people are arrested and convicted as murderers.

(2) In the second situation, people are such that when someone sees a red light, there is a 5% chance they will go off and immediately kill someone, and a 95% chance they will behave normally. Nothing can change this probability: it does not matter whether the person is wicked or virtuous or what their previous attitude to killing was.

In this situation, again, we do not blame people who end up killing someone, but we call them unlucky. We do however blame people who show others red lights, and they are arrested and convicted of second degree murder, or in some cases manslaughter.

Some people would conclude from this that moral responsibility is incoherent: whether the world is deterministic or not, moral responsibility is impossible. Jerry Coyne defends this position in numerous places, as for example here:

We’ve taken a break from the many discussions on this site about free will, but, cognizant of the risks, I want to bring it up again. I think nearly all of us agree that there’s no dualism involved in our decisions: they’re determined completely by the laws of physics. Even the pure indeterminism of quantum mechanics can’t give us free will, because that’s simple randomness, and not a result of our own “will.”

Coyne would perhaps say that “free will” embodies a contradiction much in the way that “square circle” does. “Will” implies a cause, and thus something deterministic. “Free” implies indeterminism, and thus no cause.

In many places Coyne asserts that this implies that moral responsibility does not exist, as for example here:

This four-minute video on free will and responsibility, narrated by polymath Raoul Martinez, was posted by the Royal Society for the Encouragement of the Arts, Manufactures, and Commerce (RSA). Martinez’s point is one I’ve made here many times, and will surely get pushback from: determinism rules human behavior, and our “choices” are all predetermined by our genes and environment. To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

I think that Coyne is very wrong about the meaning of free will, somewhat wrong about responsibility, and likely wrong about the consequences of his views for society (e.g. he believes that his view will lead to more humane treatment of prisoners. There is no particular reason to expect this.)

The imaginary situations described in the initial paragraphs of this post do not imply that moral responsibility is impossible, but they do tell us something. In particular, they tell us that responsibility is not directly determined by determinism or its lack. And although Coyne says that “moral responsibility” implies indeterminism, surely even Coyne would not advocate blaming or punishing the person who had the 5% chance of going and killing someone. And the reason is clear: it would not “reinforce good behavior” or be “salubrious for society.” By the terms set out, it would make no difference, so blaming or punishing would be pointless.

Coyne is right that determinism does not imply that punishment is pointless. And he also recognizes that indeterminism does not of itself imply that anyone is responsible for anything. But he fails here to put two and two together: just as determinism implies neither that punishment is pointless nor that it is not, so indeterminism likewise implies neither. The conclusion he should draw is not that moral responsibility is meaningless, but that it is independent of both determinism and indeterminism; that is, that both deterministic compatibilism and libertarian free will allow for moral responsibility.

So what is required for praise and blame to have a point? Elsewhere we discussed C.S. Lewis’s claim that something can have a reason or a cause, but not both. In a sense, the initial dilemma in this post can be understood as a similar argument. Either our behavior has deterministic causes, or it has indeterministic causes; therefore it does not have reasons; therefore moral responsibility does not exist.

On the other hand, if people do have reasons for their behavior, there can be good reasons for blaming people who do bad things, and for punishing them. Namely, since those people are themselves acting for reasons, they will be less likely in the future to do those things, and likewise other people, fearing punishment and blame, will be less likely to do them.

As I said against Lewis, reasons do not exclude causes, but require them. Consequently what is necessary for moral responsibility are causes that are consistent with having reasons; one can easily imagine causes that are not consistent with having reasons, as in the imaginary situations described, and such causes would indeed exclude responsibility.

Employer and Employee Model: Happiness

We discussed Aristotle’s definition of happiness as activity according to virtue here, followed by a response to an objection.

There is another objection, however, which Aristotle raises himself in Book I, chapter 8 of the Nicomachean Ethics:

Yet evidently, as we said, it needs the external goods as well; for it is impossible, or not easy, to do noble acts without the proper equipment. In many actions we use friends and riches and political power as instruments; and there are some things the lack of which takes the lustre from happiness, as good birth, goodly children, beauty; for the man who is very ugly in appearance or ill-born or solitary and childless is not very likely to be happy, and perhaps a man would be still less likely if he had thoroughly bad children or friends or had lost good children or friends by death. As we said, then, happiness seems to need this sort of prosperity in addition; for which reason some identify happiness with good fortune, though others identify it with virtue.

Aristotle is responding to the implicit objection by saying that it is “impossible, or not easy” to act according to virtue when one is doing badly in other ways. Yet probably most of us know some people who are virtuous while suffering various misfortunes, and it seems pretty unreasonable, as well as uncharitable, to assert that the reason that they are somewhat unhappy with their circumstances is that the lack of “proper equipment” leads to a lack of virtuous activity. Or at any rate, even if this contributes to the matter, it does not seem to be a full explanation. The book of Job, for example, is based almost entirely on the possibility of being both virtuous and miserable, and Job would very likely respond to Aristotle, “How then will you comfort me with empty nothings? There is nothing left of your answers but falsehood.”

Aristotle brings up a similar issue at the beginning of Book VIII:

After what we have said, a discussion of friendship would naturally follow, since it is a virtue or implies virtue, and is besides most necessary with a view to living. For without friends no one would choose to live, though he had all other goods; even rich men and those in possession of office and of dominating power are thought to need friends most of all; for what is the use of such prosperity without the opportunity of beneficence, which is exercised chiefly and in its most laudable form towards friends? Or how can prosperity be guarded and preserved without friends? The greater it is, the more exposed is it to risk. And in poverty and in other misfortunes men think friends are the only refuge. It helps the young, too, to keep from error; it aids older people by ministering to their needs and supplementing the activities that are failing from weakness; those in the prime of life it stimulates to noble actions-'two going together'-for with friends men are more able both to think and to act. Again, parent seems by nature to feel it for offspring and offspring for parent, not only among men but among birds and among most animals; it is felt mutually by members of the same race, and especially by men, whence we praise lovers of their fellowmen. We may see even in our travels how near and dear every man is to every other. Friendship seems too to hold states together, and lawgivers to care more for it than for justice; for unanimity seems to be something like friendship, and this they aim at most of all, and expel faction as their worst enemy; and when men are friends they have no need of justice, while when they are just they need friendship as well, and the truest form of justice is thought to be a friendly quality.

But it is not only necessary but also noble; for we praise those who love their friends, and it is thought to be a fine thing to have many friends; and again we think it is the same people that are good men and are friends.

There is a similar issue here: lack of friends may make someone unhappy, but lack of friends is not lack of virtue. Again Aristotle is in part responding by pointing out that the activity of some virtues depends on the presence of friends, just as he said that temporal goods were necessary as instruments. Once again, however, even if there is some truth in it, the answer does not seem adequate, especially since Aristotle believes that the highest form of happiness is found in contemplation, which seems to depend much less on friends than other types of activity.

Consider again Aristotle’s argument for happiness as virtue, presented in the earlier post. It depends on the idea of a “function”:

Presumably, however, to say that happiness is the chief good seems a platitude, and a clearer account of what it is is still desired. This might perhaps be given, if we could first ascertain the function of man. For just as for a flute-player, a sculptor, or an artist, and, in general, for all things that have a function or activity, the good and the 'well' is thought to reside in the function, so would it seem to be for man, if he has a function. Have the carpenter, then, and the tanner certain functions or activities, and has man none? Is he born without a function? Or as eye, hand, foot, and in general each of the parts evidently has a function, may one lay it down that man similarly has a function apart from all these? What then can this be? Life seems to be common even to plants, but we are seeking what is peculiar to man. Let us exclude, therefore, the life of nutrition and growth. Next there would be a life of perception, but it also seems to be common even to the horse, the ox, and every animal. There remains, then, an active life of the element that has a rational principle; of this, one part has such a principle in the sense of being obedient to one, the other in the sense of possessing one and exercising thought. And, as 'life of the rational element' also has two meanings, we must state that life in the sense of activity is what we mean; for this seems to be the more proper sense of the term. Now if the function of man is an activity of soul which follows or implies a rational principle, and if we say 'so-and-so' and 'a good so-and-so' have a function which is the same in kind, e.g. a lyre-player, and a good lyre-player, and so without qualification in all cases, eminence in respect of goodness being added to the name of the function (for the function of a lyre-player is to play the lyre, and that of a good lyre-player is to do so well): if this is the case, and we state the function of man to be a certain kind of life, and this to be an activity or actions of the soul implying a rational principle, and the function of a good man to be the good and noble performance of these, and if any action is well performed when it is performed in accordance with the appropriate excellence: if this is the case, human good turns out to be activity of soul in accordance with virtue, and if there are more than one virtue, in accordance with the best and most complete.

Aristotle took what was most specifically human and identified happiness with performing well in that most specifically human way. This is reasonable, but it leads to the above issues, because a human being is not only what is most specifically human, but also possesses the aspects that Aristotle dismissed here as common to other things. Consequently, activity according to virtue would be the most important aspect of functioning well as a human being, and in this sense Aristotle’s account is reasonable, but there are other aspects as well.

Using our model, we can present a more unified account of happiness which includes these other aspects without the seemingly arbitrary way in which Aristotle noted the need for temporal goods and friendship for happiness. The specifically rational character belongs mainly to the Employee, and thus when Aristotle identifies happiness with virtuous action, he is mainly identifying happiness with the activity of the Employee. And this is surely its most important aspect. But since the actual human being is the whole company, it is more complete to identify happiness with the good functioning of the whole company. And the whole company is functioning well overall when the CEO’s goal of accurate prediction is regularly being achieved.

Consider two ways in which someone might respond to the question, “How are you doing?” If someone isn’t doing very well, they might say, “Well, I’ve been having a pretty rough time,” while if they are better off, they might say, “Things are going pretty smoothly.” Of course people might use other words, but notice the contrast in my examples: a life that is going well is often said to be going “smoothly”, while the opposite is described as “rough.” And the difference here between smooth and rough is precisely the difference between predictive accuracy and inaccuracy. We might see this more easily by considering some restricted examples:

First, suppose two people are jogging. One is keeping an even pace, keeping their balance, rounding corners smoothly, and keeping to the middle of the path. The other is becoming tired, slowing down a bit and speeding up a bit. They are constantly off balance and suffering disturbing jolts when they hit unexpected bumps in the path, perhaps narrowly avoiding tripping. If we compare what is happening here with the general idea of predictive processing, it seems that the difference between the two is that the first person is predicting accurately, while the second is predicting inaccurately. The second person is not rationing their energy and breath correctly; they suffer jolts or near trips when they did not correctly expect the lay of the land, and so on.

Suppose someone is playing a video game. The one who plays it well is the one who is very prepared for every eventuality. They correctly predict what is going to happen in the game both with regard to what happens “by itself,” and what will happen as a result of their in-game actions. They play the game “smoothly.”

Suppose I am writing this blog post and feel myself in a state of “flow,” and I consequently am enjoying the activity. This can only happen as long as the process is fairly “smooth.” If I stop for long periods in complete uncertainty of what to write next, the state will go away. In other words, the condition depends on having at each moment a fairly good idea of what is coming next; it depends on accurate prediction.
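As a toy numeric version of the jogging contrast (my own illustration, with arbitrary numbers): a naive “the next moment will look like this one” predictor accumulates little surprise on a steady pace and a great deal on an erratic one.

```python
import random

# Toy version of the jogging contrast.  A very simple predictor guesses that
# the next moment will look like the current one; a steady pace produces
# small surprises, an erratic pace produces large ones.
random.seed(1)

steady_pace = [5.0 + 0.05 * random.gauss(0, 1) for _ in range(200)]
erratic_pace = [5.0 + 1.5 * random.gauss(0, 1) for _ in range(200)]

def mean_squared_surprise(series):
    """Average squared error of the naive 'next = current' prediction."""
    errors = [(series[i + 1] - series[i]) ** 2 for i in range(len(series) - 1)]
    return sum(errors) / len(errors)

print(round(mean_squared_surprise(steady_pace), 3))   # small: "smooth"
print(round(mean_squared_surprise(erratic_pace), 3))  # large: "rough"
```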

The reader might understand the point in relation to these limited examples, but how does this apply to life in general, and especially to virtue and vice, which are according to Aristotle the main elements of happiness and unhappiness?

In a basic way virtuous activity is reasonable activity, and vicious activity is unreasonable activity. The problem with vice, in this account, is that it immediately sets up a serious interior conflict. The Employee is a rational being and is constantly being affected by reasons to do things. Vice, in one way or another, persuades them to do unreasonable things, and the reasons for not doing those things will be constantly pulling in the opposite direction. When St. Paul complains that he wills something different from what he does, he is speaking of this kind of conflict. But conflicting tendencies lead to uncertain results, and so our CEO is unhappy with this situation.

Now you might object: if a vicious man is unhappy because of conflicting tendencies, what about those who are so wicked that they have no conflict, but simply and contentedly do what is evil?

The response to this would be somewhat along the lines of the answer we gave to the objection that moral obligation should not depend on desiring some particular end. First, it is probably impossible for a human being to become so corrupted that they cannot see, at least to some degree, that bad things are bad. Second, consider the wicked men according to Job’s description:

Why do the wicked live on,
reach old age, and grow mighty in power?
Their children are established in their presence,
and their offspring before their eyes.
Their houses are safe from fear,
and no rod of God is upon them.
Their bull breeds without fail;
their cow calves and never miscarries.
They send out their little ones like a flock,
and their children dance around.
They sing to the tambourine and the lyre,
and rejoice to the sound of the pipe.
They spend their days in prosperity,
and in peace they go down to Sheol.

Just as we said that if you assume someone is entirely corrupt, the idea of “obligation” may well become irrelevant to them, without that implying anything wrong with the general idea of moral obligation, so in a similar way it would be metaphorical to speak of such a person as “unhappy.” You could say this with the intention of saying that they exist in an objectively bad situation, but not in the ordinary sense of the term, in which it includes subjective discontent.

We could explain a great deal more with this account of happiness: not only the virtuous life in general, but also a great deal of the spiritual, psychological, and other practical advice which is typically given. But this is all perhaps for another time.

Employer and Employee Model: Truth

In the remote past, I suggested that I would someday follow up on this post. In the current post, I begin to keep that promise.

We can ask about the relationship of the various members of our company with the search for truth.

The CEO, as the predictive engine, has a fairly strong interest in truth, but only insofar as truth is frequently necessary in order to get predictive accuracy. Consequently our CEO will usually insist on the truth when it affects our expectations regarding daily life, but it will care less when we consider things remote from the senses. Additionally, the CEO is highly interested in predicting the behavior of the Employee, and it is not uncommon for falsehood to be better than truth for this purpose.

To put this in another way, the CEO’s interest in truth is instrumental: it is sometimes useful for the CEO’s true goal, predictive accuracy, but not always, and in some cases it can even be detrimental.

As I said here, the Employee is, roughly speaking, the human person as we usually think of one, and consequently the Employee has the same interest in truth that we do. I personally consider truth to be an ultimate end, and this is probably the opinion of most people, to a greater or lesser degree. In other words, most people consider truth a good thing, even apart from instrumental considerations. Nonetheless, all of us care about various things besides truth, and therefore we also occasionally trade truth for other things.

The Vice President has perhaps the least interest in truth. We could say that they too have some instrumental concern about truth. Thus for example the VP desires food, and this instrumentally requires true ideas about where food is to be found. Nonetheless, as I said in the original post, the VP is the least rational and coherent, and may easily fail to notice such a need. Thus the VP might desire the status resulting from winning an argument, so to speak, but also desire the similar status that results from ridiculing the person holding an opposing view. The frequent result is that a person believes the falsehood that ridiculing an opponent generally increases the chance that they will change their mind (e.g. see John Loftus’s attempt to justify ridicule.)

Given this account, we can raise several disturbing questions.

First, although we have said the Employee values truth in itself, can this really be true, rather than simply a mistaken belief on the part of the Employee? As I suggested in the original account, the Employee is in some way a consequence of the CEO and the VP. Consequently, if neither of these places intrinsic value on truth, how is it possible that the Employee does?

Second, even if the Employee sincerely places an intrinsic value on truth, how is this not a misplaced value? Again, if the Employee is something like a result of the others, what is good for the Employee should be what is good for the others, and thus if truth is not intrinsically good for the others, it should not be intrinsically good for the Employee.

In response to the first question, the Employee can indeed believe in the intrinsic value of truth, and of many other things to which the CEO and VP do not assign intrinsic value. This happens because as we are considering the model, there is a real division of labor, even if the Employee arises historically in a secondary manner. As I said in the other post, the Employee’s beliefs are our beliefs, and the Employee can believe anything that we believe. Furthermore, the Employee can really act on such beliefs about the goodness of truth or other things, even when the CEO and VP do not have the same values. The reason for this is the same as the reason that the CEO will often go along with the desires of the VP, even though the CEO places intrinsic value only on predictive accuracy. The linked post explains, in effect, why the CEO goes along with sex, even though only the VP really wants it. In a similar way, if the Employee believes that sex outside of marriage is immoral, the CEO often goes along with avoiding such sex, even though the CEO cares about predictive accuracy, not about sex or its avoidance. Of course, in this particular case, there is a good chance of conflict between the Employee and VP, and the CEO dislikes conflict, since it makes it harder to predict what the person overall will end up doing. And since the VP very rarely changes its mind in this case, the CEO will often end up encouraging the Employee to change their mind about the morality of such sex: thus one of the most frequent reasons why people abandon their religion is that it says that sex in some situations is wrong, but they still desire sex in those situations.

In response to the second, the Employee is not wrong to suppose that truth is intrinsically valuable. The argument against this would be that the human good is based on human flourishing, and (it is claimed) we do not need truth for such flourishing, since the CEO and VP do not care about truth in itself. The problem with this is that such flourishing requires that the Employee care about truth, and even the CEO needs the Employee to care in this way, for the sake of its own goal of predictive accuracy. Consider a real-life company: the employer does not necessarily care about whether the employee is being paid, considered in itself, but only insofar as it is instrumentally useful for convincing the employee to work for the employer. But the employer does care about whether the employee cares about being paid: if the employee does not care about being paid, they will not work for the employer.

Concern for truth in itself, apart from predictive accuracy, affects us when we consider things that cannot possibly affect our future experience: thus in previous cases I have discussed the likelihood that there are stars and planets outside the boundaries of the visible universe. This is probably true; but if I did not care about truth in itself, I might as well say that the universe is surrounded by purple elephants. I do not expect any experience to verify or falsify the claim, so why not make it? But now notice the problem for the CEO: the CEO needs to predict what the Employee is going to do, including what they will say and believe. This will instantly become extremely difficult if the Employee decides that they can say and believe whatever they like, without regard for truth, whenever the claim will not affect their experiences. So for its own goal of predictive accuracy, the CEO needs the Employee to value truth in itself, just as an ordinary employer needs their employee to value their salary.

In real life this situation can cause problems. The employer needs their employee to care about being paid, but if they care too much, they may constantly be asking for raises, or they may quit and go work for someone who will pay more. The employer does not necessarily like these situations. In a similar way, the CEO in our company may worry if the Employee insists too much on absolute truth, because as discussed elsewhere, it can lead to other situations with unpredictable behavior from the Employee, or to situations where there is a great deal of uncertainty about how society will respond to the Employee’s behavior.

Overall, this post perhaps does not say much in substance that we have not said elsewhere, but it may provide an additional perspective on these matters.