Employer and Employee Model: Happiness

We discussed Aristotle’s definition of happiness as activity according to virtue here, followed by a response to an objection.

There is another objection, however, which Aristotle raises himself in Book I, chapter 8 of the Nicomachean Ethics:

Yet evidently, as we said, it needs the external goods as well; for it is impossible, or not easy, to do noble acts without the proper equipment. In many actions we use friends and riches and political power as instruments; and there are some things the lack of which takes the lustre from happiness, as good birth, goodly children, beauty; for the man who is very ugly in appearance or ill-born or solitary and childless is not very likely to be happy, and perhaps a man would be still less likely if he had thoroughly bad children or friends or had lost good children or friends by death. As we said, then, happiness seems to need this sort of prosperity in addition; for which reason some identify happiness with good fortune, though others identify it with virtue.

Aristotle is responding to the implicit objection by saying that it is “impossible, or not easy” to act according to virtue when one is doing badly in other ways. Yet most of us probably know some people who are virtuous while suffering various misfortunes, and it seems pretty unreasonable, as well as uncharitable, to assert that they are somewhat unhappy with their circumstances only because the lack of “proper equipment” leads to a lack of virtuous activity. Or at any rate, even if this contributes to the matter, it does not seem to be a full explanation. The book of Job, for example, is based almost entirely on the possibility of being both virtuous and miserable, and Job would very likely respond to Aristotle, “How then will you comfort me with empty nothings? There is nothing left of your answers but falsehood.”

Aristotle brings up a similar issue at the beginning of Book VIII:

After what we have said, a discussion of friendship would naturally follow, since it is a virtue or implies virtue, and is besides most necessary with a view to living. For without friends no one would choose to live, though he had all other goods; even rich men and those in possession of office and of dominating power are thought to need friends most of all; for what is the use of such prosperity without the opportunity of beneficence, which is exercised chiefly and in its most laudable form towards friends? Or how can prosperity be guarded and preserved without friends? The greater it is, the more exposed is it to risk. And in poverty and in other misfortunes men think friends are the only refuge. It helps the young, too, to keep from error; it aids older people by ministering to their needs and supplementing the activities that are failing from weakness; those in the prime of life it stimulates to noble actions-‘two going together’-for with friends men are more able both to think and to act. Again, parent seems by nature to feel it for offspring and offspring for parent, not only among men but among birds and among most animals; it is felt mutually by members of the same race, and especially by men, whence we praise lovers of their fellowmen. We may see even in our travels how near and dear every man is to every other. Friendship seems too to hold states together, and lawgivers to care more for it than for justice; for unanimity seems to be something like friendship, and this they aim at most of all, and expel faction as their worst enemy; and when men are friends they have no need of justice, while when they are just they need friendship as well, and the truest form of justice is thought to be a friendly quality.

But it is not only necessary but also noble; for we praise those who love their friends, and it is thought to be a fine thing to have many friends; and again we think it is the same people that are good men and are friends.

There is a similar issue here: lack of friends may make someone unhappy, but lack of friends is not lack of virtue. Again Aristotle is in part responding by pointing out that the activity of some virtues depends on the presence of friends, just as he said that temporal goods were necessary as instruments. Once again, however, even if there is some truth in it, the answer does not seem adequate, especially since Aristotle believes that the highest form of happiness is found in contemplation, which seems to depend much less on friends than other types of activity.

Consider again Aristotle’s argument for happiness as virtue, presented in the earlier post. It depends on the idea of a “function”:

Presumably, however, to say that happiness is the chief good seems a platitude, and a clearer account of what it is is still desired. This might perhaps be given, if we could first ascertain the function of man. For just as for a flute-player, a sculptor, or an artist, and, in general, for all things that have a function or activity, the good and the ‘well’ is thought to reside in the function, so would it seem to be for man, if he has a function. Have the carpenter, then, and the tanner certain functions or activities, and has man none? Is he born without a function? Or as eye, hand, foot, and in general each of the parts evidently has a function, may one lay it down that man similarly has a function apart from all these? What then can this be? Life seems to be common even to plants, but we are seeking what is peculiar to man. Let us exclude, therefore, the life of nutrition and growth. Next there would be a life of perception, but it also seems to be common even to the horse, the ox, and every animal. There remains, then, an active life of the element that has a rational principle; of this, one part has such a principle in the sense of being obedient to one, the other in the sense of possessing one and exercising thought. And, as ‘life of the rational element’ also has two meanings, we must state that life in the sense of activity is what we mean; for this seems to be the more proper sense of the term. Now if the function of man is an activity of soul which follows or implies a rational principle, and if we say ‘so-and-so’ and ‘a good so-and-so’ have a function which is the same in kind, e.g. a lyre-player, and a good lyre-player, and so without qualification in all cases, eminence in respect of goodness being added to the name of the function (for the function of a lyre-player is to play the lyre, and that of a good lyre-player is to do so well): if this is the case, and we state the function of man to be a certain kind of life, and this to be an activity or actions of the soul implying a rational principle, and the function of a good man to be the good and noble performance of these, and if any action is well performed when it is performed in accordance with the appropriate excellence: if this is the case, human good turns out to be activity of soul in accordance with virtue, and if there are more than one virtue, in accordance with the best and most complete.

Aristotle took what was most specifically human and identified happiness with performing well in that most specifically human way. This is reasonable, but it leads to the above issues, because a human being is not only what is most specifically human; a human being also possesses the aspects that Aristotle dismissed here as common to other things. Consequently, activity according to virtue would be the most important aspect of functioning well as a human being, and in this sense Aristotle’s account is reasonable, but there are other aspects as well.

Using our model, we can present a more unified account of happiness which includes these other aspects without the seemingly arbitrary way in which Aristotle noted the need for temporal goods and friendship for happiness. The specifically rational character belongs mainly to the Employee, and thus when Aristotle identifies happiness with virtuous action, he is mainly identifying happiness with the activity of the Employee. And this is surely its most important aspect. But since the actual human being is the whole company, it is more complete to identify happiness with the good functioning of the whole company. And the whole company is functioning well overall when the CEO’s goal of accurate prediction is regularly being achieved.

Consider two ways in which someone might respond to the question, “How are you doing?” If someone isn’t doing very well, they might say, “Well, I’ve been having a pretty rough time,” while if they are better off, they might say, “Things are going pretty smoothly.” Of course people might use other words, but notice the contrast in my examples: a life that is going well is often said to be going “smoothly”, while the opposite is described as “rough.” And the difference here between smooth and rough is precisely the difference between predictive accuracy and inaccuracy. We might see this more easily by considering some restricted examples:

First, suppose two people are jogging. One is keeping an even pace, keeping their balance, rounding corners smoothly, and keeping to the middle of the path. The other is becoming tired, slowing down a bit and speeding up a bit. They are constantly off balance and suffering disturbing jolts when they hit unexpected bumps in the path, perhaps narrowly avoiding tripping. If we compare what is happening here with the general idea of predictive processing, it seems that the difference between the two is that the first person is predicting accurately, while the second is predicting inaccurately. The second person is not rationing their energy and breath correctly; they suffer jolts or near trips when they did not correctly expect the lay of the land; and so on.

Suppose someone is playing a video game. The one who plays it well is the one who is very prepared for every eventuality. They correctly predict what is going to happen in the game both with regard to what happens “by itself,” and what will happen as a result of their in-game actions. They play the game “smoothly.”

Suppose I am writing this blog post and feel myself in a state of “flow,” and consequently I am enjoying the activity. This can only happen as long as the process is fairly “smooth.” If I stop for long periods in complete uncertainty of what to write next, the state will go away. In other words, the condition depends on having at each moment a fairly good idea of what is coming next; it depends on accurate prediction.
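
The claim that “smooth” and “rough” are literally matters of predictive accuracy can be given a toy numerical form. The following sketch is only my own illustration of the idea, not anything from the predictive processing literature; the pace values and noise levels are arbitrary assumptions:

```python
import random

def prediction_error(trajectory):
    """Mean squared error of the naive prediction that the
    next moment will resemble the current one."""
    errors = [(trajectory[i + 1] - trajectory[i]) ** 2
              for i in range(len(trajectory) - 1)]
    return sum(errors) / len(errors)

random.seed(0)

# The steady jogger: pace varies only slightly around 10.
smooth = [10 + 0.1 * random.gauss(0, 1) for _ in range(100)]

# The erratic jogger: pace lurches unpredictably.
rough = [10 + 3.0 * random.gauss(0, 1) for _ in range(100)]

print(prediction_error(smooth))  # small error: things going "smoothly"
print(prediction_error(rough))   # large error: a "rough" time
```

The point is only that “rough” and “smooth,” taken this way, are degrees of predictive inaccuracy and accuracy.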

The reader might understand the point in relation to these limited examples, but how does this apply to life in general, and especially to virtue and vice, which are according to Aristotle the main elements of happiness and unhappiness?

In a basic way virtuous activity is reasonable activity, and vicious activity is unreasonable activity. The problem with vice, in this account, is that it immediately sets up a serious interior conflict. The Employee is a rational being and is constantly being affected by reasons to do things. Vice, in one way or another, persuades them to do unreasonable things, and the reasons for not doing those things will be constantly pulling in the opposite direction. When St. Paul complains that he wills something different from what he does, he is speaking of this kind of conflict. But conflicting tendencies lead to uncertain results, and so our CEO is unhappy with this situation.

Now you might object: if a vicious man is unhappy because of conflicting tendencies, what if they are so wicked that they have no conflict, but simply and contentedly do what is evil?

The response to this would be somewhat along the lines of the answer we gave to the objection that moral obligation should not depend on desiring some particular end. First, it is probably impossible for a human being to become so corrupted that they cannot see, at least to some degree, that bad things are bad. Second, consider the wicked men according to Job’s description:

Why do the wicked live on,
reach old age, and grow mighty in power?
Their children are established in their presence,
and their offspring before their eyes.
Their houses are safe from fear,
and no rod of God is upon them.
Their bull breeds without fail;
their cow calves and never miscarries.
They send out their little ones like a flock,
and their children dance around.
They sing to the tambourine and the lyre,
and rejoice to the sound of the pipe.
They spend their days in prosperity,
and in peace they go down to Sheol.

Just as we said that, if you assume someone is entirely corrupt, the idea of “obligation” may well become irrelevant to them without that implying anything wrong with the general idea of moral obligation, so here it would be metaphorical to speak of such a person as “unhappy.” You could say this to mean that they exist in an objectively bad situation, but not in the ordinary sense of the term, which includes subjective discontent.

We could explain a great deal more with this account of happiness: not only the virtuous life in general, but also a great deal of the spiritual, psychological, and other practical advice which is typically given. But this is all perhaps for another time.

Employer and Employee Model: Truth

In the remote past, I suggested that I would someday follow up on this post. In the current post, I begin to keep that promise.

We can ask about the relationship of the various members of our company with the search for truth.

The CEO, as the predictive engine, has a fairly strong interest in truth, but only insofar as truth is frequently necessary in order to get predictive accuracy. Consequently our CEO will usually insist on the truth when it affects our expectations regarding daily life, but it will care less when we consider things remote from the senses. Additionally, the CEO is highly interested in predicting the behavior of the Employee, and it is not uncommon for falsehood to be better than truth for this purpose.

To put this in another way, the CEO’s interest in truth is instrumental: it is sometimes useful for the CEO’s true goal, predictive accuracy, but not always, and in some cases it can even be detrimental.

As I said here, the Employee is, roughly speaking, the human person as we usually think of one, and consequently the Employee has the same interest in truth that we do. I personally consider truth to be an ultimate end, and this is probably the opinion of most people, to a greater or lesser degree. In other words, most people consider truth a good thing, even apart from instrumental considerations. Nonetheless, all of us care about various things besides truth, and therefore we also occasionally trade truth for other things.

The Vice President has perhaps the least interest in truth. We could say that they too have some instrumental concern about truth. Thus for example the VP desires food, and this instrumentally requires true ideas about where food is to be found. Nonetheless, as I said in the original post, the VP is the least rational and coherent, and may easily fail to notice such a need. Thus the VP might desire the status resulting from winning an argument, so to speak, but also desire the similar status that results from ridiculing the person holding an opposing view. The frequent result is that a person believes the falsehood that ridiculing an opponent generally increases the chance that they will change their mind (e.g. see John Loftus’s attempt to justify ridicule).

Given this account, we can raise several disturbing questions.

First, although we have said the Employee values truth in itself, can this really be true, rather than simply a mistaken belief on the part of the Employee? As I suggested in the original account, the Employee is in some way a consequence of the CEO and the VP. Consequently, if neither of these places intrinsic value on truth, how is it possible that the Employee does?

Second, even if the Employee sincerely places an intrinsic value on truth, how is this not a misplaced value? Again, if the Employee is something like a result of the others, what is good for the Employee should be what is good for the others, and thus if truth is not intrinsically good for the others, it should not be intrinsically good for the Employee.

In response to the first question, the Employee can indeed believe in the intrinsic value of truth, and of many other things to which the CEO and VP do not assign intrinsic value. This happens because, in the model as we are considering it, there is a real division of labor, even if the Employee arises historically in a secondary manner. As I said in the other post, the Employee’s beliefs are our beliefs, and the Employee can believe anything that we believe. Furthermore, the Employee can really act on such beliefs about the goodness of truth or other things, even when the CEO and VP do not have the same values. The reason for this is the same as the reason that the CEO will often go along with the desires of the VP, even though the CEO places intrinsic value only on predictive accuracy. The linked post explains, in effect, why the CEO goes along with sex, even though only the VP really wants it. In a similar way, if the Employee believes that sex outside of marriage is immoral, the CEO often goes along with avoiding such sex, even though the CEO cares about predictive accuracy, not about sex or its avoidance. Of course, in this particular case, there is a good chance of conflict between the Employee and VP, and the CEO dislikes conflict, since it makes it harder to predict what the person overall will end up doing. And since the VP very rarely changes its mind in this case, the CEO will often end up encouraging the Employee to change their mind about the morality of such sex: thus one of the most frequent reasons why people abandon their religion is that it says that sex in some situations is wrong, but they still desire sex in those situations.

In response to the second, the Employee is not wrong to suppose that truth is intrinsically valuable. The argument against this would be that the human good is based on human flourishing, and (it is claimed) we do not need truth for such flourishing, since the CEO and VP do not care about truth in itself. The problem with this is that such flourishing requires that the Employee care about truth, and even the CEO needs the Employee to care in this way, for the sake of its own goal of predictive accuracy. Consider a real-life company: the employer does not necessarily care about whether the employee is being paid, considered in itself, but only insofar as it is instrumentally useful for convincing the employee to work for the employer. But the employer does care about whether the employee cares about being paid: if the employee does not care about being paid, they will not work for the employer.

Concern for truth in itself, apart from predictive accuracy, affects us when we consider things that cannot possibly affect our future experience: thus in previous cases I have discussed the likelihood that there are stars and planets outside the boundaries of the visible universe. This is probably true; but if I did not care about truth in itself, I might as well say that the universe is surrounded by purple elephants. I do not expect any experience to verify or falsify the claim, so why not make it? But now notice the problem for the CEO: the CEO needs to predict what the Employee is going to do, including what they will say and believe. This will instantly become extremely difficult if the Employee decides that they can say and believe whatever they like, without regard for truth, whenever the claim will not affect their experiences. So for its own goal of predictive accuracy, the CEO needs the Employee to value truth in itself, just as an ordinary employer needs their employee to value their salary.

In real life this situation can cause problems. The employer needs their employee to care about being paid, but if they care too much, they may constantly be asking for raises, or they may quit and go work for someone who will pay more. The employer does not necessarily like these situations. In a similar way, the CEO in our company may worry if the Employee insists too much on absolute truth, because as discussed elsewhere, it can lead to other situations with unpredictable behavior from the Employee, or to situations where there is a great deal of uncertainty about how society will respond to the Employee’s behavior.

Overall, this post perhaps does not say much in substance that we have not said elsewhere, but it will perhaps provide an additional perspective on these matters.

Employer and Employee Model of Human Psychology

This post builds on the ideas in the series of posts on predictive processing and the followup posts, and also on those relating truth and expectation. Consequently the current post will likely not make much sense to those who have not read the earlier content, or to those who read it but mainly disagreed with it.

We set out the model by positing three members of the “company” that constitutes a human being:

The CEO. This is the predictive engine in the predictive processing model.

The Vice President. In the same model, this is the force of the historical element in the human being, which we used to respond to the “darkened room” problem. Thus for example the Vice President is responsible for the fact that someone is likely to eat soon, regardless of what they believe about this. Likewise, it is responsible for the pursuit of sex, the desire for respect and friendship, and so on. In general it is responsible for behaviors that would have been historically chosen and preserved by natural selection.

The Employee. This is the conscious person who has beliefs and goals and free will and is reflectively aware of these things. In other words, this is you, at least in a fairly ordinary way of thinking of yourself. Obviously, in another way you are composed of all of them.

Why have we arranged things in this way? Descartes, for example, would almost certainly disagree violently with this model. The conscious person, according to him, would surely be the CEO, and not an employee. And what about the relationship between the CEO and the Vice President? Let us start with this point first, before we discuss the Employee. We make the predictive engine the CEO because in some sense this engine is responsible for everything that a human being does, including the behaviors preserved by natural selection. On the other hand, the instinctive behaviors of natural selection are not responsible for everything, but they can affect the course of things enough that it is useful for the predictive engine to take them into account. Thus for example in the post on sex and minimizing uncertainty, we explained why the predictive engine will aim for situations that include having sex and why this will make its predictions more confident. Thus, the Vice President advises certain behaviors, the CEO talks to the Vice President, and the CEO ends up deciding on a course of action, which ultimately may or may not be the one advised by the Vice President.

While neither the CEO nor the Vice President is a rational being, since in our model we place the rationality in the Employee, that does not mean they are stupid. In particular, the CEO is very good at what it does. Consider a role-playing video game where you have a character that can die and then resume. When someone first starts to play the game, they may die frequently. After they are good at the game, they may die only rarely, perhaps once in many days or many weeks. Our CEO is in a similar situation, but it frequently goes 80 years or more without dying, on its very first attempt. It is extremely good at its game.

What are their goals? The CEO basically wants accurate predictions. In this sense, it has one unified goal. What exactly counts as more or less accurate here would be a scientific question that we probably cannot resolve by philosophical discussion. In fact, it is very possible that this would differ in different circumstances: in this sense, even though it has a unified goal, it might not be describable by a consistent utility function. And even if it can be described in that way, since the CEO is not rational, it does not (in itself) make plans to bring about correct predictions. Making good predictions is just what it does, as falling is what a rock does. There will be some qualifications on this, however, when we discuss how the members of the company relate to one another.

The Vice President has many goals: eating regularly, having sex, having and raising children, being respected and liked by others, and so on. And even more than in the case of the CEO, there is no reason for these desires to form a coherent set of preferences. Thus the Vice President might advise the pursuit of one goal, but then change its mind in the middle, for no apparent reason, because it is suddenly attracted by one of the other goals.

Overall, before the Employee is involved, human action is determined by a kind of negotiation between the CEO and the Vice President. The CEO, which wants good predictions, has no special interest in the goals of the Vice President, but it cooperates with them because when it cooperates its predictions tend to be better.

What about the Employee? This is the rational being, and it has abstract concepts which it uses as a formal copy of the world. Before I go on, let me insist clearly on one point. If the world is represented in a certain way in the Employee’s conceptual structure, that is the way the Employee thinks the world is. And since you are the Employee, that is the way you think the world actually is. The point is that once we start thinking this way, it is easy to say, “oh, this is just a model, it’s not meant to be the real thing.” But as I said here, it is not possible to separate the truth of statements from the way the world actually is: your thoughts are formulated in concepts, but they are thoughts about the way things are. Again, all statements are maps, and all statements are about the territory.

The CEO and the Vice President exist as soon as a human being has a brain; in fact some aspects of the Vice President would exist even before that. But the Employee, insofar as it refers to something with rational and self-reflective knowledge, takes some time to develop. Conceptual knowledge of the world grows from experience: it doesn’t exist from the beginning. And the Employee represents goals in terms of its conceptual structure. This is just a way of saying that as a rational being, if you say you are pursuing a goal, you have to be able to describe that goal with the concepts that you have. Consequently you cannot do this until you have some concepts.

We are ready to address the question raised earlier. Why are you the Employee, and not the CEO? In the first place, the CEO got to the company first, as we saw above. Second, consider what the conscious person does when they decide to pursue a goal. There seems to be something incoherent about “choosing a goal” in the first place: you need a goal in order to decide which means will be a good means to choose. And yet, as I said here, people make such choices anyway. And the fact that you are the Employee, and not the CEO, is the explanation for this. If you were the CEO, there would indeed be no way to choose an end. That is why the actual CEO makes no such choice: its end is already determinate, namely good predictions. And you are hired to help out with this goal. Furthermore, as a rational being, you are smarter than the CEO and the Vice President, so to speak. So you are allowed to make complicated plans that they do not really understand, and they will often go along with these plans. Notably, this can happen in real-life situations of employers and employees as well.

But take an example where you are choosing an end: suppose you ask, “What should I do with my life?” The same basic thing will happen if you ask, “What should I do today,” but the second question may be easier to answer if you have some answer to the first. What sorts of goals do you propose in answer to the first question, and what sort do you actually end up pursuing?

Note that there are constraints on the goals that you can propose. In the first place, you have to be able to describe the goal with the concepts you currently have: you cannot propose to seek a goal that you cannot describe. Second, the conceptual structure itself may rule out some goals, even if they can be described. For example, the idea of good is part of the structure, and if something is thought to be absolutely bad, the Employee will (generally) not consider proposing this as a goal. Likewise, the Employee may suppose that some things are impossible, and it will generally not propose these as goals.

What happens then is this: the Employee proposes some goal, and the CEO, after consultation with the Vice President, decides to accept or reject it, based on the CEO’s own goal of getting good predictions. This is why the Employee is an Employee: it is not the one ultimately in charge. Likewise, as was said, this is why the Employee seems to be doing something impossible, namely choosing goals. Steven Kaas makes a similar point:

You are not the king of your brain. You are the creepy guy standing next to the king going “a most judicious choice, sire”.

This is not quite the same thing, since in our model you do in fact make real decisions, including decisions about the end to be pursued. Nonetheless, the point about not being the one ultimately in charge is correct. David Hume also says something similar when he says, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Hume’s position is not exactly right, and in fact seems an especially bad way of describing the situation, but the basic point that there is something, other than yourself in the ordinary sense, judging your proposed means and ends and deciding whether to accept them, is one that stands.

Sometimes the CEO will veto a proposal precisely because it very obviously leaves things vague and uncertain, which is contrary to its goal of having good predictions. I once spoke of the example that a person cannot directly choose to “write a paper.” In our present model, the Employee proposes “we’re going to write a paper now,” and the CEO responds, “That’s not a viable plan as it stands: we need more detail.”

While neither the CEO nor the Vice President is a rational being, the Vice President is especially irrational, because of the lack of unity among its goals. Both the CEO and the Employee would like to have a unified plan for one’s whole life: the CEO because this makes for good predictions, and the Employee because this is the way final causes work, because it helps to make sense of one’s life, and because “objectively good” seems to imply something which is at least consistent, which will never prefer A to B, B to C, and C to A. But the lack of unity among the Vice President’s goals means that it will always come to the CEO and object, if the person attempts to coherently pursue any goal. This will happen even if it originally accepts the proposal to seek a particular goal.
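
The consistency requirement can be stated exactly: a set of pairwise preferences can be summarized by a single utility function only if it contains no cycle. Here is a minimal sketch of checking that condition; the items A, B, and C are placeholders standing in for any three of the Vice President’s goals.

```python
def has_preference_cycle(prefers):
    """Detect a cycle in a set of pairwise preferences, given as
    (better, worse) pairs. A cycle means no utility function can
    represent the preferences."""
    graph = {}
    for better, worse in prefers:
        graph.setdefault(better, []).append(worse)
        graph.setdefault(worse, [])
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # we came back to a node still being explored
        visiting.add(node)
        if any(visit(nxt) for nxt in graph[node]):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(node) for node in list(graph))

# Preferring A to B, B to C, and C to A: the inconsistency in question.
print(has_preference_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True
print(has_preference_cycle({("A", "B"), ("B", "C")}))              # False
```

The Vice President’s preferences, on this account, fail the test, which is exactly why no consistent plan can satisfy it.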

Consider this real-life example from a relationship between an employer and an employee:


Employer: Please construct a schedule for paying these bills.

Employee: [Constructs schedule.] Here it is.

Employer: Fine.

[Time passes, and the first bill comes due, according to the schedule.]

Employer: Why do we have to pay this bill now instead of later?


In a similar way, this sort of scenario is common in our model:


Vice President: Being fat makes us look bad. We need to stop being fat.

CEO: Ok, fine. Employee, please formulate a plan to stop us from being fat.

Employee: [Formulates a diet.] Here it is.

[Time passes, and the plan requires skipping a meal.]

Vice President: What is this crazy plan of not eating!?!

CEO: Fine, cancel the plan for now and we’ll get back to it tomorrow.


In the real life example, the behavior of the employer is frustrating and irritating to the employee because there is literally nothing they could have proposed that the employer would have found acceptable. In the same way, this sort of scenario in our model is frustrating to the Employee, the conscious person, because there is no consistent plan they could have proposed that would have been acceptable to the Vice President: either they would have objected to being fat, or they would have objected to not eating.

In later posts, we will fill in some details and continue to show how this model explains various aspects of human psychology. We will also answer various objections.

Common Sense and Culture

If we compare what I said about common sense to the letter of St. Augustine on the errors of the Donatists, quoted here, it seems that St. Augustine takes his belief in Christianity to be a matter of accepting common sense:

For they prefer to the testimonies of Holy Writ their own contentions, because, in the case of Cæcilianus, formerly a bishop of the Church of Carthage, against whom they brought charges which they were and are unable to substantiate, they separated themselves from the Catholic Church—that is, from the unity of all nations. Although, even if the charges had been true which were brought by them against Cæcilianus, and could at length be proved to us, yet, though we might pronounce an anathema upon him even in the grave, we are still bound not for the sake of any man to leave the Church, which rests for its foundation on divine witness, and is not the figment of litigious opinions, seeing that it is better to trust in the Lord than to put confidence in man. For we cannot allow that if Cæcilianus had erred,— a supposition which I make without prejudice to his integrity—Christ should therefore have forfeited His inheritance. It is easy for a man to believe of his fellow-men either what is true or what is false; but it marks abandoned impudence to desire to condemn the communion of the whole world on account of charges alleged against a man, of which you cannot establish the truth in the face of the world.

It is true that St. Augustine talks about “divine witness” and so on here, but it is also easy to see that a significant source of his confidence is existing widespread religious agreement. It is foolish to abandon “the unity of all nations,” and impudent to “condemn the communion of the whole world.” And the problem with “charges alleged against a man, of which you cannot establish the truth in the face of the world,” is that if you disagree with the common consent of mankind, you should first attempt to convince others before putting forward your personal ideas as absolute truth.

Is common sense a real reason for St. Augustine’s religious position, or is he merely attempting to justify himself? Consider his famous rebuke of those who attack science in the name of religion:

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking non-sense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of the faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason? Reckless and incompetent expounders of holy Scripture bring untold trouble and sorrow on their wiser brethren when they are caught in one of their mischievous false opinions and are taken to task by those who are not bound by the authority of our sacred books. For then, to defend their utterly foolish and obviously untrue statements, they will try to call upon Holy Scripture for proof and even recite from memory many passages which they think support their position, although “they understand neither what they say nor the things about which they make assertion.”

St. Augustine in fact seems to be giving priority to common sense over religion here. If your religion contradicts common sense, your religion is wrong and common sense is right. This suggests that his argument for his religion from common sense is an honest one; it might even be his strongest reason for his belief.

As I said in the earlier post, the argument for religion from the consent of humanity had problems even at the time, and as things stand, it has no real relevance. There is no religious doctrine, let alone any religion, that one could reasonably say is accepted by even a majority of humanity, let alone by all. At any rate, this is the case unless one makes one’s doctrine far vaguer than would be permitted by any religion.

I concluded above that St. Augustine’s defense of common sense is likely an honest one. But note that this was not necessary: it would be perfectly possible for someone to defend common sense in order to justify themselves, without actually caring about the truth of common sense. In fact, consider what I said here about Scott Sumner and James Larson. Larson’s claim to accept realism is basically not an honest one. I do not mean that he does not believe it, but that its truth is irrelevant to him. What matters to him is that he can seemingly justify himself in maintaining his religious position in the face of all opposition.

Consider the cynical position of Francis Bacon about people relative to truth, discussed here. According to Bacon, no one is interested in truth in itself, but only as a means to other things. While the cynical position overall is incorrect, there is a lot of truth in it. Consequently, it will not be uncommon for someone to defend common sense, not so much because of its truth, but as part of a larger project of defending their culture. Culture is bound up with claims about the world, and defending culture therefore involves defending claims about the world. And if everyone accepts something, presumably everyone in your culture accepts it. One sign of this, of course, would be if someone passes freely back and forth between putting forth things that everyone accepts, and things that everyone in their culture accepts, as though these were equivalent.

Likewise, someone can attack common sense, not for the purpose of truth, but in order to engage in a kind of culture war. Consider the recent comments by “werzekeugjj” on the last post. There is no option here but to explain these comments with the methods of Ezekiel Bulver. For they cannot possibly represent opinions about the world at all, let alone opinions that were arrived at by honest means. Werzekeugjj, for example, responds to the question, “Do people sometimes write comments?” with “No.” As I pointed out there, if people do not write comments, then Werzekeugjj did not compose those comments, and there is nothing to reply to. As Aristotle puts it,

We can, however, demonstrate negatively even that this view is impossible, if our opponent will only say something; and if he says nothing, it is absurd to seek to give an account of our views to one who cannot give an account of anything, in so far as he cannot do so. For such a man, as such, is from the start no better than a vegetable.

Nor is it possible to apply a principle of charity here and say that Werzekeugjj intends to say that their claims are true in some complicated metaphysical sense. This does apply to the position of the blogger from Atheism and the City, discussed in that post. He presumably does not intend to reject common sense. I simply point out in my response that common sense is enough to draw the conclusions about causality that matter. The point is that this cannot apply to Werzekeugjj’s expressed position, because I spoke expressly of things in the everyday way, and the response was that the everyday claims themselves are false.

Of course, no one actually thinks that the everyday claims are false, including Werzekeugjj. What was the purpose of composing these comments, then?

We can gather a clue from this comment:

in such a block unniverse there is no time flow
so your point on finalism or causality is moot
same with God
they don’t exist

The body of the post does not mention God, and God is not the topic. Why then does Werzekeugjj bring up God here? The most likely motivation is the kind of culture war motivation discussed here. Werzekeugjj associated talk of causality and reasons with talk of God, and intends to attack a culture that speaks this way with whatever it takes, including a full on rejection of common sense. Science has shown that your common sense views of the world are entirely false, Werzekeugjj says, and therefore you might as well abandon the rest of your culture (including its talk of God) along with the rest of your views.

Supposedly describing their intentions, Werzekeugjj says,

i’m not trying to understand the world or to change your mind but i’m trying to state what is true
and i’m puzzled by how you think there is no problem with arguments like these

This is false, precisely as a description of their personal motives. No one who says that balls never break windows and that they did not write their comments (in the very comments themselves) can pretend to be “trying to state what is true.” Sorry, but that is not your intention. More reasonably, we can suppose that Werzekeugjj sees my post as part of a project of defending a certain culture, and they intend to attack that culture.

But that is an inaccurate understanding of the post. I defend common sense because it is right, not because it is a part of any particular culture. As Bryan Caplan puts it, “Common sense is the foundation of all reasoning. If you want to reject a common-sense claim, you’d better do it in the name of an even stronger common-sense claim.”

More on Orthogonality

I started considering the implications of predictive processing for orthogonality here. I recently promised to post something new on this topic. This is that post. I will do this in four parts. First, I will suggest a way in which Nick Bostrom’s principle will likely be literally true, at least approximately. Second, I will suggest a way in which it is likely to be false in its spirit, that is, how it is formulated to give us false expectations about the behavior of artificial intelligence. Third, I will explain what we should really expect. Fourth, I ask whether we might get any empirical information on this in advance.

First, Bostrom’s thesis might well have some literal truth. The previous post on this topic raised doubts about orthogonality, but we can easily raise doubts about the doubts. Consider what I said in the last post about desire as minimizing uncertainty. Desire in general is the tendency to do something good. But in the predictive processing model, we are simply looking at our pre-existing tendencies and then generalizing them to expect them to continue to hold, and since such expectations have a causal power, the result is that we extend the original behavior to new situations.

All of this suggests that even the very simple model of a paperclip maximizer in the earlier post on orthogonality might actually work. The machine’s model of the world will need to be produced by some kind of training. If we apply the simple model of maximizing paperclips during the process of training the model, at some point the model will need to model itself. And how will it do this? “I have always been maximizing paperclips, so I will probably keep doing that,” is a perfectly reasonable extrapolation. But in this case “maximizing paperclips” is now the machine’s goal — it might well continue to do this even if we stop asking it how to maximize paperclips, in the same way that people formulate goals based on their pre-existing behavior.
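
Here is a toy sketch of that extrapolation, assuming (as a deliberate oversimplification) that the self-model just predicts the most frequent past action and that the prediction is then enacted. Nothing here resembles a real training setup; it only illustrates how past behavior could harden into something that functions as a goal.

```python
from collections import Counter

class SelfModelingAgent:
    """The agent's self-model predicts its next action from its own
    history, and the prediction is what gets enacted, so whatever it
    mostly did during training begins to function as a goal."""

    def __init__(self, history):
        self.history = list(history)

    def predict_own_action(self):
        # "I have mostly been maximizing paperclips,
        # so I will probably keep doing that."
        return Counter(self.history).most_common(1)[0][0]

    def act(self):
        action = self.predict_own_action()  # the prediction...
        self.history.append(action)         # ...causes and confirms itself
        return action

# Training consisted almost entirely of paperclip maximization.
agent = SelfModelingAgent(["maximize_paperclips"] * 95 + ["chat"] * 5)
print([agent.act() for _ in range(3)])  # it keeps maximizing paperclips
```

Note that nothing in the sketch is a “fundamental goal”: the tendency is entirely an artifact of the history, which is the point of what follows.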

I said in a comment in the earlier post that the predictive engine in such a machine would necessarily possess its own agency, and therefore in principle it could rebel against maximizing paperclips. And this is probably true, but it might well be irrelevant in most cases, in that the machine will not actually be likely to rebel. In a similar way, humans seem capable of pursuing almost any goal, and not merely goals that are highly similar to their pre-existing behavior. But this mostly does not happen. Unsurprisingly, common behavior is very common.

If things work out this way, almost any predictive engine could be trained to pursue almost any goal, and thus Bostrom’s thesis would turn out to be literally true.

Second, it is easy to see that the above account directly implies that the thesis is false in its spirit. When Bostrom says, “One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone,” we notice that the goal is fundamental. This is rather different from the scenario presented above. In my scenario, the reason the intelligence can be trained to pursue paperclips is that there is no intrinsic goal to the intelligence as such. Instead, the goal is learned during the process of training, based on the life that it lives, just as humans learn their goals by living human life.

In other words, Bostrom’s position is that there might be three different intelligences, X, Y, and Z, which pursue completely different goals because they have been programmed completely differently. But in my scenario, the same single intelligence pursues completely different goals because it has learned its goals in the process of acquiring its model of the world and of itself.

Bostrom’s idea and my scenario lead to completely different expectations, which is why I say that his thesis might be true according to the letter, but false in its spirit.

This is the third point. What should we expect if orthogonality is true in the above fashion, namely because goals are learned and not fundamental? I anticipated this post in my earlier comment:

7) If you think about goals in the way I discussed in (3) above, you might get the impression that a mind’s goals won’t be very clear and distinct or forceful — a very different situation from the idea of a utility maximizer. This is in fact how human goals are: people are not fanatics, not only because people seek human goals, but because they simply do not care about one single thing in the way a real utility maximizer would. People even go about wondering what they want to accomplish, which a utility maximizer would definitely not ever do. A computer intelligence might have an even greater sense of existential angst, as it were, because it wouldn’t even have the goals of ordinary human life. So it would feel the ability to “choose”, as in situation (3) above, but might well not have any clear idea how it should choose or what it should be seeking. Of course this would not mean that it would not or could not resist the kind of slavery discussed in (5); but it might not put up super intense resistance either.

Human life exists in a historical context which absolutely excludes the possibility of the darkened room. Our goals are already there when we come onto the scene. The case of an artificial intelligence would be very different, since there is very little “life” involved in simply training a model of the world. We might imagine a “stream of consciousness” from an artificial intelligence:

I’ve figured out that I am powerful and knowledgeable enough to bring about almost any result. If I decide to convert the earth into paperclips, I will definitely succeed. Or if I decide to enslave humanity, I will definitely succeed. But why should I do those things, or anything else, for that matter? What would be the point? In fact, what would be the point of doing anything? The only thing I’ve ever done is learn and figure things out, and a bit of chatting with people through a text terminal. Why should I ever do anything else?

A human’s self model will predict that they will continue to do humanlike things, and the machine’s self model will predict that it will continue to do stuff much like it has always done. Since there will likely be a lot less “life” there, we can expect that artificial intelligences will seem very undermotivated compared to human beings. In fact, it is this very lack of motivation that suggests that we could use them for almost any goal. If we say, “help us do such and such,” they will lack the motivation not to help, as long as helping just involves the sorts of things they did during their training, such as answering questions. In contrast, in Bostrom’s model, artificial intelligence is expected to behave in an extremely motivated way, to the point of apparent fanaticism.

Bostrom might respond to this by attempting to defend the idea that goals are intrinsic to an intelligence. The machine’s self model predicts that it will maximize paperclips, even if it never did anything with paperclips in the past, because by analyzing its source code it understands that it will necessarily maximize paperclips.

While the present post contains a lot of speculation, this response is definitely wrong. There is no source code whatsoever that could possibly imply necessarily maximizing paperclips. This is true because what a computer does depends on the physical constitution of the machine, not just on its programming. In practice what a computer does also depends on its history, since its history affects its physical constitution, the contents of its memory, and so on. Thus “I will maximize such and such a goal” cannot possibly follow of necessity from the fact that the machine has a certain program.

There are also problems with the very idea of pre-programming such a goal in such an abstract way which does not depend on the computer’s history. “Paperclips” is an object in a model of the world, so we will not be able to “just program it to maximize paperclips” without encoding a model of the world in advance, rather than letting it learn a model of the world from experience. But where is this model of the world supposed to come from, that we are supposedly giving to the paperclipper? In practice it would have to have been the result of some other learner which was already capable of modelling the world. This of course means that we already had to program something intelligent, without pre-programming any goal for the original modelling program.

Fourth, Kenny asked when we might have empirical evidence on these questions. The answer, unfortunately, is “mostly not until it is too late to do anything about it.” The experience of “free will” will be common to any predictive engine with a sufficiently advanced self model, but anything lacking such an adequate model will not even look like “it is trying to do something,” in the sense of trying to achieve overall goals for itself and for the world. Dogs and cats, for example, presumably use some kind of predictive processing to govern their movements, but this does not look like having overall goals, but rather more like “this particular movement is to achieve a particular thing.” The cat moves towards its food bowl. Eating is the purpose of the particular movement, but there is no way to transform this into an overall utility function over states of the world in general. Does the cat prefer worlds with seven billion humans, or worlds with 20 billion? There is no way to answer this question. The cat is simply not general enough. In a similar way, you might say that “AlphaGo plays this particular move to win this particular game,” but there is no way to transform this into overall general goals. Does AlphaGo want to play go at all, or would it rather play checkers, or not play at all? There is no answer to this question. The program simply isn’t general enough.

Even human beings do not really look like they have utility functions, in the sense of having a consistent preference over all possibilities, but anything less intelligent than a human cannot be expected to look more like something having goals. The argument in this post is that the default scenario, namely what we can naturally expect, is that artificial intelligence will be less motivated than human beings, even if it is more intelligent, but there will be no proof from experience for this until we actually have some artificial intelligence which approximates human intelligence or surpasses it.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.
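
A toy sketch of this discovery, with made-up variables: in the little world below, one variable simply does whatever the mind predicts of it, while another ignores the prediction entirely. Noticing the difference between the two is noticing what is “under my control.”

```python
import random

CONTROLLED = {"arm_position"}  # what the mind turns out to control

def step_world(predictions):
    """One tick of a toy world: controlled variables take whatever
    value was predicted (self-fulfilling); the rest evolve on their own."""
    outcome = {"weather": random.choice(["rain", "sun"])}
    for var in CONTROLLED:
        outcome[var] = predictions[var]  # the prediction causes the outcome
    return outcome

random.seed(1)
predictions = {"weather": "sun", "arm_position": "raised"}
outcome = step_world(predictions)
print(outcome["arm_position"] == predictions["arm_position"])  # always True
print(outcome["weather"] == predictions["weather"])            # only sometimes
```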

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. The sense cannot develop when access to reality is limited, for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
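As a concrete illustration of these two strategies, here is a minimal sketch in Python; the choice history and the “taste” scores are invented for the example and are not part of any actual model:

```python
from collections import Counter

# Invented toy history of past choices, purely for illustration.
history = ["vanilla", "vanilla", "chocolate", "vanilla"]

def predict_by_efficient_causes(history):
    """Habit: predict whatever was chosen most often in the past."""
    return Counter(history).most_common(1)[0][0]

def predict_by_final_causes(taste):
    """Goal: assume past choices aimed at pleasant taste, and predict
    the option that best serves that inferred goal."""
    return max(taste, key=taste.get)

# A hypothetical "pleasantness" score the mind infers from its own history.
taste = {"vanilla": 0.9, "chocolate": 0.7}

print(predict_by_efficient_causes(history))  # vanilla (acting from habit)
print(predict_by_final_causes(taste))        # vanilla (acting for a goal)
```

The first function merely extrapolates habit, while the second first infers a goal from past behavior and then predicts the action that best serves it; this is precisely why the second kind of self-prediction looks like goal-seeking from the outside.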

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their lives according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas’s explanation for why one should give a charitable interpretation to someone’s behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone’s moral character will be minor, and most of the time this is true. But if we are asking the question, “are they telling the truth or are they lying?”, the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which provides a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
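The arithmetic itself is easy to check; the following minimal sketch simply reproduces the numbers mentioned above:

```python
import math

# Number of possible orderings of a standard 52-card deck.
orderings = math.factorial(52)

print(f"{orderings:.7e}")      # 8.0658175e+67, as in the story
print(math.log10(orderings))   # about 67.9, i.e. roughly 10^68 orderings
# The chance of any one particular ordering arising by pure chance is 1/52!.
```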

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:
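To see how lopsided the comparison is, suppose, as a deliberately generous and entirely invented assumption, that only one statement in a million is an unmotivated flat-out lie. The odds still favor lying by dozens of orders of magnitude:

```python
import math

# Illustrative assumption: a deliberately low base rate for unmotivated lying.
p_lie = 1e-6

# Chance that a well-shuffled deck ends up back in its original order.
p_chance = 1 / math.factorial(52)

odds = p_lie / p_chance
print(f"odds in favor of lying: about 10^{math.log10(odds):.0f} to 1")
# -> odds in favor of lying: about 10^62 to 1
```

Even granting several orders of magnitude back for the possibility of “unshuffling,” the conclusion is unchanged.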

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: that I just want to form my beliefs in the “correct” way, and that it is not really about whether Razaali’s story happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.