“Moral” Responsibility

In a passage quoted here, Jerry Coyne objected to the “moral” in “moral responsibility”:

To me, that means that the concept of “moral responsibility” is meaningless, for that implies an ability to choose freely. Nevertheless, we should still retain the concept of responsibility, meaning “an identifiable person did this or that good or bad action”. And, of course, we can sanction or praise people who were responsible in this sense, for such blame and praise can not only reinforce good behavior but is salubrious for society.

Suppose someone completely insane happens to kill another person, under the mistaken belief that they are doing something entirely different. In such a case, “an identifiable person did this or that good or bad action,” and yet we do not say they are responsible, much less blame them. We may subject them to physical restraint, but we no more blame them than we blame the weather for the deaths it occasionally inflicts on people. In other words, Coyne’s definition does not even work for “responsibility,” let alone moral responsibility.

Moral action has a specific meaning: something that is done, considered not merely as an action in itself, but in comparison with the good proposed by human reason. Consequently we have moral action only when something is voluntarily done by a human being for a reason, or (if without a reason) with the voluntary omission of the consideration of reasons. In exactly the same situations we have moral responsibility: namely, someone voluntarily did something good, or someone voluntarily did something bad.

Praise and blame are added precisely because people are acting for reasons, and given that people tend to like praise and dislike blame, these elements, if rightly applied, will make good things better, and thus more likely to be pursued, and bad things worse, and thus more likely to be avoided. As an aside, this also suggests occasions when it is a bad idea to blame someone for something bad: namely, when blame is unlikely to reduce the bad activity, or likely to reduce it only slightly, since in that case you are simply making things worse, period.

Stop, Coyne and others will say. Even if we agree with the point about praise and blame, we do not agree about moral responsibility, unless determinism is false. And nothing in the above paragraphs even refers to determinism or its opposite, and thus the above cannot be a full account of moral responsibility.

The above is, in fact, a basically complete account of moral responsibility. Although determinism is false, as was said in the linked post, its falsity has nothing to do with the matter one way or another.

The confusion about this results from a confusion between an action as a being in itself, and an action as moral, namely as considered by reason. This distinction was discussed here while considering what it means to say that some kinds of actions are always wrong. It is quite true that considered as a moral action, it would be wrong to blame someone if they did not have any other option. But that situation would be a situation where no reasonable person would act otherwise. And you do not blame someone for doing something that all reasonable people would do. You blame them in a situation where reasonable people would do otherwise: there are reasons for doing something different, but they did not act on those reasons.

But it is not the case that blame or moral responsibility depends on whether or not there is a physically possible alternative, because to consider physical alternatives is simply to speak of the action as a being in itself, and not as a moral act at all.

 

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.’”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.
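To make this concrete, here is a minimal sketch (my illustration, not a model from these posts) of a predictor facing two channels: one it cannot influence, and one that simply copies whatever it predicts, that is, a self-fulfilling channel. The channel names and the error-counting loop are invented for the example.

```python
import random

# A "bunker mind" that only tries to predict two channels of its world.
# The "noise" channel is outside its control; the "light" channel happens
# to copy whatever the mind predicts, i.e. predictions about it are
# self-fulfilling.

def world_step(prediction):
    return {
        "noise": random.choice([0, 1]),   # independent of the mind
        "light": prediction["light"],     # self-fulfilling channel
    }

errors = {"noise": 0, "light": 0}
for _ in range(1000):
    prediction = {"noise": random.choice([0, 1]),
                  "light": random.choice([0, 1])}
    outcome = world_step(prediction)
    for channel in errors:
        errors[channel] += int(prediction[channel] != outcome[channel])

print(errors)  # "noise" errors around 500; "light" errors always 0
```

Once the predictor notices that the “light” channel never produces an error no matter what it predicts, it is in exactly the position described above: whatever it “predicts” there will happen, so it must now ask what it should predict there, that is, what it should do.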

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world to learn about itself and its own effects on the world. The sense cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good the agent is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”
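The two ways of self-prediction can be put schematically as follows (a sketch only; the flavors and the pleasantness scores are invented for illustration):

```python
from collections import Counter

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla", "vanilla"]

def predict_by_habit(history):
    # First way: efficient causes, by induction over past behavior.
    return Counter(history).most_common(1)[0][0]

pleasantness = {"vanilla": 0.9, "chocolate": 0.7}

def predict_by_goal(options, value):
    # Second way: final causes, by an inferred goal (here: pleasant
    # taste), predicting the action that best serves it.
    return max(options, key=lambda option: value[option])

print(predict_by_habit(past_choices))                           # vanilla
print(predict_by_goal(["vanilla", "chocolate"], pleasantness))  # vanilla
```

Both sketches output “vanilla,” but for different reasons: the first extrapolates a habit from past frequencies, while the second infers a goal and predicts the action that serves it best.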

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. Both make the mind’s task easier: we need purpose and narrative in order to know what we are going to do. We can also see why it seems possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, so that it would be possible to understand them in relation to one concrete end or another, even if we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by what discussion of the topic there is, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that took in nourishment and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.
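A minimal sketch of the point (my gloss on Clark’s response, not his model): if prediction error is scored against standing, creature-defining expectations as well as against sensory input, the darkened room is no longer the error-minimizing option. The decay rate and the error figures are arbitrary numbers chosen for illustration.

```python
EXPECTED_NOURISHMENT = 1.0

def total_error(sensory_error, nourishment):
    # Error is scored against the standing expectation as well as the senses.
    interoceptive_error = abs(EXPECTED_NOURISHMENT - nourishment)
    return sensory_error + interoceptive_error

# Darkened room: sensory input is perfectly predictable (error 0.0),
# but nourishment decays because the creature never eats.
nourishment = 1.0
darkened_room_error = 0.0
for _ in range(10):
    nourishment -= 0.1
    darkened_room_error += total_error(0.0, nourishment)

# Engaged creature: the world is harder to predict (0.2 sensory error per
# step, an arbitrary figure), but eating keeps nourishment on target.
engaged_error = sum(total_error(0.2, 1.0) for _ in range(10))

print(round(darkened_room_error, 2))  # 5.5: the darkened room loses
print(round(engaged_error, 2))        # 2.0
```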

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking, which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful: impossible in that everyone has such motivated beliefs, and unhelpful in that such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus without realizing to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
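The probabilistic point here is simply the conjunction rule: for any claims A and B, P(A and B) = P(A) × P(B | A) ≤ P(A). So “my life has a positive meaning,” taken alone, is at least as probable as that same claim conjoined with any particular explanation of it.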

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, a belief which is itself both false and motivated, you are like someone who never looks at their account: you will not notice how much you are losing.

Blaming the Prophet

Consider the fifth argument in the last post. Should we blame a person for holding a true belief? At this point it should not be too difficult to see that the truth of the belief is not the point. Elsewhere we have discussed a situation in which one cannot possibly hold a true belief, because whatever belief one holds on the matter will cause itself to be false. In a similar way, although with a different sort of causality, the problem with the person’s belief that he will kill someone tomorrow is not that it is true, but that it causes itself to be true. If the person did not expect to kill someone tomorrow, he would not take a knife with him to the meeting, and so on, and thus would not kill anyone. So just as in the other situation it is not a question of holding a true belief or a false belief, but of which false belief one will hold, here it is not a question of holding a true belief or a false belief, but of which true belief one will hold: one that includes someone getting killed, or one that excludes it. Truth will be there either way, and is not the reason for praise or blame: the person is blamed for the desire to kill someone, and praised (or at least not blamed) for wishing to avoid this. This simply shows the need for the qualifications added in the previous post: if the person’s belief is voluntary, and held for the sake of coming true, it is very evident why blame is needed.

We have not specifically addressed the fourth argument, but this is perhaps unnecessary given the above response to the fifth. This blog in general has advocated the idea of voluntary beliefs, and in principle these can be praised or blamed. To the degree that we are less willing to do so, however, this may be a question of emphasis. When we talk about a belief, we are more concerned about whether it is true or not, and evidence in favor of it or against it. Praise or blame will mainly come in insofar as other motives are involved, insofar as they strengthen or weaken a person’s wish to hold the belief, or insofar as they potentially distort the person’s evaluation of the evidence.

Nonetheless, the factual question “is this true?” is a different question from the moral question, “should I believe this?” We can see the struggle between these questions, for example, in a difficulty that people sometimes have with willpower. Suppose that a smoker decides to give up smoking, and suppose that they believe they will not smoke for the next six months. Three days later, let us suppose, they smoke a cigarette after all. At that point, the person’s resolution is likely to collapse entirely, so that they return to smoking regularly. One might ask why this happens. Since the person did not smoke for three days, it should be perfectly possible, at least, for them to smoke only once every three days, instead of going back to their former practice. The problem is that the person has received evidence directly indicating the falsity of “I will not smoke for the next six months.” They still might have some desire for that result, but they do not believe that their belief has the power to bring this about, and in fact it does not. The belief would not be self-fulfilling, and in fact it would be false, so they cease to hold it. It is as if someone attempts to open a door and finds it locked; once they know it is locked, they can no longer choose to open the door, because they cannot choose something that does not appear to be within their power.

Mark Forster, in Chapter 1 of his book Do It Tomorrow, previously discussed here, talks about similar issues:

However, life is never as simple as that. What we decide to do and what we actually do are two different things. If you think of the decisions you have made over the past year, how many of them have been satisfactorily carried to a conclusion or are progressing properly to that end? If you are like most people, you will have acted on some of your decisions, I’m sure. But I’m also sure that a large proportion will have fallen by the wayside.

So a simple decision such as to take time to eat properly is in fact very difficult to carry out. Our new rule may work for a few days or a few weeks, but it won’t be long before the pressures of work force us to make an exception to it. Before many days are up the exception will have become the rule and we are right back where we started. However much we rationalise the reasons why our decision didn’t get carried out, we know deep in the heart of us that it was not really the circumstances that were to blame. We secretly acknowledge that there is something missing from our ability to carry out a decision once we have made it.

In fact if we are honest it sometimes feels as if it is easier to get other people to do what we want them to do than it is to get ourselves to do what we want to do. We like to think of ourselves as a sort of separate entity sitting in our body controlling it, but when we look at the way we behave most of the time that is not really the case. The body controls itself most of the time. We have a delusion of control. That’s what it is – a delusion.

If we want to see how little control we have over ourselves, all most of us have to do is to look in the mirror. You might like to do that now. Ask yourself as you look at your image:

  • Is my health the way I want it to be?
  • Is my fitness the way I want it to be?
  • Is my weight the way I want it to be?
  • Is the way I am dressed the way I want it to be?

I am not asking you here to assess what sort of body you were born with, but what you have made of it and how good a state of repair you are keeping it in.

It may be that you are healthy, fit, slim and well-dressed. In which case have a look round at the state of your office or workplace:

  • Is it as well organised as you want it to be?
  • Is it as tidy as you want it to be?
  • Do all your office systems (filing, invoicing, correspondence, etc.) work the way you want them to work?

If so, then you probably don’t need to be reading this book.

I’ve just asked you to look at two aspects of your life that are under your direct control and are very little influenced by outside factors. If these things which are solely affected by you are not the way you want them to be, then in what sense can you be said to be in control at all?

A lot of this difficulty is due to the way our brains are organised. We have the illusion that we are a single person who acts in a ‘unified’ way. But it takes only a little reflection (and examination of our actions, as above) to realise that this is not the case at all. Our brains are made up of numerous different parts which deal with different things and often have different agendas.

Occasionally we attempt to deal with the difference between the facts and our plans by saying something like, “We will approximately do such and such. Of course we know that it isn’t going to be exactly like this, but at least this plan will be an approximate guide.” But this does not really avoid the difficulty. Even “this plan will be an approximate guide” is a statement about the facts that might turn out to be false; and even if it does not turn out to be false, the fact that we have set it down as approximate will likely make it guide our actions more weakly than if we had said, “this is what we will do.” In other words, we are likely to achieve our goal less perfectly, precisely because we tried to make our statement more accurate. This is the reverse of the situation discussed in a previous post, where one gives up some accuracy, albeit vaguely, for the sake of another goal, such as fitting in with associates or literary enjoyment.

All of this seems to indicate that the general proposal about decisions was at least roughly correct. It is not possible simply to say that decisions are one thing and beliefs entirely another. If they were two entirely separate things, there would be no conflict at all, at least of this kind, between accuracy and one’s other goals; but things do not turn out that way.

Self-Fulfilling Prophecy

We can formulate a number of objections to the thesis argued in the previous post.

First, if a belief that one is going to do something is the same as the decision to do it, another person’s belief that I am going to do something should mean that the other person is making a decision for me. But this is absurd.

Second, suppose that I know that I am going to be hit on the head and suffer from amnesia, thus forgetting all about these considerations. I may believe that I will eat breakfast tomorrow, but this is surely not a decision to do so.

Third, suppose someone wants to give up smoking. He may firmly hold the opinion that whatever he does, he will sometimes smoke within the next six months, not because he wants to do so, but because he does not believe it possible that he do otherwise. We would not want to say that he decided not to give up smoking.

Fourth, decisions are appropriate objects of praise and blame. We seem at least somewhat more reluctant to praise and blame beliefs, even if it is sometimes done.

Fifth, suppose someone believes, “I will kill Peter tomorrow at 4:30 PM.” We will wish to blame him for deciding to kill Peter. But if he does kill Peter tomorrow at 4:30, he held a true belief. Even if beliefs can be praised or blamed, it seems implausible that a true belief should be blamed.

The objections are helpful. With their aid we can see that there is indeed a flaw in the original proposal, but that it is nonetheless somewhat on the right track. A more accurate proposal would be this: a decision is a voluntary self-fulfilling prophecy as understood by the decision maker. I will explain as we consider the above arguments in more detail.

In the first argument, in the case of one person making a decision for another, the problem is that a mere belief that someone else is going to do something is not self-fulfilling. If I hold a belief that I myself will do something, the belief will tend to cause its own truth, just as suggested in the previous post. But believing that someone else will do something will not in general cause that person to do anything. Consider the following situation: a father says to his children as he departs for the day, “I am quite sure that the house will be clean when I get home.” If the children clean the house during his absence, suddenly it is much less obvious that we should deny that this was the father’s decision. In fact, the only reason this is not truly the father’s decision, without any qualification at all, is that it does not sufficiently possess the characteristics of a self-fulfilling prophecy. First, in the example it does not seem to matter whether the father believes what he says, but only whether he says it. Second, since it is in the power of the children to fail to clean the house in any case, there seems to be a lack of sufficient causal connection between the statement and the cleaning of the house. Suppose belief did matter, namely suppose that the children would know whether he believes what he says or not. And suppose additionally that his belief had an infallible power to make his children clean the house. In that case it would be quite reasonable to say, without any qualification, “He decided that his children would clean the house during his absence.” Likewise, even if the father falsely believes that he has such an infallible power, in a sense we could rightly describe him as trying to make that decision, just as we might say, “I decided to open the door,” even if my belief that the door could be opened turns out to be false when I try it; the door may be locked. This is why I included the clause “as understood by the decision maker” in the above proposal. This is typical of moral analysis: human action must be understood from the perspective of the one who acts.

In the amnesia case, there is a similar problem: due to the amnesia, the person’s current beliefs do not have a causal connection with his later actions. In addition, if we consider such things as “eating breakfast,” there might be a certain lack of causal connection in any case; the person would likely eat breakfast whether or not he formulates any opinion about what he will do. And to this degree we might feel it implausible to say that his belief that he will eat breakfast is a decision, even without the amnesia. It is not understood by the subject as a self-fulfilling prophecy.
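The proposal can be rendered schematically (a sketch of my own, not a formalism from the post); the amnesia belief fails the causal test:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    content: str
    voluntary: bool             # held because the agent wants to hold it
    causes_its_own_truth: bool  # holding it tends to bring it about
    understood_as_such: bool    # the agent takes it to be self-fulfilling

def is_decision(belief: Belief) -> bool:
    # All three tests of the proposal must pass.
    return (belief.voluntary
            and belief.causes_its_own_truth
            and belief.understood_as_such)

# The amnesia case: the belief fails the causal-connection test (and so
# also the "understood as such" test), so it is a mere prediction, not a
# decision.
amnesia = Belief("I will eat breakfast tomorrow", True, False, False)
print(is_decision(amnesia))  # False
```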

In the case of giving up smoking, there are several problems. In this case, the subject does not believe that there is any causal connection between his beliefs and his actions. Regardless of what he believes, he thinks, he is going to smoke in fact. Thus, in his opinion, if he believes that he will stop smoking completely, he will simply hold a false belief without getting any benefit from it; he will still smoke, and his belief will just be false. So since the belief is false, and without benefit, at least as he understands it, there is no reason for him to hold this belief. Consequently, he holds the opposite belief. But this is not a decision, since he does not understand it as causing his smoking, which is something that is expected to happen whether or not he believes it will.

In such cases in real life, we are in fact sometimes tempted to say that the person is choosing not to give up smoking. And we are tempted to say this to the extent that it seems to us that his belief should have the causal power that he denies it has: his denial seems to stem from the desire to smoke. If he wanted to give up smoking, we think, he could simply accept that he is able to hold this belief, and in such a way that it would come true. He does not, we think, because he wants to smoke, and so does not want to give up smoking. In reality this is a question of degree, and this analysis can have some truth. Consider the following from St. Augustine’s Confessions (Book VIII, Ch. 7-8):

Finally, in the very fever of my indecision, I made many motions with my body; like men do when they will to act but cannot, either because they do not have the limbs or because their limbs are bound or weakened by disease, or incapacitated in some other way. Thus if I tore my hair, struck my forehead, or, entwining my fingers, clasped my knee, these I did because I willed it. But I might have willed it and still not have done it, if the nerves had not obeyed my will. Many things then I did, in which the will and power to do were not the same. Yet I did not do that one thing which seemed to me infinitely more desirable, which before long I should have power to will because shortly when I willed, I would will with a single will. For in this, the power of willing is the power of doing; and as yet I could not do it. Thus my body more readily obeyed the slightest wish of the soul in moving its limbs at the order of my mind than my soul obeyed itself to accomplish in the will alone its great resolve.

How can there be such a strange anomaly? And why is it? Let thy mercy shine on me, that I may inquire and find an answer, amid the dark labyrinth of human punishment and in the darkest contritions of the sons of Adam. Whence such an anomaly? And why should it be? The mind commands the body, and the body obeys. The mind commands itself and is resisted. The mind commands the hand to be moved and there is such readiness that the command is scarcely distinguished from the obedience in act. Yet the mind is mind, and the hand is body. The mind commands the mind to will, and yet though it be itself it does not obey itself. Whence this strange anomaly and why should it be? I repeat: The will commands itself to will, and could not give the command unless it wills; yet what is commanded is not done. But actually the will does not will entirely; therefore it does not command entirely. For as far as it wills, it commands. And as far as it does not will, the thing commanded is not done. For the will commands that there be an act of will–not another, but itself. But it does not command entirely. Therefore, what is commanded does not happen; for if the will were whole and entire, it would not even command it to be, because it would already be. It is, therefore, no strange anomaly partly to will and partly to be unwilling. This is actually an infirmity of mind, which cannot wholly rise, while pressed down by habit, even though it is supported by the truth. And so there are two wills, because one of them is not whole, and what is present in this one is lacking in the other.

St. Augustine analyzes this in the sense that he did not “will entirely” or “command entirely.” If we analyze it in our terms, he does not expect in fact to carry out his intention, because he does not want to, and he knows that people do not do things they do not want to do. In a similar way, in some cases the smoker does not fully want to give up smoking, and therefore believes himself incapable of simply deciding to give up smoking, because if he made that decision, it would happen, and he would not want it to happen.

In the previous post, I mentioned an “obvious objection” at several points. This was that the account as presented there leaves out the role of desire. Suppose someone believes that he will in fact go to Vienna, but does not wish to go there. Then when the time comes to buy a ticket, it is very plausible that he will not buy one. Yes, this will mean that he will stop believing that he will go to Vienna. But this is different from the case where a person has “decided” to go and then changes his mind. The person who does not want to go is not changing his mind at all, except about the factual question. It seems absurd (and it is) to characterize a decision without any reference to what the person wants.

This is why we have characterized a decision here as “voluntary,” “self-fulfilling,” and “as understood by the decision maker.” It is indeed the case that the person holds a belief, but he holds it because he wants to, and because he expects it to cause its own fulfillment, and he desires that fulfillment.

Consider the analysis in the previous post of the road to point C. Why is it reasonable for anyone, whether the subject or a third party, to conclude that the person will take road A? This is because we know that the subject wishes to get to point C. It is his desire to get to point C that will cause him to take road A, once he understands that A is the only way to get there.

Someone might respond that in this case we could characterize the decision as just a desire: the desire to get to point C. The problem is that the example is overly simplified compared to real life. Ordinarily there is not simply a single way to reach our goals. And the desire to reach the goal may not determine which particular way we take, so something else must determine it. This is precisely why we need to make decisions at all. We could in fact avoid almost anything that feels like a decision, waiting until something else determined the matter, but if we did, we would live very badly indeed.

When we make a complicated plan, there are two interrelated factors explaining why we believe it to be factually true that we will carry out the plan. We know that we desire the goal, and we expect this desire for the goal to move us along the path towards the goal. But since we also have other desires, and there are various paths towards the goal, some better than others, there are many ways that we could go astray before reaching the goal, either by taking a path to some other goal, or by taking a path less suited to the goal. So we also expect the details of our plan to keep us on the particular course that we have planned, which we suppose to be the best, or at least the best path considering our situation as a whole. If we did not keep those details in mind, we would not likely remain on this precise path. As an example, I might plan to stop at a grocery store on my way home from work, out of the desire to possess a sufficient stock of groceries, but if I do not keep the plan in mind, my desire to get home may cause me to go past the store without stopping. Again, this is why we have explained decision as a self-fulfilling prophecy, and one explicitly understood by the subject as such: by saying “I will use A, B, and C to get to goal Z,” we expect that, keeping these details in mind together with our desire for Z, we will be moved along this precise path; and we wish to follow this path, for the sake of Z.
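As a toy rendering of the grocery-store example (mine, with invented salience numbers): the agent acts on whatever consideration is most salient at the junction, so dropping the plan detail from mind changes the action.

```python
def act_at_store_junction(plan_in_mind):
    # Standing desire: get home. The plan detail, if kept in mind, is
    # momentarily stronger. The salience numbers are invented.
    desires = {"drive home": 0.8}
    if plan_in_mind:
        desires["stop at store"] = 0.9
    return max(desires, key=desires.get)

print(act_at_store_junction(plan_in_mind=True))   # stop at store
print(act_at_store_junction(plan_in_mind=False))  # drive home: derailed
```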

There is a lot more that could be said about this. For example, it is not difficult to see here an explanation for the fact that such complicated plans rarely work out precisely in practice, even in the absence of external impediments. We expect our desire for the goal to keep us on track, but in fact we have other desires, and there are an indefinite number of possibilities for those other desires to make something else happen. Likewise, even if the plan was the best we could work out in advance, there will be numberless details in which there were better options that we did not notice while planning, and we will notice some of these as we proceed along the path. So both the desire for the goal, and the desire for other things, will likely derail the plan. And, of course, most plans will be derailed by external things as well.

A combination of the above factors has the result that I will leave the consideration of the fourth and fifth arguments to another post, even though this was not my original intention, and was not my belief about what would happen.

Decisions as Predictions

Among acts of will, St. Thomas distinguishes intention and choice:

The movement of the will to the end and to the means can be considered in two ways. First, according as the will is moved to each of the aforesaid absolutely and in itself. And thus there are really two movements of the will to them. Secondly, it may be considered accordingly as the will is moved to the means for the sake of the end: and thus the movement of the will to the end and its movement to the means are one and the same thing. For when I say: “I wish to take medicine for the sake of health,” I signify no more than one movement of my will. And this is because the end is the reason for willing the means. Now the object, and that by reason of which it is an object, come under the same act; thus it is the same act of sight that perceives color and light, as stated above. And the same applies to the intellect; for if it consider principle and conclusion absolutely, it considers each by a distinct act; but when it assents to the conclusion on account of the principles, there is but one act of the intellect.

Choice is about the means, such as taking medicine in his example, while intention is about the end, such as health in his example. This makes sense in terms of how we commonly use the terms. When we do speak of choosing an end, we are normally considering which of several alternative intermediate ends are better means towards an ultimate end. And thus we are “choosing,” not insofar as the thing is an end, but insofar as it is a means towards a greater end that we intend.

Discussing the human mind, we noted earlier that a thing often seems fairly simple when it is considered in general, but turns out to have a highly complex structure when considered in detail. The same thing will turn out to be the case if we attempt to consider the nature of these acts of will in detail.

Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end. I will admit immediately that this hypothesis will not turn out to be entirely right, but as we shall see, the consideration will turn out to be useful.

First we will bring forward a number of considerations in favor of the hypothesis, and then, in another post, some criticisms of it.

First, in favor of the hypothesis, we should consider the fact that believing that one will take a certain course of action is virtually inseparable from deciding to take that course of action, and the two are not very clearly distinguishable at all. Suppose someone says, “I intend to take my vacation in Paris, but I believe that I will take it in Vienna instead.” On the face of it, this is nonsense. We might make sense of it by saying that the person really meant to say that he first decided to go to Paris, but then obstacles came up, and he now realizes that going will not be possible. But in that case, he also changes his decision: he now intends to go to Vienna. It is completely impossible that he currently intends to go to Paris, but fully believes that he will not go, and that he will go to Vienna instead.

Likewise, suppose someone says, “I haven’t yet decided where to take my vacation. But I am quite convinced that I am going to take it in Vienna.” Again, this is almost nonsensical: if he is convinced that he will go to Vienna, we would normally say that he has already made up his mind: it is not true that he has not decided yet. As in the previous case, we might be able to come up with circumstances where someone might say this or something like it. For example, if someone else is attempting to convince him to come to Paris, he might say that he has not yet decided, meaning that he is willing to think about it for a bit, but that he fully expects to end up going to Vienna. But in this case, it is more natural to say that his decision and his certainty that he will go to Vienna are proportional: the only sense in which he has not decided yet is the degree to which he thinks there is some chance that he will change his mind and go to Paris. Thus if there is no chance at all of that, then he is completely decided, while if he is somewhat unsure, his decision is not yet perfect but partial.

Both of the above cases would fit with the claim that a decision is simply a belief about what one is going to do, although they would not necessarily exclude the possibility that it is a separate thing, even if inseparably connected to the belief.

We can also consider beliefs and decisions as something known from their effects. I noted elsewhere that we recognize the nature of desire from its effect, namely from the fact that when we have a desire, we tend to bring about the thing we desire. Insofar as a decision is a rational desire, the same thing applies to decisions as to other kinds of desires. We would not know decisions as decisions, if we never did the things we have decided to do. Likewise, belief is a fairly abstract object, and it is at least plausible that we would come to know it from its more concrete effects.

Now consider the effects of the decision to go to Vienna, compared to the effects of the belief that you will go to Vienna. Both of them will result in you saying, “I am going to go to Vienna.” And if we look at belief as I suggested in the discussion to this post, namely more or less as treating something as a fact, then belief will have other consequences, such as buying a ticket for Vienna. For if you are treating it as a fact that you are going to go there, either you will buy a ticket, or you will give up the belief. In a similar way, if you have decided to go, either you will buy a ticket, or you will change your decision. So the effects of the belief and the effects of the decision seem to be entirely the same. If we know the thing from its effects, then, it seems we should consider the belief and the decision to be entirely the same.

There is an obvious objection here, but as I said the consideration of objections will come later.

Again, consider a situation where there are two roads, road A and road B, to your destination C. There is a fallen bridge along road B, so road B would not be a good route, while road A is a good route. It is reasonable for a third party who knows that you want to get to C and that you have considered the state of the roads, to conclude that you will take road A. But if this is reasonable for someone else, then it is reasonable for you: you know that you want to get to C, and you know that you have considered the state of the roads. So it is reasonable for you to conclude that you will take road A. Note that this is purely about belief: there was no need for an extra “decision” factor. The conclusion that you will factually take road A is a logical conclusion from the known situation. But now that you are convinced that you will take road A, there is no need for you to consider whether to take road A or road B; there is nothing to decide anymore. Everything is already decided as soon as you come to that conclusion, which is a matter of forming a belief. Once again, it seems as though your belief that you will take road A just is your decision, and there is nothing more to it.

Once again, there is an obvious objection, but it will have to wait until the next post.

Statistical Laws of Choice

I noted in an earlier post the necessity of statistical laws of nature. This will necessarily apply to human actions as a particular case, as I implied there in mentioning the amount of food humans eat in a year.

Someone might object. It was said in the earlier post that this will happen unless there is a deliberate attempt to evade this result. But since we are speaking of human beings, there might well be such an attempt. So for example if we ask someone to choose to raise their right hand or their left hand, this might converge to an average, such as 50% each, or perhaps the right hand 60% of the time, or something of this kind. But presumably someone who starts out with the deliberate intention of avoiding such an average will be able to do so.

Unfortunately, such an attempt may succeed in the short run, but will necessarily fail in the long run, because although it is possible in principle, it would require an infinite knowing power, which humans do not have. As I pointed out in the earlier discussion, attempting to prevent convergence requires longer and longer strings on one side or the other. But if you need to raise your right hand a few trillion times before switching again to your left, you will surely lose track of your situation. Nor can you remedy this by writing things down, or by other technical aids: you may succeed in doing things trillions of times with this method, but if you do it forever, the numbers will also become too large to write down. Naturally, at this point we are only making a theoretical point, but it is nonetheless an important one, as we shall see later.
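The point can be made concrete with a toy calculation. Suppose you try to keep the running frequency of right hands swinging between 60% and 40% forever, rather than letting it converge. Each swing then requires a run of a single option roughly half as long as everything you have done so far, so the required runs grow geometrically. Here is a minimal Python sketch; the 40%–60% band and the starting counts are my own illustrative assumptions, not anything from the earlier post:

```python
import math

# Toy illustration: how long a run of one hand is needed to keep the
# running frequency of "right hand" oscillating between 40% and 60%
# instead of converging. The band and starting counts are arbitrary.
lo, hi = 0.4, 0.6
total, rights = 10, 6  # assume we begin at 60% right hands

for swing in range(1, 71):
    if rights / total >= hi:
        # raise the left hand until the frequency falls to 40%
        run = math.ceil(rights / lo) - total
    else:
        # raise the right hand until the frequency climbs back to 60%
        run = math.ceil((hi * total - rights) / (1 - hi))
        rights += run
    total += run
    if swing % 10 == 0:
        print(f"swing {swing}: run of {run:,}, {total:,} raises so far")
```

Each swing multiplies the totals by about 1.5, so by around the seventieth swing the runs are already in the trillions: exactly the regime where, as said above, no memory and eventually no technical aid can keep track of the situation.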

In any case, in practice people do not tend even to make such attempts, and consequently it is far easier to predict their actions in a roughly statistical manner. Thus for example it would not be hard to discover the frequency with which an individual chooses chocolate ice cream over vanilla.

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs. libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to the normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.
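As far as its formal structure goes, the argument is valid: step (2) is simply the contrapositive of step (1), and step (4) chains (2) with (3). For those who like to see such things checked, here is a minimal Lean sketch of the propositional skeleton, where F, C, and G are my own labels for “I have libertarian free will,” “I can make an undetermined difference in my beliefs,” and “it is good to believe that I have libertarian free will”:

```lean
-- Propositional skeleton of steps (1)-(4); the letters are placeholders.
-- h1 is premise (1), h3 is premise (3); the conclusion is step (4).
open Classical in
theorem argument_skeleton (F C G : Prop)
    (h1 : ¬F → ¬C) (h3 : F → G) : C → G :=
  fun hc =>
    -- step (2): F follows, given C, by contraposition of (1)
    have h2 : F := byContradiction fun hnf => h1 hnf hc
    h3 h2
```

So the question is not whether the conclusion follows, but whether the premises, and in particular the evaluative premise (3), settle what it is good to believe overall.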

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for two reasons: first, to avoid confusion about the meaning of “ought”; second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not, has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
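To make the weighing explicit, the thought experiment can be set out as a small payoff table. A minimal Python sketch follows; the numerical utilities are invented purely for illustration and carry no authority beyond the structure of the story:

```python
# Toy payoff table for the thought experiment. The utilities are made up;
# only their relative sizes matter: the decreed stakes dwarf the value
# of holding a true belief.
TRUTH_VALUE = 1     # benefit of a true belief (or cost of a false one)
AFTERLIFE = 1000    # the much larger stake decreed in the story

def payoff(believe_free: bool, actually_free: bool) -> int:
    truth_part = TRUTH_VALUE if believe_free == actually_free else -TRUTH_VALUE
    # In the story, heaven or hell depends only on what you believe:
    afterlife_part = -AFTERLIFE if believe_free else AFTERLIFE
    return truth_part + afterlife_part

for actually_free in (True, False):
    print(f"if free will is {actually_free}: "
          f"believe free = {payoff(True, actually_free):+d}, "
          f"believe not free = {payoff(False, actually_free):+d}")
```

Whichever way the truth turns out, believing that one lacks libertarian free will dominates as soon as the decreed consequences outweigh the benefit of a true belief, which is just the weighing of consequences that the original argument leaves out.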

Little Things

Chapter 39 of Josemaria Escriva’s book The Way concerns the topic of “little things.” The whole chapter, and really the whole book, is worth reading. The text is composed in the form of a set of aphorisms, much like Francis Bacon’s work. I will quote two passages in particular from the chapter in question:

823. Have you seen how that imposing building was built? One brick upon another. Thousands. But, one by one. And bags of cement, one by one. And blocks of stone, each of them insignificant compared with the massive whole. And beams of steel. And men working, the same hours, day after day…

Have you seen how that imposing building was built?… By dint of little things!

826. Everything in which we poor men have a part — even holiness — is a fabric of small trifles which, depending upon one’s intention, can form a magnificent tapestry of heroism or of degradation, of virtues or of sins.

The epic legends always relate extraordinary adventures, but never fail to mix them with homely details about the hero. — May you always attach great importance to the little things. This is the way!

The second passage asserts that anything great in human life is essentially composed of “small trifles.” The first passage explains why this is so. The world is an ordered place, and one of the orders found in it is the order of material causality. Since the whole is greater than the part, it follows that great wholes are ultimately composed of little parts, or in other words, “small trifles.”

We often tend not to notice this in relation to human life, because we think of life as a kind of story, and it is normal for stories to leave out all sorts of detail, in order to concentrate on the overall picture. But all of that detail is always present: every day is made up of 24 hours, and everything we do ultimately is made up of individual immediate actions.

Thus Escriva says that we should “always attach great importance to the little things,” because there is no other way to accomplish anything. For example, someone might be assigned a paper in school, and find himself unable to write the paper, because he is constantly thinking of the need to “write a paper.” But “writing a paper” is not an action that can be chosen; it is just not a thing that can be done immediately. And unless it is first broken down into “little things,” it will never be done at all. This is one of the main causes of procrastination in people’s lives: failing to see that the larger goals they wish to accomplish must be achieved by means of little things, through individual actions. Thus someone might say, “I don’t know why, but I never feel like writing the paper.” But in fact he does not feel like writing it, because he has not yet presented himself with any option that can ever be chosen.