Predictive Processing

By a curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by the discussion of the topic that does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.
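Clark’s point about “creature-defining expectations” can be made concrete with a toy sketch (my own illustration, not Clark’s or Friston’s actual model; all the names and numbers are invented): if prediction error is scored against innate priors that “expect” nourishment and novelty, the darkened room is the most surprising policy available, not the least.

```python
# A minimal sketch of the "darkened room" point, under invented
# assumptions: policies are scored by squared prediction error
# against innate, creature-defining priors.

prior = {"nutrition": 1.0, "novelty": 0.5}  # built-in 'expectations'

policies = {
    "sit_in_darkened_room": {"nutrition": 0.0, "novelty": 0.0},
    "forage_and_explore":   {"nutrition": 0.9, "novelty": 0.6},
}

def surprise(sensed):
    """Squared prediction error relative to the innate prior."""
    return sum((prior[k] - sensed[k]) ** 2 for k in prior)

for name, outcome in policies.items():
    print(name, round(surprise(outcome), 2))
# sit_in_darkened_room 1.25
# forage_and_explore 0.02
```

Against a blank prior, the dark room would indeed minimize error; against the priors an evolved body actually has, it maximizes it.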

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in them. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use it to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
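The probabilistic point here is just the conjunction rule, which holds for any claim $A$ and any explanation $E$ of it:

$$P(A \wedge E) = P(A)\,P(E \mid A) \le P(A),$$

with strict inequality whenever the explanation is less than certain given the claim. So “my life has a positive meaning because a transcendent good is redeeming evil” can never be more probable than “my life has a positive meaning” taken by itself.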

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.
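The selection effect Scott describes is easy to simulate. In the following toy model (my own, with made-up numbers), fabricated stories are only 5% of the pool, but because liars can tailor their stories for appeal, they dominate the most interesting tail:

```python
import random

random.seed(0)

# True stories have bounded appeal; fabricated ones are tailored to
# the narrative, so their "interestingness" is shifted upward.
stories = [("true", random.gauss(0.0, 1.0)) if random.random() < 0.95
           else ("false", random.gauss(2.0, 1.0))
           for _ in range(10_000)]

# Look at what "bubbles to the top" when ranked by interestingness.
top = sorted(stories, key=lambda s: s[1], reverse=True)[:100]
share_false = sum(1 for label, _ in top if label == "false") / len(top)
print(f"false stories in the top 100: {share_false:.0%}")  # roughly 80-90%
```

A small minority of liars ends up supplying the large majority of the most interesting stories, exactly as in the Reddit example.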

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion and believe that you never trade away truth for other things (a belief which is itself both false and motivated), you are like someone who never looks at his account: you will not notice how much you are losing.

The Practical Argument for Free Will

Richard Chappell discusses a practical argument for free will:

1) If I don’t have free will, then I can’t choose what to believe.
2) If I can choose what to believe, then I have free will [from 1]
3) If I have free will, then I ought to believe it.
4) If I can choose what to believe, then I ought to believe that I have free will. [from 2,3]
5) I ought, if I can, to choose to believe that I have free will. [restatement of 4]

He remarks in the comments:

I’m taking it as analytic (true by definition) that choice requires free will. If we’re not free, then we can’t choose, can we? We might “reach a conclusion”, much like a computer program does, but we couldn’t choose it.

I understand the word “choice” a bit differently, in that I would say that we are obviously choosing in the ordinary sense of the term, if we consider two options which are possible to us as far as we know, and then make up our minds to do one of them, even if it turned out in some metaphysical sense that we were already guaranteed in advance to do that one. Or in other words, Chappell is discussing determinism vs libertarian free will, apparently ruling out compatibilist free will on linguistic grounds. I don’t merely disagree in the sense that I use language differently, but in the sense that I don’t agree that his usage corresponds to the normal English usage. [N.B. I misunderstood Richard here. He explains in the comments.] Since people can easily be led astray by such linguistic confusions, given the relationships between thought and language, I prefer to reformulate the argument:

  1. If I don’t have libertarian free will, then I can’t make an ultimate difference in what I believe that was not determined by some initial conditions.
  2. If I can make an ultimate difference in what I believe that was not determined by some initial conditions, then I have libertarian free will [from 1].
  3. If I have libertarian free will, then it is good to believe that I have it.
  4. If I can make an ultimate difference in my beliefs undetermined by initial conditions, then it is good to believe that I have libertarian free will. [from 2, 3]
  5. It is good, if I can, to make a difference in my beliefs undetermined by initial conditions, such that I believe that I have libertarian free will.

We would have to add that the means that can make such a difference, if any means can, would be choosing to believe that I have libertarian free will.
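For what it is worth, the purely formal part of the argument is valid, and can be checked mechanically. Here is a minimal sketch in Lean, with proposition letters of my own choosing (F: I have libertarian free will; C: I can make such an undetermined difference in my beliefs; O: it is good to believe that I have it):

```lean
variable (F C O : Prop)

-- (2) from (1) by contraposition: if ¬F → ¬C, then C → F.
example (h1 : ¬F → ¬C) : C → F :=
  fun hc => Classical.byContradiction (fun hf => h1 hf hc)

-- (4) from (2) and (3): chaining C → F with F → O gives C → O.
example (h2 : C → F) (h3 : F → O) : C → O :=
  fun hc => h3 (h2 hc)
```

The weight of the argument therefore rests not on its logic but on premise (3), which is exactly where the resolution below will be found.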

I have reformulated (3) to speak of what is good, rather than of what one ought to believe, for two reasons. First, in order to avoid confusion about the meaning of “ought”. Second, because the resolution of the argument lies here.

The argument is in fact a good argument as far as it goes. It does give a practical reason to hold the voluntary belief that one has libertarian free will. The problem is that it does not establish that it is better overall to hold this belief, because various factors can contribute to whether an action or belief is a good thing.

We can see this with the following thought experiment:

Either people have libertarian free will or they do not. This is unknown. But God has decreed that people who believe that they have libertarian free will go to hell for eternity, while people who believe that they do not, will go to heaven for eternity.

This is basically like the story of the Alien Implant. Having libertarian free will is like the situation where the black box is predicting your choice, and not having it is like the case where the box is causing your choice. The better thing here is to believe that you do not have libertarian free will, and this is true despite whatever theoretical sense you might have that you are “not responsible” for this belief if it is true, just as it is better not to smoke even if you think that your choice is being caused.

But note that if a person believes that he has libertarian free will, and it turns out to be true, he has some benefit from this, namely the truth. But the evil of going to hell presumably outweighs this benefit. And this reveals the fundamental problem with the argument, namely that we need to weigh the consequences overall. We made the consequences heaven and hell for dramatic effect, but even in the original situation, believing that you have libertarian free will when you do not has an evil effect, namely believing something false, and potentially many evil effects, namely whatever else follows from this falsehood. This means that in order to determine what is better to believe here, it is necessary to consider the consequences of being mistaken, just as it is in general when one formulates beliefs.
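A toy expected-value calculation makes the point of the thought experiment explicit (the utilities here are invented purely for illustration):

```python
# Invented utilities: heaven and hell dwarf the small value of having
# a true belief. "p" is your credence that libertarian free will exists.
HEAVEN, HELL, TRUTH = 100.0, -100.0, 1.0
p = 0.5

ev_believe_fw = HELL + p * TRUTH             # hell either way; truth only if free will is real
ev_believe_no_fw = HEAVEN + (1 - p) * TRUTH  # heaven either way; truth only if it is not

print(ev_believe_fw, ev_believe_no_fw)  # -99.5 vs 100.5
```

However the credence p is set, believing in libertarian free will loses here, which is just the point that the practical argument cannot settle the question without weighing all the consequences.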

Wishful Thinking about Wishful Thinking

Cameron Harwick discusses an apparent relationship between “New Atheism” and group selection:

Richard Dawkins’ best-known scientific achievement is popularizing the theory of gene-level selection in his book The Selfish Gene. Gene-level selection stands apart from both traditional individual-level selection and group-level selection as an explanation for human cooperation. Steven Pinker, similarly, wrote a long article on the “false allure” of group selection and is an outspoken critic of the idea.

Dawkins and Pinker are also both New Atheists, whose characteristic feature is not only a disbelief in religious claims, but an intense hostility to religion in general. Dawkins is even better known for his popular books with titles like The God Delusion, and Pinker is a board member of the Freedom From Religion Foundation.

By contrast, David Sloan Wilson, a proponent of group selection but also an atheist, is much more conciliatory to the idea of religion: even if its factual claims are false, the institution is probably adaptive and beneficial.

Unrelated as these two questions might seem – the arcane scientific dispute on the validity of group selection, and one’s feelings toward religion – the two actually bear very strongly on one another in practice.

After some discussion of the scientific issue, Harwick explains the relationship he sees between these two questions:

Why would Pinker argue that human self-sacrifice isn’t genuine, contrary to introspection, everyday experience, and the consensus in cognitive science?

To admit group selection, for Pinker, is to admit the genuineness of human altruism. Barring some very strange argument, to admit the genuineness of human altruism is to admit the adaptiveness of genuine altruism and broad self-sacrifice. And to admit the adaptiveness of broad self-sacrifice is to admit the adaptiveness of those human institutions that coordinate and reinforce it – namely, religion!

By denying the conceptual validity of anything but gene-level selection, therefore, Pinker and Dawkins are able to brush aside the evidence on religion’s enabling role in the emergence of large-scale human cooperation, and conceive of it as merely the manipulation of the masses by a disingenuous and power-hungry elite – or, worse, a memetic virus that spreads itself to the detriment of its practicing hosts.

In this sense, the New Atheist’s fundamental axiom is irrepressibly religious: what is true must be useful, and what is false cannot be useful. But why should anyone familiar with evolutionary theory think this is the case?

As another example of the tendency Cameron Harwick is discussing, we can consider this post by Eliezer Yudkowsky:

Perhaps the real reason that evolutionary “just-so stories” got a bad name is that so many attempted stories are prima facie absurdities to serious students of the field.

As an example, consider a hypothesis I’ve heard a few times (though I didn’t manage to dig up an example).  The one says:  Where does religion come from?  It appears to be a human universal, and to have its own emotion backing it – the emotion of religious faith.  Religion often involves costly sacrifices, even in hunter-gatherer tribes – why does it persist?  What selection pressure could there possibly be for religion?

So, the one concludes, religion must have evolved because it bound tribes closer together, and enabled them to defeat other tribes that didn’t have religion.

This, of course, is a group selection argument – an individual sacrifice for a group benefit – and see the referenced posts if you’re not familiar with the math, simulations, and observations which show that group selection arguments are extremely difficult to make work.  For example, a 3% individual fitness sacrifice which doubles the fitness of the tribe will fail to rise to universality, even under unrealistically liberal assumptions, if the tribe size is as large as fifty.  Tribes would need to have no more than 5 members if the individual fitness cost were 10%.  You can see at a glance from the sex ratio in human births that, in humans, individual selection pressures overwhelmingly dominate group selection pressures.  This is an example of what I mean by prima facie absurdity.
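Yudkowsky’s numbers can be loosely illustrated with a toy trait-group model (my own sketch, not his math; the parameters are invented). Altruists pay an individual cost while everyone in their group shares a benefit proportional to the local fraction of altruists; with groups formed at random, altruism spreads in this model only when the benefit divided by group size exceeds the cost, so a 3% cost with a group-doubling benefit fails at group size 50 but can succeed at size 5:

```python
import random

def altruist_share(n_groups=100, group_size=50, cost=0.03,
                   benefit=1.0, p0=0.5, generations=300):
    """Toy trait-group model: individuals are shuffled into random
    groups each generation; all group members share a benefit scaled
    by the local fraction of altruists, and altruists also pay `cost`."""
    pop = [random.random() < p0 for _ in range(n_groups * group_size)]
    for _ in range(generations):
        random.shuffle(pop)
        fitness = []
        for g in range(n_groups):
            group = pop[g * group_size:(g + 1) * group_size]
            frac = sum(group) / group_size
            fitness += [1.0 + benefit * frac - (cost if a else 0.0)
                        for a in group]
        # Wright-Fisher resampling of the whole population
        pop = random.choices(pop, weights=fitness, k=len(pop))
    return sum(pop) / len(pop)

print(altruist_share(group_size=50))  # altruism collapses toward 0
print(altruist_share(group_size=5))   # altruism persists or spreads
```

None of this settles whether Yudkowsky’s dismissal is fair; it only shows the kind of arithmetic behind the claim that group selection needs implausibly small groups.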

It does not take much imagination to see that religion could have “evolved because it bound tribes closer together” without group selection in a technical sense having anything to do with this process. But I will not belabor this point, since Eliezer’s own answer regarding the origin of religion does not exactly keep his own feelings hidden:

So why religion, then?

Well, it might just be a side effect of our ability to do things like model other minds, which enables us to conceive of disembodied minds.  Faith, as an emotion, might just be co-opted hope.

But if faith is a true religious adaptation, I don’t see why it’s even puzzling what the selection pressure could have been.

Heretics were routinely burned alive just a few centuries ago.  Or stoned to death, or executed by whatever method local fashion demands.  Questioning the local gods is the notional crime for which Socrates was made to drink hemlock.

Conversely, Huckabee just won Iowa’s nomination for tribal-chieftain.

Why would you need to go anywhere near the accursèd territory of group selectionism in order to provide an evolutionary explanation for religious faith?  Aren’t the individual selection pressures obvious?

I don’t know whether to suppose that (1) people are mapping the question onto the “clash of civilizations” issue in current affairs, (2) people want to make religion out to have some kind of nicey-nice group benefit (though exterminating other tribes isn’t very nice), or (3) when people get evolutionary hypotheses wrong, they just naturally tend to get it wrong by postulating group selection.

Let me give my own extremely credible just-so story: Eliezer Yudkowsky wrote this not fundamentally to make a point about group selection, but because he hates religion, and cannot stand the idea that it might have some benefits. It is easy to see this from his use of language like “nicey-nice,” and his suggestion that the main selection pressure in favor of religion would be likely to be something like being burned at the stake, or that it might just have been a “side effect,” that is, that there was no advantage to it.

But as St. Paul says, “Therefore you have no excuse, whoever you are, when you judge others; for in passing judgment on another you condemn yourself, because you, the judge, are doing the very same things.” Yudkowsky believes that religion is just wishful thinking. But his belief that religion therefore cannot be useful is itself nothing but wishful thinking. In reality religion can be useful just as voluntary beliefs in general can be useful.

Vaguely Trading Away Truth

Robin Hanson asks his readers about religion:

Consider two facts:

  1. People with religious beliefs, and associated behavior, consistently tend to have better lives. It seems that religious folks tend to be happier, live longer, smoke less, exercise more, earn more, get and stay married more, commit less crime, use less illegal drugs, have more social connections, donate and volunteer more, and have more kids. Yes, the correlation between religion and these good things is in part because good people tend to become more religious, but it is probably also in part because religious people tend to become better. So if you want to become good in these ways, an obvious strategy is to become more religious, which is helped by having more religious beliefs.
  2. Your far beliefs, such as on religion and politics, can’t affect your life much except via how they affect your behavior, and your associates’ opinions of you. When you think about cosmology, ancient Rome, the nature of world government, or starving folks in Africa, it might feel like those things matter to you. But in terms of the kinds of things that evolution could plausibly have built you to actually care about (vs. pretend to care about), those far things just can’t directly matter much to your life. While your beliefs about far things might influence how you act, and what other people think of you, their effects on your quality of life, via such channels of influence, don’t depend much on whether these beliefs are true.

Perhaps, like me, you find religious beliefs about Gods, spirits, etc. to be insufficiently supported by evidence, coherence, or simplicity to be a likely approximation to the truth. Even so, ask yourself: why care so much about truth? Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?

Yes, there are near practical areas of your life where truth can matter a lot. But most religious people manage to partition their beliefs, so their religious beliefs don’t much pollute their practical beliefs. And this doesn’t even seem to require much effort on their part. Why not expect that you could do similarly?

Yes, it might seem hard to get yourself to believe things that seem implausible to you at the moment, but we humans have lots of well-used ways to get ourselves to believe things we want to believe. Are you willing to start trying those techniques on this topic?

Now, a few unusual people might have an unusually large influence on far topics, and to those people truth about far topics might plausibly matter more to their personal lives, and to things that evolution might plausibly have wanted them to directly care about. For example, if you were king of the world, maybe you’d reasonably care more about what happens to the world as a whole.

But really, what are the chances that you are actually such a person? And if not, why not try to be more religious?

Look, Robin is saying, maybe you think that religions aren’t true. But the fact is that it isn’t very plausible that you care that much about truth anyway. So why not be religious anyway, regardless of the truth, since there are known benefits to this?

A few days after the above post, Robin points out some evidence that stories tend to distort a person’s beliefs about the world, and then says:

A few days ago I asked why not become religious, if it will give you a better life, even if the evidence for religious beliefs is weak? Commenters eagerly declared their love of truth. Today I’ll ask: if you give up the benefits of religion, because you love far truth, why not also give up stories, to gain even more far truth? Alas, I expect that few who claim to give up religion because they love truth will also give up stories for the same reason. Why?

One obvious explanation: many of you live in subcultures where being religious is low status, but loving stories is high status. Maybe you care a lot less about far truth than you do about status.

We have discussed in an earlier post some of the reasons why stories can distort a person’s opinions about the world.

It is very plausible to me that Robin’s proposed explanation, namely status seeking, does indeed exercise a great deal of influence among his target audience. But this would not tend to be a very conscious process, and would likely be expressed consciously in other ways. A more likely conscious explanation would be this representative comment from one of Robin’s readers:

There is a clear difference in choosing to be religious and choosing to partake in a story. By being religious, you profess belief in some set of ideas on the nature of the world. If you read a fictional story, there is no belief. Religions are supposed to be taken as fact. It is non-fiction, whether it’s true or not. Fictional stories are known to not be true. You don’t sacrifice any of a love for truth as you’ve put it by digesting the contents of a fictional story, because none of the events of the story are taken as fact, whereas religious texts are to be taken as fact. Aristotle once said, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” When reading fictional stories, you know that the events aren’t real, but entertain the circumstances created in the story to be able to increase our understanding of ourselves, others, and the world. This is the point of the stories, and they thereby aid in the search for truth, as we have to ask ourselves questions about how we would relate in similar situations. The author’s own ideas shown in the story may not be what you personally believe in, but the educated mind can entertain the ideas and not believe in them, increasing our knowledge of the truth by opening ourselves up to others’ viewpoints. Religions are made to be believed without any real semblance of proof, there is no entertaining the idea, only acceptance of it. This is where truth falls out the window, as where there is no proof, the truth cannot be ascertained.

The basic argument would be that if a non-religious person simply decides to be religious, he is choosing to believe something he thinks to be false, which is against the love of truth. But if the person reads a story, he is not choosing to believe anything he thinks to be false, so he is not going against the love of truth.

For Robin, the two situations are roughly equivalent, because there are known reasons why reading fiction will distort one’s beliefs about the world, even if we do not know in advance the particular false beliefs we will end up adopting, or the particular false beliefs that we will end up thinking more likely, or the true beliefs that we might lose or consider less likely.

But there is in fact a difference. This is more or less the difference between accepting the real world and accepting the world of Omelas. In both cases evils are accepted, but in one case they are accepted vaguely, and in the other clearly and directly. In a similar way, it would be difficult for a person to say, “I am going to start believing this thing which I currently think to be false, in order to get some benefit from it,” and much easier to say, “I will do this thing which will likely distort my beliefs in some vague way, in order to get some benefit from it.”

When accepting evil for the sake of good, we are more inclined to do it in this vague way in general. But this is even more the case when we trade away truth in particular for the sake of other things. In part this is precisely because of the more apparent absurdity of saying, “I will accept the false as true for the sake of some benefit,” although Socrates would likely respond that it would be equally absurd to say, “I will do the evil as though it were good for the sake of some benefit.”

Another reason why this is more likely, however, is that it is easier for a person to tell himself that he is not giving up any truth at all; thus the author of the comment quoted above asserted that reading fiction does not lead to any false beliefs whatsoever. This is related to what I said in the post here: trading the truth for something else, even vaguely, implies less love of truth than refusing the trade, and consequently the person may not care enough to accurately discern whether or not they are losing any truth.

Those Who Walk Away from Omelas

In The Brothers Karamazov, after numerous examples of the torture of children and other horrors, Ivan Karamazov rejects theodicy with this argument:

“Besides, too high a price is asked for harmony; it’s beyond our means to pay so much to enter on it. And so I hasten to give back my entrance ticket, and if I am an honest man I am bound to give it back as soon as possible. And that I am doing. It’s not God that I don’t accept, Alyosha, only I most respectfully return him the ticket.”

“That’s rebellion,” murmured Alyosha, looking down.

“Rebellion? I am sorry you call it that,” said Ivan earnestly. “One can hardly live in rebellion, and I want to live. Tell me yourself, I challenge your answer. Imagine that you are creating a fabric of human destiny with the object of making men happy in the end, giving them peace and rest at last, but that it was essential and inevitable to torture to death only one tiny creature — that baby beating its breast with its fist, for instance — and to found that edifice on its unavenged tears, would you consent to be the architect on those conditions? Tell me, and tell the truth.”

“No, I wouldn’t consent,” said Alyosha softly.

Ivan’s argument is that a decent human being would not be willing to bring good out of evil in the particular way that happens in the universe, and therefore much less should a good God be willing to do that.

I will leave aside the theological argument for the moment, although it is certainly worthy of discussion.

Ursula Le Guin wrote a short story or thought experiment about this situation called The Ones Who Walk Away From Omelas. There is supposedly a perfectly happy society, but it all depends on the torture of a single child. Everybody knows about this, and at a certain age they are brought to see the child. Two very different responses to this are described:

The terms are strict and absolute; there may not even be a kind word spoken to the child.

Often the young people go home in tears, or in a tearless rage, when they have seen the child and faced this terrible paradox. They may brood over it for weeks or years. But as time goes on they begin to realize that even if the child could be released, it would not get much good of its freedom: a little vague pleasure of warmth and food, no doubt, but little more. It is too degraded and imbecile to know any real joy. It has been afraid too long ever to be free of fear. Its habits are too uncouth for it to respond to humane treatment. Indeed, after so long it would probably be wretched without walls about it to protect it, and darkness for its eyes, and its own excrement to sit in. Their tears at the bitter injustice dry when they begin to perceive the terrible justice of reality, and to accept it. Yet it is their tears and anger, the trying of their generosity and the acceptance of their helplessness, which are perhaps the true source of the splendor of their lives. Theirs is no vapid, irresponsible happiness. They know that they, like the child, are not free. They know compassion. It is the existence of the child, and their knowledge of its existence, that makes possible the nobility of their architecture, the poignancy of their music, the profundity of their science. It is because of the child that they are so gentle with children. They know that if the wretched one were not there snivelling in the dark, the other one, the flute-player, could make no joyful music as the young riders line up in their beauty for the race in the sunlight of the first morning of summer.

Now do you believe in them? Are they not more credible? But there is one more thing to tell, and this is quite incredible.

At times one of the adolescent girls or boys who go to see the child does not go home to weep or rage, does not, in fact, go home at all. Sometimes also a man or woman much older falls silent for a day or two, and then leaves home. These people go out into the street, and walk down the street alone. They keep walking, and walk straight out of the city of Omelas, through the beautiful gates. They keep walking across the farmlands of Omelas. Each one goes alone, youth or girl man or woman. Night falls; the traveler must pass down village streets, between the houses with yellow-lit windows, and on out into the darkness of the fields. Each alone, they go west or north, towards the mountains. They go on. They leave Omelas, they walk ahead into the darkness, and they do not come back. The place they go towards is a place even less imaginable to most of us than the city of happiness. I cannot describe it at all. It is possible that it does not exist. But they seem to know where they are going, the ones who walk away from Omelas.

Some would argue that the ones who walk away are simply confused. In the real world we are constantly permitting evils for the sake of other goods, and as a whole the evils included here are much greater than the torture of a single child. So Omelas should actually be much better and much more acceptable than the real world.

This response however is mistaken, because the real issue is one about the moral object. It is not enough to say that the good outweighs the evil, because a case of doing evil for the sake of good remains a case of doing evil. This is a little more confusing in the story, where one could interpret the actions of those who stay as merely negative: they are not the ones who brought the situation about or maintain it. But in Ivan’s example, the question is whether you are willing to torture a child for the sake of the universal harmony, and Ivan’s implication is that if there is to be a universal harmony, God must be willing to torture people, and in general to cause all the evils of the world, to bring it about.

In any case, whether people are right or wrong about what they do, it is certainly true that we are much more willing to permit evils in a vague and general way to bring about good, than we are to produce evils in a very direct way to bring about good.

Questions on Culture

The conclusion of the last post raises at least three questions, and perhaps others.

First, something still seems wrong or at least incomplete with the picture presented. It is one thing to suppose that things can tend to improve. It is another to suppose that they can get constantly worse. You can count to higher and higher numbers; but you cannot count down forever, because you reach a lower limit. In the same way, insofar as culture seems a necessary part of human life, there seems to be a limit on how degraded a culture could become. So if there is a constant tendency towards the decline of culture, we should have already reached the lower limit.

Second, if one looks at history over longer time scales, it seems obvious that there are also large cultural improvements, as in the history of art and so on. It is not clear how this can happen if there is a constant tendency towards decline.

Third, we argued earlier that the world overall tends to be successful in the sense defined here. The conclusion of the last post seems to call this into question, at least in the sense that we cannot be sure: if things are improving in some ways, and getting worse in others, then it remains unclear whether things are overall getting better or worse. Or perhaps things are just staying the same overall.

It may be some time before I respond to these questions, so for now I will simply point out that their answers will evidently be related to one another.

 

Scott Alexander on the Decline of Culture

From Scott Alexander’s Tumblr:

voximperatoris:

[This post is copied over from Stephen Hicks.]

An instructive series of quotations, collected over the years, on the theme of pessimism about the present in relation to the past:

Plato, 360 BCE: “In that country [Egypt] arithmetical games have been invented for the use of mere children, which they learn as pleasure and amusement. I have late in life heard with amazement of our ignorance in these matters [science in general]; to me we appear to be more like pigs than men, and I am quite ashamed, not only of myself, but of all Greeks.” (Laws, Book VII)

Catullus, c. 60 BCE: “Oh, this age! How tasteless and ill-bred it is!”

Sallust, 86– c. 35 BCE: “to speak of the morals of our country, the nature of my theme seems to suggest that I go farther back and give a brief account of the institutions of our forefathers in peace and in war, how they governed the commonwealth, how great it was when they bequeathed it to us, and how by gradual changes it has ceased to be the noblest and best, and has become the worst and most vicious.” About Rome’s forefathers: “good morals were cultivated at home and in the field; there was the greatest harmony and little or no avarice; justice and probity prevailed among them.” They “adorned the shrines of the gods with piety, their own homes with glory, while from the vanquished they took naught save the power of doing harm.” But Rome now is a moral mess: “The men of to‑day, on the contrary, basest of creatures, with supreme wickedness are robbing our allies of all that those heroes in the hour of victory had left them; they act as though the one and only way to rule were to wrong.” (The Catiline War)

Horace, c. 23-13 BCE: “Our fathers, viler than our grandfathers, begot us who are viler still, and we shall bring forth a progeny more degenerate still.” (Odes 3:6)

Alberti, 1436: Nature is no longer producing great intellects — “or giants which in her youthful and more glorious days she had produced so marvelously and abundantly.” (On Painting)

Peter Paul Rubens, c. 1620: “For what else can our degenerate race do in this age of error. Our lowly disposition keeps us close to the ground, and we have declined from that heroic genius and judgment of the ancients.”

Mary Wollstonecraft, c. 1790: “As from the respect paid to property flow, as from a poisoned fountain, most of the evils and vices which render this world such a dreary scene to the contemplative mind.”

William Wordsworth, 1802:
“Milton! thou should’st be living at this hour:
England hath need of thee: she is a fen
Of stagnant waters: altar, sword, and pen,
Fireside, the heroic wealth of hall and bower,
Have forfeited their ancient English dower
Of inward happiness. We are selfish men;
Oh! raise us up, return to us again;
And give us manners, virtue, freedom, power.”
(“London”)

John Stuart Mill, in 1859, speaking of his generation: “the present low state of the human mind.” (On Liberty, Chapter 3)

Friedrich Nietzsche, in 1871: “What else, in the desolate waste of present-day culture, holds any promise of a sound, healthy future? In vain we look for a single powerfully branching root, a spot of earth that is fruitful: we see only dust, sand, dullness, and languor” (Birth of Tragedy, Section 20).

Frederick Taylor, 1911: “We can see our forests vanishing, our water-powers going to waste, our soil being carried by floods into the sea; and the end of our coal and our iron is in sight.” (Scientific Management)

T. S. Eliot, c. 1925: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.”

So has the world really been in constant decline? Or perhaps, as Gibbon put it in The Decline and Fall of the Roman Empire (1776): “There exists in human nature a strong propensity to depreciate the advantages, and to magnify the evils, of the present times.”

Words to keep in mind as we try to assess objectively our own generation’s serious problems.

I hate this argument. It’s the only time I ever see “Every single person from history has always believed that X is true” used as an argument *against* X.

I mean, imagine that I listed Thomas Aquinas as saying “Technology sure has gotten better the past few decades,” and then Leonardo da Vinci, “Technology sure has gotten better the past few decades”. Benjamin Franklin, “Technology sure has gotten better the past few decades”. Abraham Lincoln, “Technology sure has gotten better the past few decades”. Henry Ford, “Technology sure has gotten better the past few decades.”

My conclusion – people who think technology is advancing now are silly, there’s just some human bias toward always believing technology is advancing.

In the same way technology can always be advancing, culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society. Like technology, this isn’t a monotonic process – there will be disruptions every time one civilization collapses and a new one begins, and occasional conscious attempts by whole societies to reverse the trend, but in general, given movement from time t to time t+1, people can correctly notice cultural decline.

I mean, really. If, like Nietzsche, your thing is the BRUTE STRENGTH of the valiant warrior, do you think that the modern office worker has exactly as much valiant warrior spirit as the 19th century frontiersman? Do you think the 19th century frontiersman had as much as the medieval crusader? Do you think the medieval crusader had as much as the Spartans? Pinker says the world is going from a state of violence to a state of security, and the flip side of that is people getting, on average, more domesticated and having less of the wild free spirit that Nietzsche idealized.

Likewise, when people talk about “virtue”, a lot of the time they’re talking about chastity and willingness to remain faithful in a monogamous marriage for the purpose of procreation. And a lot of the time they don’t even mean actual chastity, they mean vocal public support for chastity and social norms demanding it. Do you really believe our culture has as much of that as previous cultures do? Remember, the sort of sharia law stuff that we find so abhorrent and misogynist was considered progressive during Mohammed’s time, and with good reason.

I would even argue that Alberti is right about genius. There are certain forms of genius that modern society selects for and certain ones it selects against. Remember, before writing became common, the Greek bards would have mostly memorized Homer. I think about the doctors of past ages, who had an amazing ability to detect symptoms with the naked eye in a way that almost nobody now can match, because we use CT scans instead and there’s no reason to learn this art. (Also, I think modern doctors have far fewer total hours of training than older doctors, because as bad as today’s workplace-protection/no-overtime rules are, theirs were worse.)

And really? Using the fact that some guy complained of soil erosion as proof that nobody’s complaints are ever valid? Soil erosion is a real thing, it’s bad, and AFAIK it does indeed keep getting worse.

More controversially, if T. S. Eliot wants to look at a world that, over four hundred years, went from the Renaissance masters to modern art, I am totally okay with him calling that a terrible cultural decline.

Scott’s argument is plausible, although he seems somewhat confused insofar as he appears to associate Mohammed with monogamy. And since we are discussing the matter with an interlocutor who maintains that the decline of culture is obvious, we will concede the point immediately. Scott seems a bit ambivalent about whether a declining culture is a bad thing, but we will concede that as well, other things being equal.

However, we do not clearly see an answer here to one of the questions raised in the last post: if culture tends to decline, why does this happen? Scott seems to suggest an answer when he says, “Culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society.” According to this, culture tends to decline because it becomes incompatible with modern society. The problem with this is that it seems to be a “moronic pseudo-reason”: 2017 is just one year among others. So no parts of culture should be less compatible with life in 2017 than with life in 1017, or in any other year. Chesterton makes a similar argument:

We often read nowadays of the valor or audacity with which some rebel attacks a hoary tyranny or an antiquated superstition. There is not really any courage at all in attacking hoary or antiquated things, any more than in offering to fight one’s grandmother. The really courageous man is he who defies tyrannies young as the morning and superstitions fresh as the first flowers. The only true free-thinker is he whose intellect is as much free from the future as from the past. He cares as little for what will be as for what has been; he cares only for what ought to be. And for my present purpose I specially insist on this abstract independence. If I am to discuss what is wrong, one of the first things that are wrong is this: the deep and silent modern assumption that past things have become impossible. There is one metaphor of which the moderns are very fond; they are always saying, “You can’t put the clock back.” The simple and obvious answer is “You can.” A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.

There is another proverb, “As you have made your bed, so you must lie on it”; which again is simply a lie. If I have made my bed uncomfortable, please God I will make it again. We could restore the Heptarchy or the stage coaches if we chose. It might take some time to do, and it might be very inadvisable to do it; but certainly it is not impossible as bringing back last Friday is impossible. This is, as I say, the first freedom that I claim: the freedom to restore. I claim a right to propose as a solution the old patriarchal system of a Highland clan, if that should seem to eliminate the largest number of evils. It certainly would eliminate some evils; for instance, the unnatural sense of obeying cold and harsh strangers, mere bureaucrats and policemen. I claim the right to propose the complete independence of the small Greek or Italian towns, a sovereign city of Brixton or Brompton, if that seems the best way out of our troubles. It would be a way out of some of our troubles; we could not have in a small state, for instance, those enormous illusions about men or measures which are nourished by the great national or international newspapers. You could not persuade a city state that Mr. Beit was an Englishman, or Mr. Dillon a desperado, any more than you could persuade a Hampshire Village that the village drunkard was a teetotaller or the village idiot a statesman. Nevertheless, I do not as a fact propose that the Browns and the Smiths should be collected under separate tartans. Nor do I even propose that Clapham should declare its independence. I merely declare my independence. I merely claim my choice of all the tools in the universe; and I shall not admit that any of them are blunted merely because they have been used.

Patience

St. Thomas describes the virtue of patience:

I answer that, As stated above (II-II:123:1), the moral virtues are directed to the good, inasmuch as they safeguard the good of reason against the impulse of the passions. Now among the passions sorrow is strong to hinder the good of reason, according to 2 Corinthians 7:10, “The sorrow of the world worketh death,” and Sirach 30:25, “Sadness hath killed many, and there is no profit in it.” Hence the necessity for a virtue to safeguard the good of reason against sorrow, lest reason give way to sorrow: and this patience does. Wherefore Augustine says (De Patientia ii): “A man’s patience it is whereby he bears evil with an equal mind,” i.e. without being disturbed by sorrow, “lest he abandon with an unequal mind the goods whereby he may advance to better things.” It is therefore evident that patience is a virtue.

This brings to mind something like a martyr who is afflicted by others for the truth that he holds and who endures this steadfastly, but in fact the description applies well even to the ordinary idea of patience, according to which, for example, we might say that Ray Kurzweil’s impatience for technological progress leads him to false opinions about current historical trends.

We can illustrate this with a little story. Peter, impatient to get home from work, exceeds the speed limit and weaves in and out of traffic. Minutes from home, he hits a slippery patch; his car goes off the road, rams a tree, and he is killed.

Despite being nothing but a story, it is one that has without a doubt been played out in real life, with minor or major variations, again and again. We can apply the saying of St. Augustine quoted by St. Thomas. Peter’s patience would consist in “bearing evil with an equal mind,” that is, in enduring without disturbance the fact that he is not home yet, “lest he abandon with an unequal mind the goods whereby he may advance to better things”: his disturbed and unequal mind leads him to abandon the goods, namely the ordered manner of driving, whereby he might have advanced to better things, namely actually getting home.

Patience is rightly thought to be related to the virtue of humility. One who judges rightly about his place in the order of things will understand that it is natural in this order that what is best tends to come last. The good wine is served last. Thus such a person should endure without disturbance the lack that comes earlier, in order not to abandon the good by which he might achieve the good that comes later.

The Good I Do Not Want

St. Paul says in the letter to the Romans (7:18-19), “I can will what is right, but I cannot do it. For I do not do the good I want, but the evil I do not want is what I do.”

This happens because the person is divided. Simply speaking, I may believe that the thing I want to do is right; but in another way, I perceive or suppose that “the evil I do not want” is good.

This sort of division can happen in the opposite way as well, so that a person wills the evil that he takes to be good, but cannot do it, because another part of him perceives that it is evil and to be avoided.

Procrastination can work as an example of both cases. Without a doubt procrastinating is often failing to do the good that one wills; but it is also often refusing to do something that would be mostly pointless, and in this sense it is refusing to do something bad, and thus one could say that “I do not do the evil I want, but the good I do not want is what I do.”