Decisions of Faith

In the implicit discussion between Kurt Wise, Trent Horn, and Gregory Dawes, Trent Horn and Gregory Dawes disagree about the truth of Christianity and Catholicism, while they agree that a person should be willing to decide about the truth or falsehood of religious ideas based on arguments. Kurt Wise, in contrast, claims that there can be no argument or evidence whatsoever, no matter how strong, that could ever bring him to change his mind.

If Wise’s claim is taken in the very strong sense of the claim to possess absolute subjective certainty, namely the kind that implies that he literally cannot be wrong, this has been more or less adequately refuted in the original post on sola me. Thus, for example, Wise holds that Scripture is the Word of God in a strong sense, namely one that implies that God actually asserts the things asserted in Scripture. Many Christians do not hold this. Likewise, Wise holds that Scripture asserts that the earth is young, and again, many Christians do not hold this. So Wise has the responsibility of justifying his position, rather than asserting that he has the infallible knowledge that he alone is right and that other Christians are wrong.

The same thing would be true if the issue were his general commitment to Christianity. Here it is a bit more complex because the real question in this case is, “Is it good for me to belong to a Christian community?”, but one can give neither a positive nor a negative answer to this question without asserting various facts about the world, facts that will differ from one individual to another, but facts nonetheless. Once again Wise will have no special claim to possess an ability to discern these facts infallibly.

If Wise is merely claiming to possess objective certainty, perhaps on account of the possession of divine faith which cannot be in error, then he should be open to changing his mind based on arguments, as Horn and Dawes hold, in the same way that a person should be open to acknowledging mistakes in his mathematical arguments, should someone happen to point out such mistakes.

However, our earlier discussions suggest that the real issue is different, that it is not a question of any kind of certainty, whether subjective or objective. We have seen that belief in general is voluntary, and that it involves various motives. We have seen that this applies especially to beliefs remote from the senses, and to God and religion in particular. All of this suggests that something different is at stake in claims such as Wise’s. Let’s look again at Wise’s concluding statement:

Although there are scientific reasons for accepting a young earth, I am a young-age creationist because that is my understanding of the Scripture. As I shared with my professors years ago when I was in college, if all the evidence in the universe turned against creationism, I would be the first to admit it, but I would still be a creationist because that is what the Word of God seems to indicate. Here I must stand.

He seems to suggest having reasons for holding young earth creationism, namely reasons which would make it likely to be true. In particular, “that is what the Word of God seems to indicate.” But if God says something, this seems to mean it is true. So he appears to be claiming a reason to think that creationism is objectively true. On the other hand, “If all the evidence in the universe turned against creationism, I would be the first to admit it, but I would still be a creationist,” stands directly in contrast to this. In other words, here he seems to be saying that the kind of reasons that make a thing likely to be true or false do not matter to him.

The truth of the matter is the latter more than the former. In other words, someone who says about a religious issue, “No evidence could ever change my mind about this,” is not saying this because he possesses the kinds of certainty discussed above. Rather, he is suggesting that evidence and his motives for belief are detached from one another to such an extent that differences in evidence will never give him a sufficient motive to change his decision to believe.

We can see this in Wise’s description of his personal decision, found in the same short text from In Six Days.

Eighth grade found me extremely interested in all fields of science. For over a year, while others considered being firemen and astronauts, I was dreaming of getting a Ph.D. from Harvard University and teaching at a big university. I knew this to be an unattainable dream, for I knew it was a dream, but …well, it was still a dream. That year, the last in the series of nine years in our small country school, was terminated by the big science fair. The words struck fear in all, for not only was it important for our marks and necessary for our escape from the elementary sentence for crimes unknown, but it was also a sort of initiation to allow admittance into the big city high school the next year. The 1,200 students of the high school dwarfed the combined populations of three towns I lived closer to than that high school. Just the thought of such hoards of people scared us silly. In any case, the science fair was anticipated years in advance and I started work on mine nearly a year ahead of the fair itself.

I decided to do my science fair project on evolution. I poured myself into its study. I memorized the geologic column. My father and I constructed a set of wooden steps representing geologic time where the run of each step represented the relative length of each period. I bought models and collected fossils. I constructed clay representations of fossils I did not have and sketched out continental/ocean configurations for each period. I completed the colossal project before the day of the fair. Since that day was set aside for last minute corrections and setup, I had nothing to do. So, while the bustle of other students whirred about us, I admitted to my friend Carl (who had joined me in the project in lieu of his own) that I had a problem. When he asked what the problem was I told him that I could not reconcile what I had learned in the project with the claims of the Bible. When Carl asked for clarification, I took out a Bible and read Genesis 1 aloud to him.

At the end, and after I had explained that the millions of years of evolution did not seem to comport well with the six days of creation, Carl agreed that it did seem like a real problem. As I struggled with this, I hit upon what I thought was an ingenious (and original!) solution to the problem. I said to Carl, “What if the days were millions of years long?” After discussing this for some time, Carl seemed to be satisfied. I was not — at least not completely.

What nagged me was that even if the days were long periods of time, the order was still out of whack. After all, science said the sun came before the earth — or at least at the same time — and the Bible said that the earth came three days before the sun. Whereas science said that the sea creatures came before plants and the land creatures came before flying creatures, the Bible indicated that plants preceded sea creatures and flying creatures preceded land creatures. On the other hand, making the days millions of years long seemed to take away most of the conflict. I thus determined to shelve these problems in the back recesses of my mind.

It didn’t work. Over the next couple of years, the conflict of order nagged me. No matter how I tried, I could not keep the matter out of mind. Finally, one day in my sophomore year of high school, when I thought I could stand it no longer, I determined to resolve the issue. After lights were out, under my covers with flashlight in hand I took a newly purchased Bible and a pair of scissors and set to work. Beginning at Genesis 1:1, I determined to cut out every verse in the Bible which would have to be taken out to believe in evolution. Wanting this to be as fair as possible, and giving the benefit of the doubt to evolution, I determined to read all the verses on both sides of a page and cut out every other verse, being careful not to cut the margin of the page, but to poke the page in the midst of the verse and cut the verse out around that.

In this fashion, night after night, for weeks and months, I set about the task of systematically going through the entire Bible from cover to cover. Although the end of the matter seemed obvious pretty early on, I persevered. I continued for two reasons. First, I am obsessive compulsive. Second, I dreaded the impending end. As much as my life was wrapped up in nature at age eight and in science in eighth grade, it was even more wrapped up in science and nature at this point in my life. All that I loved to do was involved with some aspect of science. At the same time, evolution was part of that science and many times was taught as an indispensable part of science. That is exactly what I thought — that science couldn’t be without evolution. For me to reject evolution would be for me to reject all of science and to reject everything I loved and dreamed of doing.

The day came when I took the scissors to the very last verse — nearly the very last verse of the Bible. It was Revelation 22:19: “If any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life, and out of the holy city, and from the things which are written in this book.” It was with trembling hands that I cut out this verse, I can assure you! With the task complete, I was now forced to make the decision I had dreaded for so long.

With the cover of the Bible taken off, I attempted to physically lift the Bible from the bed between two fingers. Yet, try as I might, and even with the benefit of intact margins throughout the pages of Scripture, I found it impossible to pick up the Bible without it being rent in two. I had to make a decision between evolution and Scripture. Either the Scripture was true and evolution was wrong or evolution was true and I must toss out the Bible. However, at that moment I thought back to seven or so years before when a Bible was pushed to a position in front of me and I had come to know Jesus Christ. I had in those years come to know Him. I had become familiar with His love and His concern for me. He had become a real friend to me. He was the reason I was even alive both physically and spiritually. I could not reject Him. Yet, I had come to know Him through His Word. I could not reject that either. It was there that night that I accepted the Word of God and rejected all that would ever counter it, including evolution. With that, in great sorrow, I tossed into the fire all my dreams and hopes in science.

This is not a description of discovering that creationism is objectively true and that evolution is objectively false. It is the description of a personal decision, which is framed in terms of being faithful to Christ and rejecting evolution, or accepting evolution and rejecting Christ. Wise chooses to be faithful to Christ. Since this was not a question of weighing evidence for anything in the first place, any evidence that comes up should never affect his motives for his decision. Thus he says that no evidence can ever change his decision.

I would argue that in this way too, Trent Horn and Gregory Dawes are correct, and that Kurt Wise is mistaken. The problem is that people have a hard time understanding their motives for believing things. Most people think without reflection that most of their beliefs are simply motivated by the truth and by the evidence for that truth. So if asked, “Would you change your most important and fundamental beliefs if you were confronted with conclusive evidence against them?”, most people will respond by saying that such evidence cannot and will not come up, since their beliefs are true, rather than saying that they would not change their beliefs in that situation. Kurt Wise, on the other hand, does not deny that the situation could come up, but says that he would not change his mind even in this situation.

The implication of Wise’s claim is that his motives for belief are entirely detached from evidence. This is actually true to a great extent, as can be seen from his description of his decision. However, it is not entirely true. Just as people are mistaken if they suppose that their beliefs are motivated by evidence alone, so Wise is mistaken to suppose that evidence is entirely irrelevant to his decision.

This can be seen most of all from the fact that Wise’s position requires that he make the three claims mentioned in yesterday’s post, namely that God always tells the truth, that Scripture is the Word of God in the sense that what is asserted in Scripture is asserted by God, and that Scripture asserts that the earth is young (or, in the context of his decision, that evolution contradicts Scripture; he says that the conclusion that the earth is young was something additional). If any of these three claims is mistaken, then Wise could decide to be faithful to Christ without rejecting evolution. So the framing of his decision depends on knowing that these three things are true. And precisely because these three claims together imply that evolution is false, evidence for evolution is also evidence that at least one of these three claims is mistaken. And note that in his description of the events that led up to his decision, Wise is in fact mistaken about the meaning of Genesis 1.

Since evidence for evolution is evidence that one of the three claims is mistaken, then if “all the evidence in the universe” were to indicate that evolution is true, all the evidence in the universe would also indicate that Wise has made a mistake in the way he framed his decision. Evidence remains relevant to his decision, therefore, because he may have been mistaken in this way, even if the decision in itself is not about weighing evidence for anything.

Someone could respond that Wise was wrong to frame his decision in this way, or at least to make it absolute in this particular way, but that he would be right to hold absolutely to the decision to be faithful to Christ, and to say that evidence is entirely irrelevant to this decision, as long as he does not bring in evolution, creation, Scripture, and so on.

The problem with this is that even if he frames his decision as “to be faithful or unfaithful to Christ,” the framing of this decision still requires that he assert various facts about the world, just as his actual decision did. For example, if Christ did not exist, as certain people believe, then one cannot be faithful to Christ as to a person, and again he would turn out to have been mistaken in the very way he framed his decision. So his decision requires that he assert that Christ existed, which is a claim about the world. Of course it is not very likely that Christ did not exist, but evidence is relevant to the issue, and this is only one of many possible ways that he could be mistaken. If Christ was not worthy of trust, and Wise knew this, perhaps he would make a different decision.

To put this in an entirely general way, even if your decision seems to involve only motives that seem unrelated to truth and to evidence, “this is a good decision,” is itself a claim about the world. Either this claim is true, or it is false, and evidence is relevant to it. If it is false, you should change your mind about that decision. Consequently you should always be open to evidence and arguments against the truth of your position, or even against the goodness of your decision, just as Trent Horn and Gregory Dawes assert.

There is still another way that Kurt Wise is mistaken. He is mistaken to think that evidence should be irrelevant to his decision. But he is also mistaken to think that evidence is in fact irrelevant to it. He says that he would not change his mind even if all the evidence in the universe stood against him, but this is not the case. He is a human being who possesses human nature, and he is changeable in the same way that other human beings are. It is clear from the above discussion that Wise would be better off if he were more open to reality, but this does not mean that reality does not affect him at all, or that things could not happen which would change his mind, as for example if he had a personal experience of God in which God explained to him that his understanding of Scripture was mistaken.

Sola Me Revisited

Earlier we discussed the idea of sola me: the claim of an individual to possess the infallible ability to discern a doctrine to be revealed by God.

Kurt Wise, concluding his contribution to the book In Six Days: Why Fifty Scientists Choose to Believe in Creation, provides an example of someone making such a claim, at least effectively:

Although there are scientific reasons for accepting a young earth, I am a young-age creationist because that is my understanding of the Scripture. As I shared with my professors years ago when I was in college, if all the evidence in the universe turned against creationism, I would be the first to admit it, but I would still be a creationist because that is what the Word of God seems to indicate. Here I must stand.

Basically Wise is making three claims:

(1) God always tells the truth.

(2) Scripture is the Word of God.

(3) Scripture says that the earth is young.

It follows from these three claims that the earth is actually young. Insofar as Wise says that he would not change his mind about this no matter how much evidence was found against it, this implies that he is absolutely certain of all three of these claims. Any evidence against a young earth, in fact, is evidence against the conjunction of these three claims, and Wise is saying that he will never give up this conjunction no matter how much evidence is brought against it.
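To spell out the probabilistic point (my own gloss, not anything Wise says): since the three claims taken together entail a young earth, the conjunction can never be more probable than a young earth itself, so whatever drives the probability of a young earth down also puts a ceiling on the probability that all three claims are true. Writing C for the conjunction of (1), (2), and (3), and Y for “the earth is young,” we have, for any evidence E,

    \[
    P(C \mid E) \;\le\; P(Y \mid E),
    \]

so absolute certainty in the conjunction can survive arbitrary evidence against a young earth only if that evidence is never allowed to lower the probability of a young earth at all.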

Trent Horn, in a blog post entitled Response to a Mormon Critic, provides an implicit criticism of this kind of idea when he says, “Is there anything that would convince you that Mormonism is false? If not, then why should you expect other people to leave their faiths and become Mormon when you aren’t prepared to do the same?”

Trent Horn is a convert to Catholicism, so his question can be understood as a criticism of people who would be unwilling to change their minds as he himself did, or at least he is saying that someone who is unwilling to change his mind himself should not criticize others for not changing their minds, even if they disagree with him.

Gregory Dawes, interviewed by Richard Marshall, provides another example of such a criticism:

Christian philosopher William Lane Craig writes somewhere about what he calls the “ministerial” and the “magisterial” use of reason. (It’s a traditional view — he’s merely citing Martin Luther — and one that Craig endorses.) On this view, the task of reason is to find arguments in support of the faith and to counter any arguments against it. Reason is not, however, the basis of the Christian’s faith. The basis of the Christian’s faith is (what she takes to be) the “internal testimony of the Holy Spirit” in her heart. Nor can rational reflection be permitted to undermine that faith. The commitment of faith is irrevocable; to fall away from it is sinful, indeed the greatest of sins.

It follows that while the arguments put forward by many Christian philosophers are serious arguments, there is something less than serious about the spirit in which they are being offered. There is a direction in which those arguments will not be permitted to go. Arguments that support the faith will be seriously entertained; those that apparently undermine the faith must be countered, at any cost. Philosophy, to use the traditional phrase, is merely a “handmaid” of theology.

There is, to my mind, something frivolous about a philosophy of this sort. My feeling is that if we do philosophy, it ought to be because we take arguments seriously. This means following them wherever they lead. This may sound naïve. There are moral commitments, for instance, that few of us would be prepared to abandon, even if we lacked good arguments in their support. But if the followers of Hume are right, there is a close connection between our moral beliefs and our moral sentiments that would justify this attitude. In any case, even in matters of morality, we should not be maintaining positions that have lots of arguments against them and few in their favour, just because we have made a commitment to do so.

Dawes is a former Catholic, and as in the case of Horn, his statement can be taken as a criticism of people who would be unwilling to change their minds as he himself did. According to him you are not taking arguments seriously if you know in advance, like Kurt Wise, that you will never change your mind about certain things.

I would argue that relative to the question of certainty, both Trent Horn and Gregory Dawes are basically right, in several different ways, and that Kurt Wise is basically wrong in those ways. I will explain this in more detail in another post.

More or Less Remote From the Senses

All of human knowledge comes from experience, and all experience is first derived from the senses. Aristotle describes this process at the beginning of his Metaphysics:

All men by nature desire to know. An indication of this is the delight we take in our senses; for even apart from their usefulness they are loved for themselves; and above all others the sense of sight. For not only with a view to action, but even when we are not going to do anything, we prefer seeing (one might say) to everything else. The reason is that this, most of all the senses, makes us know and brings to light many differences between things.

By nature animals are born with the faculty of sensation, and from sensation memory is produced in some of them, though not in others. And therefore the former are more intelligent and apt at learning than those which cannot remember; those which are incapable of hearing sounds are intelligent though they cannot be taught, e.g. the bee, and any other race of animals that may be like it; and those which besides memory have this sense of hearing can be taught.

The animals other than man live by appearances and memories, and have but little of connected experience; but the human race lives also by art and reasonings. Now from memory experience is produced in men; for the several memories of the same thing produce finally the capacity for a single experience. And experience seems pretty much like science and art, but really science and art come to men through experience; for ‘experience made art’, as Polus says, ‘but inexperience luck.’ Now art arises when from many notions gained by experience one universal judgement about a class of objects is produced. For to have a judgement that when Callias was ill of this disease this did him good, and similarly in the case of Socrates and in many individual cases, is a matter of experience; but to judge that it has done good to all persons of a certain constitution, marked off in one class, when they were ill of this disease, e.g. to phlegmatic or bilious people when burning with fevers-this is a matter of art.

Since our knowledge depends on the senses, to the degree that knowledge becomes more remote from the senses, it becomes harder to know the truth. But more remote in what way? More remote precisely in being less directly derived from the things that we sense. Thus for example Descartes provides an example of a particularly ridiculous error when he says in his Principles of Philosophy,

Fourthly, if body C were wholly at rest and were slightly larger than B, whatever the speed at which B were moved toward C, it would never move this C, but would be repelled from it in the contrary direction; because a body at rest resists a great speed more than a small one, and this in proportion to the excess of the one over the other, and, therefore, there would always be a greater force in C to resist than in B to impel.

In other words, if a smaller object hits a larger object, the larger object will not move in any way, but the smaller one will rebound in the opposite direction. How false this is does not need to be argued, and this precisely because of its closeness to the senses.

Sometimes a distinction is made between empirical and non-empirical knowledge, but in truth there cannot be a rigid distinction between these two things, because all of our knowledge is empirical, and thus there can only be differences of degree here. Thus for example the question of whether there is meaning in the universe might be considered a philosophical rather than an empirical issue, but we have given empirical reasons for thinking that there is.

But again, to the degree that a certain matter is more distant from the senses, it will be more difficult to know the truth about that matter, and consequently people are more likely to make a mistake about it. This happens in two ways.

In the first and more obvious way, when it is more difficult to test the matter with something sensible, as we might test Descartes’s claim about a smaller body hitting a larger body, it is easier to fall into error without there being a simple way to correct that error.

The second way is less obvious, but follows from the discussion about beliefs and motivations. If some fact about the world makes a big difference in our sensible experience, then we will be interested in knowing that fact, in order to be able to affect our experience. If a stove is hot, touching it will be painful, so it is important to know whether the stove is hot or not. Thus, for the sake of such purposes, people will be interested in knowing the truth about matters close to the senses. But if some fact does not seem to affect our sensible experience much, then people will care less about knowing the truth about that matter, since they do not need to know it in order to affect their experience. This implies that other motives, motives distinct from the desire for truth, will affect their beliefs in these matters more than in matters closer to the senses. And insofar as they are more affected by motives that can lead away from the truth, they will again be more likely to fall into error, this time without a strong desire to correct that error.

Taking these two ways in combination, people will fall into error more frequently in matters that are more distant from the senses, and in such situations people will have neither a great desire to correct the error nor an easy way to do so.

If we compare these somewhat theoretical deductions with experience, they are verified fairly well. There are various matters where there is much more disagreement than in other matters. For example, there is much more widespread disagreement in religion, theology, philosophy, politics, ethics, economics, and so on, than there is in mathematics and physics. We can easily see that the areas with more widespread disagreement are the ones more remote from the senses. Someone might say that politics, ethics, and economics are not remote from the senses, but if we consider the fact that all of these topics involve moral issues, we can see that they are in fact remote in the way under consideration, namely that it is not easy to subject them to sensible tests. And on the other hand, where there is disagreement in physics, it is likely to be in matters where it is hard for the difference to make a difference to the senses, as for example in interpretations of quantum mechanics.

Greater disagreement, of course, does not demonstratively prove the existence of more error, since even when there is agreement, there can be agreement on something false. But it strongly suggests the existence of more error, since disagreement cannot exist without someone being wrong, whereas agreement can be without anyone being in error. And in the areas mentioned, disagreement is so widespread that there is necessarily a great deal of error in those matters.

And these areas are also areas where we can see that people’s opinions are strongly affected by motives distinct from the desire for truth, as was suggested by the theoretical account above. Some indications of this:

First, in such areas people tend to form into various groups or “schools”, where the greater part of a whole body of opinions is accepted together. This happens more in religion and in politics than in the other examples, but the tendency is apparent in the other cases as well. If people were influenced only by the desire for truth, we could expect a somewhat more even distribution of opinion, where intermediate positions would be more common. Instead, the actual situation suggests that people have a desire to fit into certain groups, and to some extent adopt their opinions in order to favor this result.

Second, arguments in such areas tend to be more emotional than arguments about matters which are more easily tested. If an argument is witnessed by outsiders who have no understanding of the topic, and one of the participants is much more emotional than the others, the outsiders will tend to presume that the less emotional participant is more likely to be right. And this is for a good reason, namely that the emotions are moved more by sensible goods than by truth in itself, and consequently someone who is very emotional about some intellectual issue is likely being moved by desires other than the desire for truth.

Third, related to the second reason, conversations about such matters are much more likely to be “bad conversations” of the kind noted in the previous post. They are much more likely to result in anger and insults, and in the belief that one’s conversational partner is not of good will, than conversations about mathematics. Thus, for example, many people accuse others who do not accept their religion of being of bad will, as in this blog comment:

For Pete’s sake, a simple self-educated layperson like myself has engaged in countless debates about the historicity of the Resurrection, both in person and on websites such as this one, and have not only come out on top every single time, but have yet to ever hear presented (not even once!) a decent case contra that cannot be demolished with minimal effort. The solidity and strength of the pro arguments, coupled with the pathetic weakness of all proposed alternative explanations, are what have led me to the (unwilling) conclusion that it takes an active act of will to reject them, and that unbelievers are, as in the words of Saint Paul, “without excuse”.

Likewise, it is very common for people to consider others who disagree with their politics to be bad people. Thus for example Susan Douglas writes,

I hate Republicans. I can’t stand the thought of having to spend the next two years watching Mitch McConnell, John Boehner, Ted Cruz, Darrell Issa or any of the legions of other blowhards denying climate change, thwarting immigration reform or championing fetal “personhood.”

After some discussion, she concludes the post:

Why does this work? A series of studies has found that political conservatives tend toward certain psychological characteristics. What are they? Dogmatism, rigidity and intolerance of ambiguity; a need to avoid uncertainty; support for authoritarianism; a heightened sense of threat from others; and a personal need for structure. How do these qualities influence political thinking?

According to researchers, the two core dimensions of conservative thought are resistance to change and support for inequality. These, in turn, are core elements of social intolerance. The need for certainty, the need to manage fear of social change, lead to black-and-white thinking and an embrace of stereotypes. Which could certainly lead to a desire to deride those not like you—whether people of color, LGBT people or Democrats. And, especially since the early 1990s, Republican politicians and pundits have been feeding these needs with a single-minded, uncomplicated, good-vs.-evil worldview that vilifies Democrats.

So now we hate them back. And for good reason. Which is too bad. I miss the Fred Lippitts of yore and the civilized discourse and political accomplishments they made possible. And so do millions of totally fed-up Americans.

As I stated in the post on beliefs and motivations, it is not difficult for people to notice that motives other than the desire for truth are influencing other people, but they tend not to notice those motives in themselves. In a similar way, many people will have no difficulty admitting that the point of this post applies to other people, but will have a much harder time admitting that it applies to themselves.

Of course it is true that some people have more of a desire for truth in itself than other people. And the stronger this desire in a person, the more likely the person is to hold the true position in any of these matters. But it is not credible to suppose that people are actually divided in the “good-vs.-evil” way that Susan Douglas says that Republicans divide people, and in which she herself divides people. If I were a Mormon, for example, it would remain absurd for me to suppose that Mormons are good people and that everyone else is bad, or that Mormons are reasonable people and that everyone else is unreasonable. Given the premise that Mormonism is true, it would follow that a person more interested in truth would be more likely to adopt Mormonism. But it would not follow that Mormons overall have a different nature from other people, nor is this credible in the slightest.

In other words, of course there are true positions in religion, theology, philosophy, politics, ethics, economics, and so on. But overall people’s beliefs are more affected by non-truth-related motives, and by only-somewhat-truth-related motives, in these matters than in matters closer to the senses, and they are therefore more likely to fall into error in the areas more remote from the senses. Now you might personally hold the true position in some of these matters, or in all of these matters. Or perhaps you don’t. Likewise, you might personally care more about the truth than other people do. Or perhaps you don’t. Either way, there is little reason to suppose that the point of this post does not apply to you.

Quick to Hear and Slow to Speak

St. James says in 1:19-20 of his letter, “Let every man be quick to hear, slow to speak, slow to anger, for the anger of man does not work the righteousness of God.”

What does he mean? How is it possible for every man to be quick to hear and slow to speak? A conversation needs to have an approximately equal amount of listening and speaking. If each of two conversational partners insists on listening instead of speaking, the conversation will go nowhere. Whenever one is speaking, the other should be listening, and if one is listening, the other must be speaking, since there is nothing to listen to if neither of the two is saying anything.

The reference to anger is a clue. St. James is speaking of our natural tendencies, and saying that the natural tendency to anger is excessive and must be resisted. Likewise, we tend to have more of a desire to speak than to listen. We would rather explain our own position than listen to that of another. In order for a conversation to go well, each of the partners should restrain his own desire to express his own opinion, in order to listen to the other. This does not imply anything impossible any more than restraining anger is impossible; people have a naturally excessive desire to express themselves in the same way that they have a naturally excessive tendency to become angry. Thus St. James is not against a conversation which is equally composed of listening and speaking; but he is saying that such a conversation requires restraint on both parties to the conversation. A conversation without such restraint leads to situations where someone thinks, “he’s not listening to me,” which then leads precisely to the anger that St. James is opposing.

Robert Aumann has a paper, “Agreeing to Disagree”, which mathematically demonstrates that two people who have the same prior probability distribution and who follow the laws of probability cannot assign different posterior probabilities to any matter, so long as their opinions on the matter are common knowledge between them. He begins his paper:

If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In brief, people with the same priors cannot agree to disagree.

We publish this observation with some diffidence, since once one has the appropriate framework, it is mathematically trivial.

The implication is something like this: one person may believe that there is a 50% chance it will rain tomorrow. Another person, having access to other information, such as having seen the weather channel, thinks that there is a 70% chance of rain. Currently these estimates are not common knowledge. But if the two people converse until they both know each other’s current opinion (which will possibly no longer be 50% and 70%), they must agree on the probability of rain, given that they have the same prior distribution.
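For readers who like to see the mechanics, here is a minimal sketch of the back-and-forth version of the result, due to Geanakoplos and Polemarchakis rather than to Aumann’s original static statement. The state space, common prior, partitions, and event below are invented purely for illustration:

    from fractions import Fraction

    states = range(6)                            # six possible states of the world
    prior = {s: Fraction(1, 6) for s in states}  # common prior: all states equally likely
    event = {0, 1, 4}                            # the event "it will rain tomorrow"

    # Each agent's private information is modelled as a partition of the states.
    partition = {1: [{0, 1, 2}, {3, 4, 5}],
                 2: [{0, 3}, {1, 4}, {2, 5}]}

    def posterior(info):
        """P(event | info) under the common prior."""
        return sum(prior[s] for s in info & event) / sum(prior[s] for s in info)

    # info[agent][s] = what the agent would know if s were the true state.
    info = {a: {s: set(next(c for c in partition[a] if s in c)) for s in states}
            for a in (1, 2)}

    true_state = 1
    for r in range(10):
        for speaker, listener in ((1, 2), (2, 1)):
            # The speaker announces a posterior; the listener keeps only those
            # states at which the speaker would have announced what was heard.
            announce = {s: posterior(info[speaker][s]) for s in states}
            for s in states:
                info[listener][s] = {t for t in info[listener][s]
                                     if announce[t] == announce[s]}
            print(f"round {r}: agent {speaker} announces {announce[true_state]}")
        if posterior(info[1][true_state]) == posterior(info[2][true_state]):
            print("posteriors equal:", posterior(info[1][true_state]))
            break

At the true state the two posteriors coincide after a couple of rounds of announcements. The point is only to display the update rule, not to model real conversations, which is exactly where the caveats below come in.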

There are several reasons why this does not apply to real human beings. First of all, people do not have an actual prior probability distribution; such a distribution means having an estimate of the original probability of every possible statement, and obviously people do not actually have such a thing. So not having a prior distribution at all, they cannot possibly have the same prior distribution.

Second, the theorem presumes that each of the two knows that each of the two is reasonable in exactly the sense required, namely having such a prior and updating on it according to the laws of probability. In real life no one does this, even apart from the fact that they do not have such a prior.

Various extensions of the theorem have been published by others, some of which come closer to having a bearing on real human beings. Possibly I will consider some of these results in the future. Even without such extensions, however, Aumann’s result does have some relationship with real disagreements.

We have all had good conversations and bad conversations when we disagreed with someone, and it is not so difficult to recognize the difference. In the best conversations, we may have actually come to partial or even full agreement, even if not exactly on the original position of either partner. In the worst conversations, neither partner budged, and both concluded that the other was being stubborn and unreasonable. Possibly the conversation descended to the point of anger, insults and attributing bad will to the other. And on the other hand we have also had conversations which were somewhat in the middle between these two extremes.

These facts are related to Aumann’s result because his result is that reasonable conversational partners must end up agreeing, this being understood in a simplified mathematical sense. Because of the simplifications it does not strictly apply to real life, but something like it is also true in real life, and we can see that in our experiences with conversations with others involving disagreements. In other words, basically whenever we get to the point where neither partner will budge, we begin to think that someone is being stubborn and at least somewhat unreasonable.

St. James is explaining how to avoid the bad conversations and have the good conversations. And that is by being “quick to hear.” It is a question of listening to the other. And basically that implies asking the question, “How is this right, in what way is it true?” If someone approaches a conversation with the idea that he is going to prove that the other is wrong, the other will get the impression that he is not being listened to. And this impression, in this case, is basically correct. A person sees what he is saying as true, not as false, so one who does not see how it could be true does not even understand it. If you say something, I do not even understand you, until I see a way that what you are saying could be so. And on the other hand, if I do approach a conversation with the idea of seeing what is true in the position that is in disagreement with mine, the conversation will be far more likely to end up as one of the good conversations, and far more likely to end in agreement.

Often even if a person is wrong in his conclusion, part of that conclusion is correct, and it is important for someone speaking with him to acknowledge the part that is correct before criticizing the part that is wrong. And on the other hand, even if a person’s conclusion is completely wrong, insofar as that is possible, there will always be some evidence for his conclusion. It is important to acknowledge that evidence, rather than simply pointing out the evidence against his conclusion.

The Sun and The Bat

Aristotle begins Book II of the Metaphysics in this way:

The investigation of the truth is in one way hard, in another easy. An indication of this is found in the fact that no one is able to attain the truth adequately, while, on the other hand, we do not collectively fail, but every one says something true about the nature of things, and while individually we contribute little or nothing to the truth, by the union of all a considerable amount is amassed. Therefore, since the truth seems to be like the proverbial door, which no one can fail to hit, in this respect it must be easy, but the fact that we can have a whole truth and not the particular part we aim at shows the difficulty of it.

Perhaps, too, as difficulties are of two kinds, the cause of the present difficulty is not in the facts but in us. For as the eyes of bats are to the blaze of day, so is the reason in our soul to the things which are by nature most evident of all.

I have a slight feeling of uncertainty about the argument I made yesterday. I do not see any flaw in the argument, and it seems to me that it works. But the uncertain feeling remains. This suggests that “the cause of the present difficulty is not in the facts but in us,” and that the reason for the feeling is that “as the eyes of bats are to the blaze of day, so is the reason in our soul to the things which are by nature most evident of all.”

Should I dismiss this feeling? One could argue that such feelings result from the imagination or from custom, so that if we do not see any flaw in the argument, we should just try to get over such feelings.

I think this would be a mistake, if the meaning is that one should try to get over it without making some other kind of progress first. In Plato’s Meno, Socrates talks about the fact that someone just learning something does not yet possess it in an entirely clear way:

Soc. And that is the line which the learned call the diagonal. And if this is the proper name, then you, Meno’s slave, are prepared to affirm that the double space is the square of the diagonal?

Boy. Certainly, Socrates.

Soc. What do you say of him, Meno? Were not all these answers given out of his own head?

Men. Yes, they were all his own.

Soc. And yet, as we were just now saying, he did not know?

Men. True.

Soc. But still he had in him those notions of his-had he not?

Men. Yes.

Soc. Then he who does not know may still have true notions of that which he does not know?

Men. He has.

Soc. And at present these notions have just been stirred up in him, as in a dream; but if he were frequently asked the same questions, in different forms, he would know as well as any one at last?

Men. I dare say.

The boy at first possesses the idea that the double square is the square on the diagonal as something which has “just been stirred up in him, as in a dream,” rather than in a clear way. And this is appropriate, because as we have seen earlier, people can make mistakes even in mathematics. One reason that the boy needs to be “frequently asked the same questions, in different forms,” is that by doing this he will become much more sure that he has not been misled by the argument.
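For reference, the fact the boy is brought to affirm, stated in modern notation rather than Plato’s: if the original square has side s, its diagonal d satisfies

    \[
    d^2 = s^2 + s^2 = 2s^2,
    \]

so the square built on the diagonal has exactly twice the area of the original square.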

There is at least one objective sign that there may be a flaw in my argument from yesterday, namely the fact that it does not appear to be an argument that anyone has made previously, at least as far as I know. It may be implicit both in the theology of St. Thomas and in that of Hans Urs von Balthasar, in the discussion of the distinction between the persons of the Trinity as a source for the distinction of creation from Creator, but it does not appear to have been made explicitly. In general I think it is a mistake to ignore such a feeling of uncertainty, just as it would be a mistake for the boy to ignore the dreamlike character of his knowledge. That feeling means something. It may not mean anything about the facts, as Aristotle says, but it means that there is something lacking in my understanding. And if I manage to banish the feeling, to “just get over it,” that does not necessarily mean that I have actually cured that lack in my understanding. It may be present just as much as before.

This is similar to the situation where someone observes some evidence against what he currently believes to be true. His “gut feeling” will tell him that it is something that stands against what he believes, but he may attempt to remove that feeling by fitting it into the context of his belief. However, even if his explanation is correct, even if his estimate of the prior probability is mistaken, his original feeling was not meaningless. In almost every case, it really was evidence against his position.

When we have a conclusion and we want to make an argument for it, whether that is because we suspect that the conclusion is true and want to settle the matter, or because we simply want the conclusion to be true, there is a danger of claiming to know for sure that an argument works without really understanding it. And this danger seems to be especially serious when one is speaking about God or the first principles of things, because they are like the blaze of day compared to the eyes of bats. For example, I commented on a recent post by James Chastek because it appears to me that he is trying to take a shortcut, that he is trying to avoid the hard work of actually understanding. I have not yet continued that discussion because I suspect that neither he nor I actually understand the argument that he is making, and I would prefer to understand it better before continuing the discussion.

Do not ignore such vague feelings. Do not dismiss them as “just the imagination,” do not try to “just get over it.” They are telling you something, and often something important.

Intelligence Doesn’t Always Help

Suppose X is some statement, and you currently think that there is a 50% chance that X is true, and a 50% chance that it is false.

You ask two people about X. One says that it is true, and the other says that it is false. The one saying that it is true is somewhat more intelligent than the one who says that it is false. The evidence contained in these claims will surely not balance out exactly. At this point, then, is it more likely that X is true, or that it is false?

It is reasonable to say that at this point it is more likely that X is true. In fact, if we discovered that on average it would be more likely that X is false in such a situation, we should rename the thing we were calling “intelligence” and call it “unintelligence”, since it would not promote understanding but the lack of it.
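As a toy calculation of this “on average” effect (the reliability figures are invented for illustration): suppose the more intelligent person asserts the truth about such matters 70% of the time and the less intelligent one 60% of the time, independently of one another. Starting from the 50% prior, the posterior probability of X after hearing the first affirm it and the second deny it is

    \[
    P(X \mid \text{testimony}) \;=\; \frac{0.5 \times 0.7 \times 0.4}{0.5 \times 0.7 \times 0.4 \;+\; 0.5 \times 0.3 \times 0.6} \;=\; \frac{0.14}{0.23} \;\approx\; 0.61,
    \]

where 0.4 is the chance that the less intelligent person wrongly denies X when it is true, and 0.3 the chance that the more intelligent one wrongly affirms it when it is false. So X comes out somewhat more likely true than false, but nothing stronger than that.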

But this is a question about what is true on average. It will not always be true once other factors are taken into account. And since belief is voluntary, and people believe things for other motives besides truth, under many circumstances intelligence can actually hinder the search for truth. For we can expect that a more intelligent person will on average be somewhat better at finding a way to attain his goals than a less intelligent person. Thus to the degree that those goals happen to hinder the search for truth, the more intelligent person will actually be less likely to come to the truth.

For example, among the goals that people often have in believing things, or at least in continuing to believe what they have believed in the past, are maintaining the appearance of stability, since instability is often seen as bad, and avoiding the shame of admitting that one was wrong. To the degree that a person has these goals, a more intelligent person will be more capable of finding ways to avoid changing his mind, even when his current position happens to be false. He will be more capable of finding subtle ways to defend his position, more capable of finding reasons to dismiss opposing arguments, and more capable of finding ways to avoid stumbling upon evidence against his current position.

Even apart from these two particular motives, there are of course any number of other motives which are potentially opposed to the search for truth in concrete cases. And to the degree that such motives are involved, intelligence will not be a help to the truth, but a hindrance.

Extraordinary Claims and Extraordinary Evidence

Marcello Truzzi states in an article, On the Extraordinary: An Attempt At Clarification, “An extraordinary claim requires extraordinary proof.” This was later restated by Carl Sagan as, “Extraordinary claims require extraordinary evidence.” This is frequently used to argue against things such as “gods, ghosts, the paranormal, and UFOs.”

However, this kind of argument, at least as it is usually made, neglects to take into account the fact that claims themselves are already evidence.

Here is one example: while writing this article, I used an online random number generator to pick a random integer between one and a billion inclusive. The number was 422,819,208.

Suppose we evaluate my claim with the standard that extraordinary claims require extraordinary evidence, and neglect to consider the evidence contained within the claim itself. In this case, given that I did in fact pick a number in the manner stated, the probability that the number would be 422,819,208 is one in a billion. So readers should respond, “Either he didn’t pick the number in the manner stated, or the number was not 422,819,208. The probability that both of those were true is one in a billion. I simply don’t believe him.”

There is obviously a problem here, since in fact I did pick the number in the way stated, and that was actually the number. And the problem is precisely leaving out of consideration the evidence contained within the claim itself. Given that I make a claim that I picked a random number between one and a billion, the probability that I would claim 422,819,208 in particular is approximately one in a billion. So when you see me claim that I picked that number, you are seeing evidence (namely the fact that I am making the claim) which is very unlikely in itself. The fact that I made that claim is much more likely, however, if I actually picked that number, rather than some other number. Thus the very fact that I made the claim is strong evidence that I did pick the number 422,819,208 rather than some other number.
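Put in terms of a likelihood ratio (a rough sketch; the exact value of the denominator depends on how one models lying or misreporting), the claim is roughly a billion times more likely if I actually picked that number than if I did not:

    \[
    \frac{P(\text{I claim } 422{,}819{,}208 \mid \text{I picked } 422{,}819{,}208)}{P(\text{I claim } 422{,}819{,}208 \mid \text{I picked some other number})} \;\approx\; 10^{9}.
    \]

This enormous ratio is what makes the otherwise improbable report easy to believe.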

In this sense, extraordinary claims are already extraordinary evidence, and thus do not require some special justification.

However, we can consider another case, a hypothetical one. Suppose that in the above paragraphs, instead of the number 422,819,208, I had used the number 500,000,000, claiming that this was in fact the number that I got from the random number generator.

In that case you might have found the argument much less credible. Why?

Assuming that I did in fact pick the number randomly, the probability of picking 422,819,208 is one in a billion. And again, assuming that I did in fact pick the number randomly, the probability of picking 500,000,000 is one in a billion. So no difference here.

But both of those assume that I did pick the number randomly. And if I did not, the probabilities would not be the same. Instead, the fact that simpler things are more probable would come into play. At least with the language and notation that we are actually using, the number 500,000,000 is much simpler than the number 422,819,208. Consequently, assuming that I picked a number non-randomly and then told you about it, it is significantly more probable than one in a billion that I would pick the number 500,000,000, and thus less probable than one in a billion that I would pick 422,819,208. (This is why I said above that the probability of the claim was only approximately one in a billion: in fact it is even less than that.)

For that reason, if I had actually claimed to have picked 500,000,000, you might well have concluded that the most reasonable explanation of the facts was that I did not actually use the random number generator, or that it had malfunctioned, rather than that the number was actually picked randomly.
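To put a rough number on this (the figure for made-up numbers is invented purely for illustration): if someone inventing a “random” number would name a conspicuously round one like 500,000,000 with probability around one in a thousand, then hearing that particular number favors “he made it up or the generator failed” over “he really picked it at random” by a factor of about

    \[
    \frac{P(\text{claim } 500{,}000{,}000 \mid \text{not really random})}{P(\text{claim } 500{,}000{,}000 \mid \text{really random})} \;\approx\; \frac{10^{-3}}{10^{-9}} \;=\; 10^{6},
    \]

which is large enough to make the “not really random” explanation quite plausible unless one’s prior confidence in the honest report is overwhelming.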

This is relevant to the kinds of things where the postulate that “extraordinary claims require extraordinary evidence” is normally used. Consider the claim, “I was down in the graveyard at midnight last night and saw a ghost there.”

How often have you personally seen a ghost? Probably never, and even if you have, surely not many times. And if so, seeing a ghost is not exactly an everyday occurrence. Considered in itself, therefore, this is an improbable occurrence, and if we evaluated the claim without considering the evidence included within the claim itself, we would simply assume the account is mistaken.

However, part of the reason that we know that seeing ghosts is not a common event is that people do not often make such claims. Apparently 18% of Americans say that they have seen a ghost at one time or another. But this still means that 82% of Americans have never seen one, and even most of the 18% presumably do not mean to say that it has happened often. So this would still leave seeing ghosts as a pretty rare event. Consider how it would be if 99.5% of people said they had seen ghosts, but you personally had never seen one. Instead of thinking that seeing ghosts is rare, you would likely think that you were just unlucky (or lucky, as the case may be.)

Instead of this situation, however, seeing ghosts is rare, and claiming to see ghosts is also rare. This implies that the claim to have seen a ghost is already extraordinary evidence that a person in fact saw a ghost, just as my claiming to have picked 422,819,208 was extraordinary evidence that I actually picked that number.

Nonetheless, there is a difference between the case of the ghost and the case of the number between one and a billion. We already know that there are exactly one billion numbers between one and a billion inclusive. So given that I pick a number within this range, the probability of each number must be on average one in a billion. If it is more probable that I would pick certain numbers, such as 500,000,000, it must be less probable that I would pick others, such as 422,819,208. We don’t have an equivalent situation with the case of the ghost, because we don’t know in advance how often people actually see ghosts. Even if we can find an exact measure of how often people claim to see ghosts, that will not tell us how often people lie or are mistaken about it. Thus although we can say that claiming to see a ghost is good evidence of someone actually having seen a ghost, we don’t know in advance whether or not the evidence is good enough. It is “extraordinary evidence,” but is it extraordinary enough? Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?
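One way to see exactly what is missing (a back-of-the-envelope identity, not an attempt to settle the question): if a fraction c of people claim to have seen a ghost, and a fraction t of people have actually seen one and say so, then

    \[
    P(\text{actually saw a ghost} \mid \text{claims to have seen one}) \;=\; \frac{t}{c},
    \]

so the strength of the claim as evidence depends entirely on the ratio of genuine sightings to claims, and that ratio, unlike the ratio of numbers between one and a billion, is not something we know in advance.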

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

Claims and Evidence

Earlier I have mentioned the fact that when someone holds a position, this very fact is evidence for his position. Here I will consider this in more detail.

The reason to think that the claim is evidence for the position is that it seems more likely that someone would hold a position if the position is true than if it is false. It is evident that this must hold in general, or it would be impossible to learn a language, since people would be equally likely to say “the sky is blue” even if it was not blue, and therefore it would be impossible for children to learn that this sentence says that the sky is blue rather than that the sky is green.
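Stated in probabilistic terms (a standard formulation, not a separate argument): a claim is evidence for a position exactly when it is more likely to be made if the position is true than if it is false, that is, when

    \[
    \frac{P(\text{someone asserts } X \mid X \text{ is true})}{P(\text{someone asserts } X \mid X \text{ is false})} \;>\; 1 .
    \]

The language-learning point is that this ratio must exceed one for ordinary assertions taken on average; otherwise the sentence “the sky is blue” would never come to be correlated with the sky’s being blue, and could not be learned as meaning that.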

However, someone might object that this is not true in general, and that in some cases a claim either has no evidential effect, or is even evidence that the thing claimed is false.

What would be necessary for this to be true? Let’s take a case where the claim might have no evidential effect at all. Suppose someone says that exactly one year from today, you will eat strawberries for dessert. We might think this has no evidential effect: the person has no way of knowing what you will be eating, and therefore he seems equally likely to make the claim whether you will be eating strawberries or not.

But unless we unreasonably think that it is absolutely certain that prophetic knowledge of the future does not exist, there is some probability that the statement is prophetic. This will make him somewhat more likely to make the statement if you will in fact be eating strawberries, unless there is a completely equal chance of his statement being anti-prophetic, that is, being made because you will not be eating strawberries. But this would equally require that he know the future, and consequently this amounts to saying that he is equally likely in general to assert or deny the eating of strawberries, even when he knows the truth. But we already admitted that this is not the case: someone who knows the truth is, in general, more likely to assert the truth than to deny it. Thus it is unreasonable to deny that such a statement is in fact evidence that you will eat strawberries for dessert a year from now.
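A small sketch of this argument, again with invented numbers: even a tiny probability that the speaker knows the future tilts the balance, so long as someone who knows the truth is more likely to assert it than to deny it.

```python
# Invented numbers illustrating the argument above.
eps = 0.001           # probability the speaker actually knows the future
p_guess = 0.02        # chance a non-prophet makes this guess, either way
p_assert_truth = 0.8  # a prophet asserts what he knows to be true
p_deny_truth = 0.1    # a prophet asserts the opposite of what he knows

p_statement_if_strawberries = (1 - eps) * p_guess + eps * p_assert_truth
p_statement_if_not = (1 - eps) * p_guess + eps * p_deny_truth

# The statement is (weak) evidence that you will eat strawberries,
# exactly because p_assert_truth > p_deny_truth.
print(p_statement_if_strawberries > p_statement_if_not)  # True
```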

In order for a claim to be evidence that the thing is false, we would have to have something similar: a case where someone who knows the truth is more likely to deny it than to assert it. This would not clearly be the case even if, for example, we knew that someone was inventing an alibi. It may be that people who invent alibis include more truths than falsehoods in them, taken as a whole. But it could be the case in very concrete circumstances, taking these circumstances into account. For example, if someone writes a novel “based on a true story,” the fact that the protagonist is called “Peter Smith” may be evidence that in real life the person’s name was not Peter Smith.

In this case, of course, there is not even a claim that Peter Smith was the person’s name in the first place. So we have still not actually established the existence of such a claim. And if such a case is found, it will be the circumstances, rather than the general fact of the claim, that are evidence against it. Considered in itself, the fact that someone makes a claim or holds a position is evidence for that claim or position.

The Evidence Still Does Not Change Sides

Our Mormon protagonist, still shaken by his discovery about the Book of Abraham, now discovers another fact:

(A) Although Joseph Smith claimed to translate the Book of Mormon from ancient golden plates, there are many passages that evidently borrow from the King James Bible in particular.

Our protagonist considers this for a while and then thinks, (B) “God is just more tolerant of this sort of thing than I realized…”

On my earlier post, Michael Bolin commented:

While this is technically correct, it is worth noting that something akin to the evidence changing sides does happen, due to the practical difficulty with assigning probabilities. Namely, realizing that some fact is true, which in itself lowers the probability of the original hypothesis, may cause one to assign different values to the probability of a bunch of other facts given the hypothesis, such that the net effect after taking those other facts into account is to make the hypothesis more likely than it would have been if one had taken those other facts into account without the original observation.

I responded at the time:

It’s not clear to me what you are saying in practice, and it seems to me that such a thing cannot happen without a violation of the laws of probability (this does not necessarily mean it cannot be reasonable, if the meaning is that you realize that your prior probability distribution was simply mistaken in the first place).

This is in fact what is happening when our protagonist concludes that God is more tolerant than he supposed. Fact (A) is evidence against the truth of Mormonism. But when this fact is considered together with the original fact about the Book of Abraham, our protagonist concludes that “God is tolerant about false claims about the origin of his revealed texts” is more probable given the truth of Mormonism than he originally supposed. This is a change in his prior probability distribution, and it weakens the evidence against Mormonism found in the two claims about the Book of Abraham and about the Book of Mormon.

If someone in fact adjusted his probability of the truth of Mormonism based on the fact about the Book of Abraham, and then discovered (A) and adjusted his probability of (B) by changing his prior, the probability of Mormonism being true might indeed become somewhat higher than it was simply after the discovery about the Book of Abraham.

However, several things should be noted concerning this:

(1) In practice, people have a very hard time admitting that there is any evidence at all against their position. Now if someone believes that the Book of Abraham was in fact translated from an ancient Egyptian manuscript, he would probably realize (if he thought about it) that if this turned out to be false, it would be evidence against Mormonism. Consequently, if he thinks that Mormonism is true, he likely sets a prior on which it is nearly impossible for Smith’s claim about the Book of Abraham to be false. In other words, he more or less thinks that if the Book of Abraham was not translated from the Egyptian manuscript, Mormonism would be false. But when he realizes that the Book of Abraham was not translated from the manuscript, he does not conclude that Mormonism is false, but rather tries to make the evidence change sides. So in effect he already adjusts his prior, and in fact in an inappropriate way: even if his original prior was excessively set against the possibility of the fact about the Book of Abraham, that fact is objectively evidence against Mormonism, not in favor of it. And when he discovers fact (A), this too is evidence against Mormonism, and he should adjust his probability in this direction, not in the opposite direction.

(2) If someone actually adjusted his probabilities in the appropriate way after the discovery of the first fact about the Book of Abraham, the discovery of (A) and the conclusion (B) could somewhat increase the probability of the truth of Mormonism over what he supposed it was after the original discovery about the Book of Abraham. However, according to the new prior, both the fact about the Book of Abraham and the fact (A) would still constitute evidence against the truth of Mormonism, and thus the probability of Mormonism would remain less than it would be with the new prior but without the facts concerning the origin of the texts.

(3) Similarly, there is little reason to suppose that the final probability of Mormonism would be greater than the original probability before the change in the prior. Rather, you would have something like this:

  1. Original probability of Mormonism: 95%.
  2. Probability after adjusting for the discovery about the Book of Abraham: 25%.
  3. Probability after adjusting for fact (A): 10%.
  4. Probability after adjusting the prior with conclusion (B): 75%.

Of course these are randomly invented numbers, and in practice a real person does not adjust this much, and his original probability is likely even higher than 95%. Nonetheless it is an illustration of what is likely to happen in terms of the evidence. Even after adjusting the prior, the facts about the origins of the texts simply remain evidence against Mormonism, not evidence for it, and consequently the final probability remains less than the original probability.
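The same conclusion can be checked directly with Bayes’ rule. As with the list above, the numbers are invented; the only point of the sketch is that loosening the prior raises the final probability without ever making the facts count in favor of the hypothesis.

```python
# Invented numbers, as in the list above. H = "Mormonism is true";
# E = the two facts about the origins of the texts.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E)."""
    num = p_h * p_e_given_h
    return num / (num + (1 - p_h) * p_e_given_not_h)

p_h = 0.95  # original probability of Mormonism

# Old prior: "God would never tolerate this," so E is nearly impossible
# if H is true.
old = posterior(p_h, p_e_given_h=0.001, p_e_given_not_h=0.5)

# New prior after conclusion (B): E is far less surprising given H,
# but still less likely given H than given not-H.
new = posterior(p_h, p_e_given_h=0.1, p_e_given_not_h=0.5)

print(round(old, 2), round(new, 2))  # about 0.04 and 0.79
assert old < new < p_h  # the prior change helps, but E still counts against H
```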

The evidence still does not change sides.

Things it is Better to Believe

Since belief is voluntary, it follows that truth is only one of the possible motives for belief, and people can believe things for the sake of other ends as well. Consequently there may be some things that it is always better to believe, even without making a special effort to determine whether they are true or not.

Consider the claim that “Life is meaningful,” understood to mean that there are purposes in life, and therefore good and evil.

If this is true, it is good to believe it, because it is true, and because we need to be aware of our ends in order to seek them.

If it is not true, then it does no harm to believe it, because good and evil do not exist in that situation. And likewise, it cannot be better not to believe it in this situation, since better and worse do not exist.

Thus the strategy of believing that life is meaningful weakly dominates the strategy of believing that life is not meaningful, although it does not “strictly” dominate, since the payoffs of the two strategies are equal if life is not in fact meaningful. Consequently the better thing is to choose to believe that life is meaningful, even without a particular investigation of the facts.
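The dominance argument can be laid out as a tiny decision table. The payoffs below are merely ordinal placeholders standing for “better,” “worse,” and “neither applies.”

```python
# Placeholder payoffs for the dominance argument above: if life is
# meaningful, believing so is better; if it is not, "better" and "worse"
# do not apply, so the two strategies tie.

payoff = {
    ("believe meaningful", "meaningful"): 1,
    ("believe meaningful", "not meaningful"): 0,
    ("believe not meaningful", "meaningful"): -1,
    ("believe not meaningful", "not meaningful"): 0,
}
states = ("meaningful", "not meaningful")

# Weak dominance: at least as good in every state, strictly better in one.
weakly_dominates = (
    all(payoff[("believe meaningful", s)] >= payoff[("believe not meaningful", s)]
        for s in states)
    and any(payoff[("believe meaningful", s)] > payoff[("believe not meaningful", s)]
            for s in states)
)
print(weakly_dominates)  # True
```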

As another particular example of this kind of reasoning, we can consider this discussion from chapter 9 of Eric Reitan’s book Is God a Delusion? A Reply to Religion’s Cultured Despisers:

In his Letter to a Christian Nation (2006), Sam Harris notes that, statistically speaking, somewhere in the world right now a little girl is being abducted, raped, and killed. And the same statistics suggest that her parents believe that “an all-powerful and all-loving God is watching over them and their family.” Harris asks, “Are they right to believe this? Is it good that they believe this?” His answer? “No.” According to Harris, this answer contains “the entirety of atheism” and is simply “an admission of the obvious.” He encourages his readers to “admit the obvious”: that when devout Hurricane Katrina victims drowned in their attics while praying to God for deliverance, they “died talking to an imaginary friend” (pp. 50-2). With righteous indignation, Harris condemns the “boundless narcissism” of those who survive a disaster only to “believe themselves spared by a loving God, while this same God drowned infants in their cribs” (p. 54).

After some discussion of this, Eric Reitan’s strategy is not to try to make things look any better, but to make them look even worse:

A mother, running late for a morning meeting, rushes out the door with both her children. The older son is to be dropped off at preschool, the baby girl at a nearby daycare. When the preschool lets out, the daycare’s minivan will bring the son to the daycare, where he will wait with his baby sister until their mother gets off work.

The mother gets to work, leaving the car in a sunny lot. It’s a hot day. She makes it to her meeting and has a productive day. At five o’clock she gets in her car and drives to the daycare. Her son runs to her. She picks him up and kisses his head, then looks around for her baby girl. Not seeing her, she asks one of the daycare workers. “I’m sorry, ma’am. You didn’t drop her off this morning.” The reply, tentative and apologetic, doesn’t have the tone of something that should tear a life apart. But it does. The mother’s hands go numb. Her son falls from her grasp. It feels as if all the darkness in the world is pressing outward from inside her. No. Impossible. But she has no memory of unstrapping that precious little girl, of carrying her into the daycare. No memory, in the rush of the morning, the urgency to get to her meeting on time. Driving to the daycare after work, looking forward to seeing her children, she never looked at what was in the back seat. And now her knees give out and the sobs escape even before she makes it to the car, even before she sees what’s there. Someone is soothing the son, who stands at the daycare door. The mother is beating at the car windows with her fists. In her imagination the baby girl is screaming for mommy, for comfort, as the car grows hotter and hotter, while all the while the mother is in her stupid meeting, talking about stupid contracts, feeling relieved that she’d made it to work on time. And the son, distressed beyond understanding by his mother’s behavior, breaks free of the daycare worker and runs towards her – into the path of an oncoming car.

This story is loosely based on real events. And there are life stories bleaker than this. Horror is real. According to the 2007 Global Monitoring Report put out by the World Bank, there are at present more than one billion people on earth living in “extreme” poverty (that is, on less than $1 per day). Such poverty is not only dire in itself but renders the poor terribly vulnerable to exploitation, disease, and natural disasters. I could fill a book with harrowing stories of human lives crushed by a combination of poverty, brutal abuse, and the grim indifference of nature. But that isn’t needed, I think, in order to convince most readers that there are horrors in the world so devastating that those who undergo them feel as if their entire lives are stripped of positive value, as if they’d be better off dead – while those who are implicated in them, once they come to appreciate the full measure of their complicity, are torn apart by self-loathing. If there is a God, His reasons for permitting such evils are hidden from us. And, as Marilyn Adams has pointed out, even if traditional theodicies give some general sense of why God might create a world in which evils exist, these theodicies bring no comfort to the mother as she turns away from her infant’s corpse just in time to see her son crushed under the wheels of a screeching car. It won’t give meaning to her life. It won’t eliminate the horror. Her existence has, in a few heartbeats, become worse than a void. It’s become a space of affliction compared to which the void would be preferable. This woman needs salvation.

In order to survive such a situation, Reitan says, it is necessary to believe in the redemption of evil, namely that in some way a greater good is brought out of such horrors:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

He concludes the chapter with this response to Sam Harris:

Somewhere, even as I write this, a girl is being raped and murdered. Her parents believe in a transcendent God of love who will redeem even the most shocking horrors.

Are they right to believe this? Is it good that they believe this?

In the darkness of affliction, Harris’s answer rings hollow.

This example differs in some ways from my original example of the claim that life is meaningful. The original claim was understood to be the assertion that good and evil exist in life, and thus the denial would be that there is no good and evil in life. But in Reitan’s discussion, the claim is stronger, namely something like “God exists and will always bring good out of evil.” Harris’s denial of this claim does not in itself imply the non-existence of good and evil, at least not without additional consideration. In fact, his position is that very great evils exist and that we should not try to explain them away with this claim about God, so he is not, on the face of it, denying that good and evil exist. Consequently one cannot immediately deduce that believing the claim about God is a dominant strategy.

Nonetheless, if Reitan’s and Harris’s views are compared here with respect to the good of the people concerned (a fair comparison, since Harris was the one who claimed that it was bad for people to believe such things), it does seem that Reitan has a much stronger case. The woman does seem much better off believing the claim about God and the redemption of evil.

But this is not the whole story.