# Prayer and Probability

The reader might wonder about the relation between the previous post and my discussion of Arman Razaali. If I could say it is more likely that he was lying than that the thing happened as stated, why shouldn’t they believe the same about my personal account?

In the first place there is a question of context. I deliberately took Razaali’s account randomly from the internet without knowing anything about him. Similarly, if someone randomly passes through and reads the previous post without having read anything else on this blog, it would not be unreasonable for them to think I might have just made it up. But if someone has read more here, they probably have a better estimate of my character. (If you have read more and still think I made it up, well, you are a very poor judge of character and there is not much I can do about that.)

Second, I did not say he was lying. I said it was more likely than the extreme alternative hypothesis that the thing happened exactly as stated and that it happened purely by chance. And given later events (namely his comment here), I do not think he was lying at all.

Third, the probabilities are very different.

## “Calculating” the probability

What is the probability of the events I described happening purely by chance? The first thing to determine is what we are counting when we say that something has a chance of 1/X, whatever X is. Out of X cases, the thing should happen about once. In the Razaali case, the event would be “shuffling a deck of cards for 30 minutes and ending up with the deck in the original order,” and X would be about 10^67: if you shuffle and check your deck of cards about 10^67 times, the thing should happen about once.
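The order of magnitude can be checked directly: the number of distinct orderings of a 52-card deck is 52!, a 68-digit number (roughly 8 × 10^67), which is the source of the “about 10^67” figure. A minimal sketch:

```python
from math import factorial, log10

# Number of distinct orderings of a standard 52-card deck.
orderings = factorial(52)

# The chance of one particular shuffle reproducing one particular order
# is 1 / 52!.
print(f"52! has {len(str(orderings))} digits; log10(52!) = {log10(orderings):.2f}")
```

This prints a base-10 logarithm just under 68, consistent with the rough figure used in the post.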

It is not so easy to say what you are counting if you are trying to determine the probability of a coincidence. And one factor that makes this feel weirder and less probable is that since a coincidence involves several different things happening, you tend to think about it as though there were an extra difficulty in each and every one of the things needing to happen. But in reality you should take one of them as a fixed fact and simply ask about the probability of the other given the fixed thing. To illustrate this, consider the “birthday problem”: in a group of 23 people, the chance that two of them will have the same birthday is over 50%. This “feels” too high; most people would guess that the chance would be lower. But even without doing the math, one can begin to see why this is so by thinking through a few steps of the problem. 22 days is about 6% of the days in a year; so if we take one person, who has a birthday on some day or other, there will be about a 6% chance that one of the other 22 people has the same birthday. If none of them do, take the second person; the chance that one of the remaining 21 people will have the same birthday as them will still be pretty close to 6%, which gets us up to almost 12% (it doesn’t quite add up in exactly this way, but it’s close). And we still have a lot more combinations to check. So you can already start to see how easy it will turn out to be to get up to 50%. In any case, the basic point is that the “coincidence” is not important; each person has a birthday, and we can treat that day as fixed while we compare it to all the others.
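For readers who want to verify the 50% claim, the exact computation takes only a few lines (the function name here is mine):

```python
from math import prod

def shared_birthday_prob(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday,
    assuming uniform, independent birthdays and ignoring leap days."""
    p_all_distinct = prod((days - i) / days for i in range(n))
    return 1 - p_all_distinct

print(f"{shared_birthday_prob(23):.3f}")  # 0.507: just over 50% at 23 people
print(f"{shared_birthday_prob(22):.3f}")  # 0.476: still under 50% at 22
```

The calculation works by the same trick described above: fix each person's birthday in turn and multiply the chances that every later person misses it.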

In the same way, if you are asking about the probability that someone prays for a thing, and then that thing happens (by chance), you don’t need to consider the prayer as some extra factor — it is enough to ask how often the thing in question happens, and that will tell you your chance. If someone is looking for a job and prays a novena for this intention, and receives a job offer immediately afterwards, the chance will be something like “how often a person looking for a job receives a job offer.” For example, if it takes five months on average to get a job when you are looking, the probability of receiving an offer on a random day should be about 1/150; so out of 150 people praying novenas for a job while engaged in a job search, about 1 of them should get an offer immediately afterwards.

What would have counted as “the thing happening” in the personal situation described in the last post? There are a number of subjective factors here, and the answer depends on how one looks at it, especially on the detail with which the situation is described. For example, as I said in the last post, it is normal to think of the “answer” to a novena as coming on the last day or the day after — so if a person praying for a job receives an offer on either of those days, they will likely consider it just as much of an answer. This means the estimate of 1/150 is really too low; it should really be 1/75. And given that many people would stretch out the period (in which they would count the result as an answer) to as much as a week, we could make the odds as high as 1/21. Looking loosely at other details could similarly improve the odds; e.g. if receiving an interview invitation that later leads to a job is included, the odds would be even higher.
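The effect of widening the window that counts as an “answer” can be made explicit. This sketch uses the post's illustrative 150-day figure; the exact numbers are assumptions, not data:

```python
avg_days_to_offer = 150  # illustrative: ~5 months of job searching

# On average, 1 novena in (150 / window) is "answered" by chance,
# depending on how many days are counted as the answer window.
for window_days, label in [(1, "last day only"),
                           (2, "last day or day after"),
                           (7, "within a week")]:
    one_in = avg_days_to_offer / window_days
    print(f"{label}: about 1 in {one_in:.0f}")
```

Widening the window from one day to a week moves the odds from 1/150 to roughly 1/21, which is why loose criteria make “answered prayers” common.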

But since we are considering whether the odds might be as bad as 1/10^67, let’s assume we include a fair amount of detail. What are the odds that on a specific day a stranger tells someone that “Our Lady wants you to become a religious and she is afraid that you are going astray,” or words to that effect?

The odds here should be just as objective as the odds with the cards — there should be a real number here — for reasons explained elsewhere, but unfortunately unlike the cards, we have nowhere near enough experience to get a precise number. Nonetheless it is easy to see that various details about the situation made it actually more likely than it would be for a perfectly random person. Since I had a certain opinion of my friend’s situation, that makes it far more likely than chance that other people aware of the situation would have a similar opinion. And although we are talking about a “stranger” here, that stranger was known to a third party that knew my friend, and we have no way of knowing what, if anything, might have passed through that channel.

If we arbitrarily assume that one in a million people in similar situations (i.e. where other people have similar opinions about them) hear such a thing at some point in their lives, and assume that we need to hit one particular day out of 50 years here, then we can “calculate” the chance: 1 / (365 * 50 * 1,000,000), or about 1 in 18 billion. To put it in counting terms, 1 in 18 billion novenas like this will result in the thing happening by chance.
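Under the stated (admittedly arbitrary) assumptions, the arithmetic checks out:

```python
p_ever_hears = 1 / 1_000_000   # assumed fraction of similarly-situated people
                               # who ever hear such a thing in their lives
days_considered = 365 * 50     # one particular day out of 50 years

# Chance of hearing it on one specific day, by pure chance.
p_specific_day = p_ever_hears / days_considered
print(f"about 1 in {1 / p_specific_day:,.0f}")  # about 1 in 18,250,000,000
```

The 1-in-a-million input is the load-bearing assumption; everything else is bookkeeping.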

Now it may be that one in a million persons is too high (although if anything it may also be too low; the true value may be more like 1/100,000, making the overall probability about 1 in 1.8 billion). But it is easy to see that there is no reasonable way that you can say this is as unlikely as shuffling a deck of cards and getting it in the original order.

## The Alternative Hypothesis

A thing that happens once in 18 billion person-days is not so rare that you would expect such things never to occur (although you would expect that they would most likely not happen to you). Nonetheless, you might want to consider whether there is some better explanation than chance.

But a problem arises immediately: it is not clear that the alternative makes the events much more likely. After all, I was very surprised by these events when they happened, even though at the time I accepted an explicitly religious explanation. Indeed, Fr. Joseph Bolin argues that you should not expect prayer to increase the chances of any event. But if this is the case, then the odds of the thing happening will be the same given the religious explanation as given the chance explanation, which means the event would not even be evidence for the religious explanation.

In actual fact, it is evidence for the religious explanation, but only because Fr. Joseph’s account is not necessarily true. It could be true that when one prays for something sufficiently rare, the chance of it happening increases by a factor of 1,000; the cases would still be so rare that people would not be likely to discover this fact.

Nonetheless, the evidence is much weaker than a probability of 1 in 18 billion would suggest, because even on the alternative hypothesis the events remain very unlikely. This is an application of the discussion here, where I argued that “anomalous” evidence should not change your opinion much about anything. This is actually something the debunkers get right, even if they are mistaken about other things.

# Might People on the Internet Sometimes Tell the Truth?

## Lies and Scott Alexander

Scott Alexander wrote a very good post called “Might People on the Internet Sometimes Lie,” which I have linked to several times in the past. In the first linked post (Lies, Religion, and Miscalibrated Priors), I answered Scott’s question (why it is hard to believe that people are lying even when they probably are), but also pointed out that “either they are lying or the thing actually happened in such and such a specific way” is a false dichotomy in any case.

In the example in my post, I spoke about Arman Razaali and his claim that he shuffled a deck of cards for 30 minutes and ended up with the deck in its original order. As I stated in the post,

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence.

But as I also stated there, those are not the only options. As it turns out, although my readers may have missed this, Razaali himself stumbled upon my post somewhat later and posted something in the comments there:

At first, I must say that I was a bit flustered when I saw this post come up when I was checking what would happen when I googled myself. But it’s an excellent read, exceptionally done with excellent analysis. Although I feel the natural urge to be offended by this, I’m really not. Your message is very clear, and it articulates the inner workings of the human mind very well, and in fact, I found that I would completely agree. Having lost access to that Quora account a month or two ago, I can’t look back at what I wrote. I can easily see how the answer gave on Quora could very easily be seen as a lie, and if I read it with no context, I would probably think it was fake too. But having been there at the moment as I counted the cards, I am biased towards believing what I saw, even though I could have miscounted horrendously.

Does this sound like something written by one of Scott Alexander’s “annoying trolls”?

Not to me, anyway. I am aware that I am also disinclined for moral reasons to believe that Razaali was lying, for the reasons I stated in that post. Nonetheless, it seems fair to say that this comment fits better with some intermediate hypothesis (e.g. “it was mostly in order and he was mistaken”) rather than with the idea that “he was lying.”

## Religion vs. UFOs

I participated in this exchange on Twitter:

Ross Douthat:

Of what use are our professionally-eccentric, no-heresy-too-wild reasoners like @robinhanson if they assume a priori that “spirits or creatures from other dimensions” are an inherently crazy idea?: https://overcomingbias.com/2021/05/ufos-say-govt-competence-is-either-surprisingly-high-or-surprisingly-low.html

Robin Hanson:

But we don’t want to present ourselves as finding any strange story as equally likely. Yes, we are willing to consider most anything, at least from a good source, & we disagree with others on which stories seem more plausible. But we present ourselves as having standards! 🙂

Me:

I think @DouthatNYT intended to hint that many religious experiences offer arguments for religions that are at least as strong as arguments from UFOs for aliens, and probably stronger.

I agree with him and find both unconvincing.

But find it very impressive you were willing to express those opinions.

Robin Hanson:

You can find videos on best recent evidence for ghosts, which to me looks much less persuasive than versions for UFOs. But evidence for non-ghost spirits, they don’t even bother to make videos for that, as there’s almost nothing.

Me:

It is just not true that there is “almost nothing.” E.g. see the discussion in my post here:

Miracles and Multiple Witnesses

Robin does not respond. Possibly he just does not want to spend more time on the matter. But I think there is also something else going on; engaging with this would suggest to people that he does not “have standards.” It is bad enough for his reputation if he talks about UFOs; it would be much worse if he engaged in a discussion about rosaries turning to gold, which sounds silly to most Catholics, let alone to non-Catholic Christians, people of other religions, and non-religious people.

But I meant what I said in that post, when I said, “these reports should be taken seriously.” Contrary to the debunkers, there is nothing silly about something being reported by thousands of people. It is possible that every one of those reports is a lie or a mistake. Likely, even. But I will not assume that this is the case when no one has even bothered to check.

Scott Alexander is probably one of the best bloggers writing today, and one of the most honest, and accordingly his approach to religious experiences is somewhat better than the typical debunker’s. For example, although I was unfortunately unable to find the text just now, possibly because it was in a comment (and some of those threads have thousands of comments) and not in a post, he once spoke about the Miracle of the Sun at Fatima, and jokingly called it something like “a glitch in the matrix.” The implication was (1) that he does not believe the religious explanation, but nonetheless (2) that the typical “debunkings” are just not very plausible. I agree with this. There are some hints that there might be a natural explanation, but the suggestions are fairly stretched compared to the facts.

## December 24th, 2010 – Jan 4th, 2011

What follows is a description of events that happened to me personally in the period named. They are facts. They are not lies. There is no distortion, not due to human memory failures or anything else. The account here is based on detailed records that I made at the time, which I still possess, and which I just reviewed today to ensure that there would be no mistake.

At that time I was a practicing Catholic. On December 24th, 2010, I started a novena to Mary. I was concerned about a friend’s vocation; I believed that they were called to religious life; they had thought the same for a long time but were beginning to change their mind. The intention of the novena was to respond to this situation.

I did not mention this novena to anyone at the time, or to anyone at all before the events described here.

The last day of the novena was January 1st, 2011, a Marian feast day. (It is a typical practice to end a novena on a feast day of the saint to whom the novena is directed.)

On January 4th, 2011, I had a conversation with the same friend. I made no mention at any point during this conversation of the above novena, and there is no way that they could have known about it, or at any rate no way that our debunking friends would consider “ordinary.”

They told me about events that happened to them on January 2nd, 2011.

Note that these events were second-hand for me (narrated by my friend) and third-hand for any readers this blog might have. This does not matter, however; since my friend had no idea about the novena, even if they were completely making it up (which I do not believe at all), it would be nearly as surprising.

When praying a novena, it is typical to expect the “answer to the prayer” on the last day or on the day after, as in an example online:

The Benedictine nuns of St Cecilia’s Abbey on the Isle of Wight (http://www.stceciliasabbey.org.uk) recently started a novena to Fr Doyle with the specific intention of finding some Irish vocations. Anybody with even a passing awareness of the Catholic Church in Ireland is aware that there is a deep vocations crisis. Well, **the day after the novena ended**, a young Irish lady in her 20’s arrived for a visit at the convent. Today, the Feast of the Immaculate Conception, she will start her time as a postulant at St Cecilia’s Abbey.

Some might dismiss this as coincidence. Those with faith will see it in a different light. Readers can make up their own minds.

January 2nd, 2011, was the day after my novena ended, and the day to which my friend (unaware of the novena) attributed the following event:

They happened to meet with another person, one who was basically a stranger to them, but met through a mutual acquaintance (mutual to my friend and the stranger; unknown to me). This person (the stranger) asked my friend to pray with her. She then told my friend that “Our Lady knows that you suffer a lot… She wants you to become a religious and she is afraid that you are going astray…”

Apart from a grammatical change for context, the above sentences are a direct quotation from my friend’s account. Note the relationship with the text I placed in bold earlier.

## To be Continued

I may have more to say about these events, but for now I want to say two things:

(1) These events actually happened. The attitude of the debunkers is that if anything “extraordinary” ever happens, it is at best a psychological experience, not a question of the facts. This is just false, and this is what I referred to when I mentioned their second error in the previous post.

(2) I do not accept a religious explanation of these events (at any rate not in any sense that would imply that a body of religious doctrine is true as a whole).

# A Correction Regarding Laplace

A few years ago, I quoted Stanley Jaki on an episode supposedly involving Laplace:

Laplace shouted, “We have had enough such myths,” when his fellow academician Marc-Auguste Pictet urged, in the full hearing of the Académie des Sciences, that attention be given to the report about a huge meteor shower that fell at L’Aigle, near Paris, on April 26, 1803.

I referred to this recently on Twitter. When another user found it surprising that Laplace would have said this, I attempted to track it down, and came to the conclusion that this very account is a “myth” itself, in some sense. Jaki tells the same story in different words in the book Miracles and Physics:

The defense of miracles done with an eye on physics should include a passing reference to meteorites. Characteristic of the stubborn resistance of scientific academies to those strange bits of matter was Laplace’s shouting, “We’ve had enough of such myths,” when Pictet, a fellow academician, urged a reconsideration of the evidence provided by “lay-people” as plain eyewitnesses.

(p. 94)

Jaki provides no reference in God and the Sun at Fatima. The text in Miracles and Physics has a footnote, but it provides generic related information that does not lead back to any such episode.

Did Jaki make it up? People do just make things up, but in this case whatever benefit Jaki might get from it would seem to be outweighed by the potential reputational damage of being discovered in such a lie, so it seems unlikely. More likely he is telling a story from memory, with the belief that the details just don’t matter very much. And since he provides plenty of other sources, I am sure he knows full well that he is omitting a source here, presumably because he does not have one at hand. He may even be trying to cover up this omission, in a sense, by footnoting the passage with information that does not source it. It seems likely that the story is a lecture-hall account that has been modified by the passage of time. One reason to suppose such a source is that Jaki is not alone in the claim that Laplace opposed the idea of meteorites as stones from the sky until 1803. E.T. Jaynes, in Probability Theory: The Logic of Science, makes a similar claim:

Note that we can recognize the clear truth of this psychological phenomenon without taking any stand about the truth of the miracle; it is possible that the educated people are wrong. For example, in Laplace’s youth educated persons did not believe in meteorites, but dismissed them as ignorant folklore because they are so rarely observed. For one familiar with the laws of mechanics the notion that “stones fall from the sky” seemed preposterous, while those without any conception of mechanical law saw no difficulty in the idea. But the fall at Laigle in 1803, which left fragments studied by Biot and other French scientists, changed the opinions of the educated — including Laplace himself. In this case the uneducated, avid for the marvelous, happened to be right: c’est la vie.

(p. 505)

Like Jaki, Jaynes provides no source. Still, is that good enough reason to doubt the account? Let us examine a text from the book The History of Meteoritics and Key Meteorite Collections. In the article, “Meteorites in history,” Ursula Marvin remarks:

Early in 1802 the French mathematician Pierre-Simon de Laplace (1749-1827) raised the question at the National Institute of a lunar volcanic origin of fallen stones, and quickly gained support for this idea from two physicist colleagues Jean Baptiste Biot (1774-1862) and Siméon-Denis Poisson (1781-1840). The following September, Laplace (1802, p. 277) discussed it in a letter to von Zach.

The idea won additional followers when Biot (1803a) referred to it as ‘Laplace’s hypothesis’, although Laplace, himself, never published an article on it.

(p.49)

This has a source for Laplace’s letter of 1802, although I was not able to find it online. It seems very unlikely that Laplace would have speculated on meteorites as coming from lunar volcanos in 1802, and then called them “myths” in 1803. So where does this story come from? In Cosmic Debris: Meteorites in History, John Burke gives this account:

There is also a problem with respect to the number of French scientists who, after Pictet published a résumé of Howard’s article in the May 1802 issue of the Bibliothèque Britannique, continued to oppose the idea that stones fell from the atmosphere. One can infer from a statement of Lamétherie that there was considerable opposition, for he reported that when Pictet read a memoir to the Institut on the results of Howard’s report “he met with such disfavor that it required a great deal of fortitude for him to finish his reading.” However, Biot’s description of the session varies a good deal. Pictet’s account, he wrote, was received with a “cautious eagerness,” though the “desire to explain everything” caused the phenomenon to be rejected for a long time. There were, in fact, only three scientists who publicly expressed their opposition: the brothers Jean-André and Guillaume-Antoine Deluc of Geneva, and Eugène Patrin, an associate member of the mineralogy section of the Institut and librarian at the École des mines.

When Pictet early in 1801 published a favorable review of Chladni’s treatise, it drew immediate fire from the Deluc brothers. Jean, a strict Calvinist, employed the same explanation of a fall that the Fougeroux committee had used thirty years before: stones did not fall; the event was imagined when lightning struck close to the observer. Just as no fragment of our globe could separate and become lost in space, he wrote, fragments could not be detached from another planet. It was also very unlikely that solid masses had been wandering in space since the creation, because they would have long since fallen into the sphere of attraction of some planet. And even if they did fall, they would penetrate the earth to a great depth and shatter into a thousand pieces.

(p.51)

It seems quite possible that Pictet’s “reading a memoir” here and “meeting with disfavor” (regardless of details, since Burke notes it had different descriptions at the time) is the same incident that Jaki describes as having been met with “We’ve had enough of such myths!” when Pictet “urged a reconsideration of the evidence.” If these words were ever said, then, they were presumably said by one of these brothers or someone else, and not by Laplace.

How does this sort of thing happen, if we charitably assume that Jaki was not being fundamentally dishonest? As stated above, it seems likely that he knew he did not have a source. He may even have been consciously aware that it might not have been Laplace who made this statement, if anyone did. But he was sure there was a dispute about the matter, and presumably thought that it just wasn’t too important who said it or what the details of the situation were, since the main point was that scientists are frequently reluctant to accept facts when those facts occur rarely and are not deliberately reproducible. And if we reduce Jaki’s position to these two things, namely, (1) that scientists at one point disputed the reality of meteorites, and (2) that this sort of thing frequently happens with rare and hard-to-reproduce phenomena, then the position is accurate.

But this behavior, describing situations as though the details just don’t matter much, is very bad, and it directly contributes to the reluctance of many scientists to accept the reality of “extraordinary” phenomena, even in situations where they are, in fact, real.

# Common Sense

I have tended to emphasize common sense as a basic source in attempting to philosophize or otherwise understand reality. Let me explain what I mean by the idea of common sense.

The basic idea is that something is common sense when everyone agrees that it is true. If we start with this vague account, something will be more definitively common sense to the degree that it is truer that it is everyone who agrees, and likewise to the degree that it is truer that they agree.

If we consider anything that one might think of as a philosophical view, we will find at least a few people who disagree, at least verbally, with the claim. But we may be able to find some that virtually everyone agrees with. These pertain more to common sense than things that fewer people agree with. Likewise, if we consider everyday claims rather than philosophical ones, we will probably be able to find things that everyone agrees with apart from some very localized contexts. These pertain even more to common sense. Likewise, if everyone has always agreed with something both in the past and present, that pertains more to common sense than something that everyone agrees with in the present, but where some have disagreed in the past.

It will be truer that everyone agrees in various ways: if everyone is very certain of something, that pertains more to common sense than something people are less certain about. If some people express disagreement with a view, but everyone’s revealed preferences or beliefs indicate agreement, that can be said to pertain to common sense to some degree, but not so much as where verbal affirmations and revealed preferences and beliefs are aligned.

Naturally, all of this is a question of vague boundaries: opinions are more or less a matter of common sense. We cannot sort them into two clear categories of “common sense” and “not common sense.” Nonetheless, we would want to base our arguments, as much as possible, on things that are more squarely matters of common sense.

One might object that the proposal is impossible. For no one can really reason except from their own opinions. Otherwise, one might be formulating a chain of argument, but it is not one’s own argument or one’s own conclusion. But this objection is easily answered. In the first place, if everyone agrees on something, you probably agree yourself, and so reasoning from common sense will still be reasoning from your own opinions. Second, if you don’t personally agree, since belief is voluntary, you are capable of agreeing if you choose, and you probably should, for reasons which will be explained below.

Nonetheless, the objection is a reasonable place to point out one additional qualification. “Everyone agrees with this” is itself a personal point of view that someone holds, and no one is infallible even with respect to this. So you might think that everyone agrees, while in fact they do not. But this simply means that you have no choice but to do the best you can in determining what is or what is not common sense. Of course you can be mistaken about this, as you can about anything.

Why argue from common sense? I will make two points, a practical one and a theoretical one. The practical point is that if your arguments are public, as for example this blog, rather than written down in a private journal, then you presumably want people to read them and to gain from them in some way. The more you begin from common sense, the more profitable your thoughts will be in this respect. More people will be able to gain from your thoughts and arguments if more people agree with the starting points.

There is also a theoretical point. Consider the statement, “The truth of a statement never makes a person more likely to utter it.” If this statement were true, no one could ever utter it on account of its truth, but only for other reasons. So it is not something that a seeker of truth would ever say. On the other hand, there can be no doubt that the falsehood of some statements, on some occasions, makes those statements more likely to be affirmed by some people. Nonetheless, the nature of language demands that people have an overall tendency, most of the time and in most situations, to speak the truth. We would not be able to learn the meaning of a word without it being applied accurately, most of the time, to the thing that it means. In fact, if everyone was always uttering falsehoods, we would simply learn that “is” means “is not,” and that “is not,” means “is,” and the supposed falsehoods would not be false in the language that we would acquire.

It follows that greater agreement that something is true, other things being equal, implies that the thing is more likely to be actually true. Stones have a tendency to fall down: so if we find a great collection of stones, the collection is more likely to be down at the bottom of a cliff rather than perched precisely on the tip of a mountain. Likewise, people have a tendency to utter the truth, so a great collection of agreement suggests something true rather than something false.

Of course, this argument depends on “other things being equal,” which is not always the case. It is possible that most people agree on something, but you are reasonably convinced that they are mistaken, for other reasons. But if this is the case, your arguments should depend on things that they would agree with even more strongly than they agree with the opposite of your conclusion. In other words, your argument should be based on things which pertain even more to common sense. Suppose it is not: then ultimately the very starting point of your argument is something that everyone else agrees is false. This will probably be an evident insanity from the beginning, but let us suppose that you find it reasonable. In this case, Robin Hanson’s result discussed here implies that you must be convinced that you were created in very special circumstances which would guarantee that you would be right, even though no one else was created in these circumstances. There is of course no basis for such a conviction. And our ability to modify our priors, discussed there, implies that the reasonable behavior is to choose to agree with the priors of common sense, if we find our natural priors departing from them, except in cases where the disagreement is caused by agreement with even stronger priors of common sense. Thus for example in this post I gave reasons for disagreeing with our natural prior on the question, “Is this person lying or otherwise deceived?” in some cases. But this was based on mathematical arguments that are even more convincing than that natural prior.

# Lies, Religion, and Miscalibrated Priors

In a post from some time ago, Scott Alexander asks why it is so hard to believe that people are lying, even in situations where it should be obvious that they made up the whole story:

The weird thing is, I know all of this. I know that if a community is big enough to include even a few liars, then absent a strong mechanism to stop them those lies should rise to the top. I know that pretty much all of our modern communities are super-Dunbar sized and ought to follow that principle.

And yet my System 1 still refuses to believe that the people in those Reddit threads are liars. It’s actually kind of horrified at the thought, imagining them as their shoulders slump and they glumly say “Well, I guess I didn’t really expect anyone to believe me”. I want to say “No! I believe you! I know you had a weird experience and it must be hard for you, but these things happen, I’m sure you’re a good person!”

If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.

The strongest reason for this effect is almost certainly a moral reason. In an earlier post, I discussed St. Thomas's explanation for why one should give a charitable interpretation to someone's behavior, and in a follow-up, I explained the problem of applying that reasoning to the situation of judging whether a person is lying or not. St. Thomas assumes that the bad consequences of being mistaken about someone's moral character will be minor, and most of the time this is true. But if we are asking the question, "are they telling the truth or are they lying?", the consequences can sometimes be very serious if we are mistaken.

Whether or not one is correct in making this application, it is not hard to see that this is the principal answer to Scott’s question. It is hard to believe the “they’re lying” theory not because of the probability that they are lying, but because we are unwilling to risk injuring someone with our opinion. This is without doubt a good motive from a moral standpoint.

But if you proceed to take this unwillingness as a sign of the probability that they are telling the truth, this would be a demonstrably miscalibrated probability assignment. Consider a story on Quora which makes a good example of Scott’s point:

I shuffled a deck of cards and got the same order that I started with.

No I am not kidding and its not because I can’t shuffle.

Let me just tell the story of how it happened. I was on a trip to Europe and I bought a pack of playing cards at the airport in Madrid to entertain myself on the flight back to Dallas.

It was about halfway through the flight after I’d watched Pixels twice in a row (That’s literally the only reason I even remembered this) And I opened my brand new Real Madrid Playing Cards and I just shuffled them for probably like 30 minutes doing different tricks that I’d learned at school to entertain myself and the little girl sitting next to me also found them to be quite cool.

I then went to look at the other sides of the cards since they all had a picture of the Real Madrid player with the same number on the back. That’s when I realized that they were all in order. I literally flipped through the cards and saw Nacho-Fernandes, Ronaldo, Toni Kroos, Karim Benzema and the rest of the team go by all in the perfect order.

Then a few weeks ago when we randomly started talking about Pixels in AP Statistics I brought up this story and my teacher was absolutely amazed. We did the math and the amount of possibilities when shuffling a deck of cards is 52! Meaning 52 x 51 x 50 x 49 x 48….

There were 8.0658175e+67 different combinations of cards that I could have gotten. And I managed to get the same one twice.

The lack of context here might make us more willing to say that Arman Razaali is lying, compared to Scott’s particular examples. Nonetheless, I think a normal person will feel somewhat unwilling to say, “he’s lying, end of story.” I certainly feel that myself.

It does not take many shuffles to essentially randomize a deck. Consequently if Razaali’s statement that he “shuffled them for probably like 30 minutes” is even approximately true, 1 in 52! is probably a good estimate of the chance of the outcome that he claims, if we assume that it happened by chance. It might be some orders of magnitude less since there might be some possibility of “unshuffling.” I do not know enough about the physical process of shuffling to know whether this is a real possibility or not, but it is not likely to make a significant difference: e.g. the difference between 10^67 and 10^40 would be a huge difference mathematically, but it would not be significant for our considerations here, because both are simply too large for us to grasp.
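The magnitude of 52! is easy to verify; the short Python check below simply reproduces the figure quoted in the story, and assumes nothing beyond a standard 52-card deck:

```python
import math

# Number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)

print(orderings)            # the exact 68-digit integer
print(f"{orderings:.4e}")   # about 8.0658e+67, matching the quoted figure
```

The exact figure matters less than its scale: as noted above, even shaving dozens of orders of magnitude off it would not change the argument.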

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence. In the linked post I said that the case of seeing ghosts, and similar things, might be unclear:

Or in other words, is claiming to have seen a ghost more like claiming to have picked 422,819,208, or is it more like claiming to have picked 500,000,000?

That remains undetermined, at least by the considerations which we have given here. But unless you have good reasons to suspect that seeing ghosts is significantly more rare than claiming to see a ghost, it is misguided to dismiss such claims as requiring some special evidence apart from the claim itself.

In this case there is no such unclarity – if we interpret the claim as “by pure chance the deck ended up in its original order,” then it is precisely like claiming to have picked 500,000,000, except that it is far less likely.

Note that there is some remaining ambiguity. Razaali could defend himself by saying, “I said it happened, I didn’t say it happened by chance.” Or in other words, “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?” But this is simply to point out that “he’s lying” and “this happened by pure chance” are not exhaustive alternatives. And this is true. But if we want to estimate the likelihood of those two alternatives in particular, we must say that it is far more likely that he is lying than that it happened, and happened by chance. And so much so that if one of these alternatives is true, it is virtually certain that he is lying.
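The comparison between the two particular alternatives can be put as a simple odds ratio. The sketch below assumes, purely for illustration, a rate of flat-out unmotivated lying of one in a billion, which is surely far lower than the real rate:

```python
import math

# Assumed (and surely far too low) rate of flat-out unmotivated lying
p_lie = 1e-9

# Chance of a fair shuffle restoring the original order: 1 in 52!
p_true_by_chance = 1 / math.factorial(52)

# Odds in favor of "he's lying" over "true, and happened by pure chance"
odds = p_lie / p_true_by_chance
print(f"{odds:.1e}")  # on the order of 10^58
```

Even with that absurdly generous lying rate, the odds favor lying by some fifty-eight orders of magnitude, which is why the text says that if one of these two alternatives is true, it is virtually certain to be the first.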

As I have said above, the inclination to doubt that such a person is lying primarily has a moral reason. This might lead someone to say that my estimation here also has a moral reason: I just want to form my beliefs in the “correct” way, they might say: it is not about whether Razaali’s story really happened or not.

Charles Taylor, in chapter 15 of A Secular Age, gives a similar explanation of the situation of former religious believers who apparently have lost their faith due to evidence and argument:

From the believer’s perspective, all this falls out rather differently. We start with an epistemic response: the argument from modern science to all-around materialism seems quite unconvincing. Whenever this is worked out in something closer to detail, it seems full of holes. The best examples today might be evolution, sociobiology, and the like. But we also see reasonings of this kind in the works of Richard Dawkins, for instance, or Daniel Dennett.

So the believer returns the compliment. He casts about for an explanation why the materialist is so eager to believe very inconclusive arguments. Here the moral outlook just mentioned comes back in, but in a different role. Not that, failure to rise to which makes you unable to face the facts of materialism; but rather that, whose moral attraction, and seeming plausibility to the facts of the human moral condition, draw you to it, so that you readily grant the materialist argument from science its various leaps of faith. The whole package seems plausible, so we don’t pick too closely at the details.

But how can this be? Surely, the whole package is meant to be plausible precisely because science has shown . . . etc. That’s certainly the way the package of epistemic and moral views presents itself to those who accept it; that’s the official story, as it were. But the supposition here is that the official story isn’t the real one; that the real power that the package has to attract and convince lies in it as a definition of our ethical predicament, in particular, as beings capable of forming beliefs.

This means that this ideal of the courageous acknowledger of unpalatable truths, ready to eschew all easy comfort and consolation, and who by the same token becomes capable of grasping and controlling the world, sits well with us, draws us, that we feel tempted to make it our own. And/or it means that the counter-ideals of belief, devotion, piety, can all-too-easily seem actuated by a still immature desire for consolation, meaning, extra-human sustenance.

What seems to accredit the view of the package as epistemically-driven are all the famous conversion stories, starting with post-Darwinian Victorians but continuing to our day, where people who had a strong faith early in life found that they had reluctantly, even with anguish of soul, to relinquish it, because “Darwin has refuted the Bible”. Surely, we want to say, these people in a sense preferred the Christian outlook morally, but had to bow, with whatever degree of inner pain, to the facts.

But that’s exactly what I’m resisting saying. What happened here was not that a moral outlook bowed to brute facts. Rather we might say that one moral outlook gave way to another. Another model of what was higher triumphed. And much was going for this model: images of power, of untrammelled agency, of spiritual self-possession (the “buffered self”). On the other side, one’s childhood faith had perhaps in many respects remained childish; it was all too easy to come to see it as essentially and constitutionally so.

But this recession of one moral ideal in face of the other is only one aspect of the story. The crucial judgment is an all-in one about the nature of the human ethical predicament: the new moral outlook, the “ethics of belief” in Clifford’s famous phrase, that one should only give credence to what was clearly demonstrated by the evidence, was not only attractive in itself; it also carried with it a view of our ethical predicament, namely, that we are strongly tempted, the more so, the less mature we are, to deviate from this austere principle, and give assent to comforting untruths. The convert to the new ethics has learned to mistrust some of his own deepest instincts, and in particular those which draw him to religious belief. The really operative conversion here was based on the plausibility of this understanding of our ethical situation over the Christian one with its characteristic picture of what entices us to sin and apostasy. The crucial change is in the status accorded to the inclination to believe; this is the object of a radical shift in interpretation. It is no longer the impetus in us towards truth, but has become rather the most dangerous temptation to sin against the austere principles of belief-formation. This whole construal of our ethical predicament becomes more plausible. The attraction of the new moral ideal is only part of this, albeit an important one. What was also crucial was a changed reading of our own motivation, wherein the desire to believe appears now as childish temptation. Since all incipient faith is childish in an obvious sense, and (in the Christian case) only evolves beyond this by being child-like in the Gospel sense, this (mis)reading is not difficult to make.

Taylor’s argument is that the arguments for unbelief are unconvincing; consequently, in order to explain why unbelievers find them convincing, he must find some moral explanation for why they do not believe. This turns out to be the desire to have a particular “ethics of belief”: they do not want to have beliefs which are not formed in such and such a particular way. This is much like the theoretical response above regarding my estimation of the probability that Razaali is lying, and how that might be considered a moral estimation, rather than being concerned with what actually happened.

There are a number of problems with Taylor’s argument, which I may or may not address in the future in more detail. For the moment I will take note of three things:

First, neither in this passage nor elsewhere in the book does Taylor explain in any detailed way why he finds the unbeliever’s arguments unconvincing. I find the arguments convincing, and it is the rebuttals (by others, not by Taylor, since he does not attempt this) that I find unconvincing. Now of course Taylor will say this is because of my particular ethical motivations, but I disagree, and I have considered the matter exactly in the kind of detail to which he refers when he says, “Whenever this is worked out in something closer to detail, it seems full of holes.” On the contrary, the problem of detail is mostly on the other side; most religious views can only make sense when they are not worked out in detail. But this is a topic for another time.

Second, Taylor sets up an implicit dichotomy between his own religious views and “all-around materialism.” But these two claims do not come remotely close to exhausting the possibilities. This is much like forcing someone to choose between “he’s lying” and “this happened by pure chance.” It is obvious in both cases (the deck of cards and religious belief) that the options do not exhaust the possibilities. So insisting on one of them is likely motivated itself: Taylor insists on this dichotomy to make his religious beliefs seem more plausible, using a presumed implausibility of “all-around materialism,” and my hypothetical interlocutor insists on the dichotomy in the hope of persuading me that the deck might have or did randomly end up in its original order, using my presumed unwillingness to accuse someone of lying.

Third, Taylor is not entirely wrong that such an ethical motivation is likely involved in the case of religious belief and unbelief, nor would my hypothetical interlocutor be entirely wrong that such motivations are relevant to our beliefs about the deck of cards.

But we need to consider this point more carefully. Insofar as beliefs are voluntary, you cannot make one side voluntary and the other side involuntary. You cannot say, “Your beliefs are voluntarily adopted due to moral reasons, while my beliefs are imposed on my intellect by the nature of things.” If accepting an opinion is voluntary, rejecting it will also be voluntary, and if rejecting it is voluntary, accepting it will also be voluntary. In this sense, it is quite correct that ethical motivations will always be involved, even when a person’s opinion is actually true, and even when all the reasons that make it likely are fully known. To this degree, I agree that I want to form my beliefs in a way which is prudent and reasonable, and I agree that this desire is partly responsible for my beliefs about religion, and for my above estimate of the chance that Razaali is lying.

But that is not all: my interlocutor (Taylor or the hypothetical one) is also implicitly or explicitly concluding that fundamentally the question is not about truth. Basically, they say, I want to have “correctly formed” beliefs, but this has nothing to do with the real truth of the matter. Sure, I might feel forced to believe that Razaali’s story isn’t true, but there really is no reason it couldn’t be true. And likewise I might feel forced to believe that Taylor’s religious beliefs are untrue, but there really is no reason they couldn’t be.

And in this respect they are mistaken, not because anything “couldn’t” be true, but because the issue of truth is central, much more so than forming beliefs in an ethical way. Regardless of your ethical motives, if you believe that Razaali’s story is true and happened by pure chance, it is virtually certain that you believe a falsehood. Maybe you are forming this belief in a virtuous way, and maybe you are forming it in a vicious way: but either way, it is utterly false. Either it in fact did not happen, or it in fact did not happen by chance.

We know this, essentially, from the “statistics” of the situation: no matter how many qualifications we add, lies in such situations will be vastly more common than truths. But note that something still seems “unconvincing” here, in the sense of Scott Alexander’s original post: even after “knowing all this,” he finds himself very unwilling to say they are lying. In a discussion with Angra Mainyu, I remarked that our apparently involuntary assessments of things are more like desires than like beliefs:

So rather than calling that assessment a belief, it would be more accurate to call it a desire. It is not believing something, but desiring to believe something. Hunger is the tendency to go and get food; that assessment is the tendency to treat a certain claim (“the USA is larger than Austria”) as a fact. And in both cases there are good reasons for those desires: you are benefited by food, and you are benefited by treating that claim as a fact.

In a similar way, because we have the natural desire not to injure people, we will naturally desire not to treat “he is lying” as a fact; that is, we will desire not to believe it. The conclusion that Angra should draw in the case under discussion, according to his position, is that I do not “really believe” that it is more likely that Razaali is lying than that his story is true, because I do feel the force of the desire not to say that he is lying. But I resist that desire, in part because I want to have reasonable beliefs, but most of all because it is false that Razaali’s story is true and happened by chance.

To the degree that this desire feels like a prior probability, and it does feel that way, it is necessarily miscalibrated. But to the degree that this desire remains nonetheless, this reasoning will continue to feel in some sense unconvincing. And it does in fact feel that way to me, even after making the argument, as expected. Very possibly, this is not unrelated to Taylor’s assessment that the argument for unbelief “seems quite unconvincing.” But discussing that in the detail which Taylor omitted is a task for another time.

# Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount of trade, if you desire both truth and other things. Eliezer Yudkowsky argues that self-deception, and therefore also such trade, is either impossible or stupid, depending on how it is understood:

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, “And now, I will irrationally believe that I will win the lottery, in order to make myself happy.”  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You’re welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don’t mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can’t know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

There are several errors here. The first is the denial that belief is voluntary. As I remarked in the comments to this post, it is best to think of “choosing to believe a thing” as “choosing to treat this thing as a fact.” And this is something which is indeed voluntary. Thus for example it is by choice that I am, at this very moment, treating it as a fact that belief is voluntary.

There is some truth in Yudkowsky’s remark that “you cannot make yourself believe the sky is green by an act of will.” But this is not because the thing itself is intrinsically involuntary. On the contrary, you could, if you wished, choose to treat the greenness of the sky as a fact, at least for the most part and in most ways. The problem is that you have no good motive to wish to act this way, and plenty of good motives not to act this way. In this sense, it is impossible for most of us to believe that the sky is green in the same way it is impossible for most of us to commit suicide; we simply have no good motive to do either of these things.

Yudkowsky’s second error is connected with the first. Since, according to him, it is impossible to deliberately and directly deceive oneself, self-deception can only happen in an indirect manner: “The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.” The idea is that ordinary beliefs are simply involuntary, but we can have beliefs that are somewhat voluntary by choosing “blindly to remain biased, without any clear idea of the consequences.” Since this is “willful stupidity,” a reasonable person would completely avoid such behavior, and thus all of his beliefs would be involuntary.

Essentially, Yudkowsky is claiming that we have some involuntary beliefs, and that we should avoid adding any voluntary beliefs to our involuntary ones. This view is fundamentally flawed precisely because all of our beliefs are voluntary, and thus we cannot avoid having voluntary beliefs.

Nor is it “willful stupidity” to trade away some truth for the sake of other good things. Completely avoiding this is in fact intrinsically impossible. If you are seeking one good, you are not equally seeking a distinct good; one cannot serve two masters. Thus since all people are interested in some goods distinct from truth, there is no one who fails to trade away some truth for the sake of other things. Yudkowsky’s mistake here is related to his wishful thinking about wishful thinking which I discussed previously. In this way he views himself, at least ideally, as completely avoiding wishful thinking. This is both impossible and unhelpful, impossible in that everyone has such motivated beliefs, and unhelpful because such beliefs can in fact be beneficial.

A better attitude to this matter is adopted by Robin Hanson, as for example when he discusses motives for having opinions in a post which we previously considered here. Bryan Caplan has a similar view, discussed here.

Once we have a clear view of this matter, we can use this to minimize the loss of truth that results from such beliefs. For example, in a post linked above, we discussed the argument that fictional accounts consistently distort one’s beliefs about reality. Rather than pretending that there is no such effect, we can deliberately consider to what extent we wish to be open to this possibility, depending on our other purposes for engaging with such accounts. This is not “willful stupidity”; the stupidity would be to engage in such trades without realizing that such trades are inevitable, and thus not to realize to what extent you are doing it.

Consider one of the cases of voluntary belief discussed in this earlier post. As we quoted at the time, Eric Reitan remarks:

For most horror victims, the sense that their lives have positive meaning may depend on the conviction that a transcendent good is at work redeeming evil. Is the evidential case against the existence of such a good really so convincing that it warrants saying to these horror victims, “Give up hope”? Should we call them irrational when they cling to that hope or when those among the privileged live in that hope for the sake of the afflicted? What does moral decency imply about the legitimacy of insisting, as the new atheists do, that any view of life which embraces the ethico-religious hope should be expunged from the world?

Here, Reitan is proposing that someone believe that “a transcendent good is at work redeeming evil” for the purpose of having “the sense that their lives have positive meaning.” If we look at this as it is, namely as proposing a voluntary belief for the sake of something other than truth, we can find ways to minimize the potential conflict between accuracy and this other goal. For example, the person might simply believe that “my life has a positive meaning,” without trying to explain why this is so. For the reasons given here, “my life has a positive meaning” is necessarily more probable and more known than any explanation for this that might be adopted. To pick a particular explanation and claim that it is more likely would be to fall into the conjunction fallacy.
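The conjunction point admits of a one-line check. The probabilities below are invented for illustration only; nothing turns on the particular numbers:

```python
# Invented illustrative probabilities: a conjunction is never more
# probable than either of its conjuncts taken alone.
p_meaningful = 0.9             # "my life has a positive meaning"
p_expl_given_meaningful = 0.5  # some particular explanation, given that

# P(meaningful AND explanation) = P(meaningful) * P(explanation | meaningful)
p_both = p_meaningful * p_expl_given_meaningful

assert p_both <= p_meaningful  # holds for any choice of probabilities
print(p_both)  # 0.45
```

Since a conditional probability never exceeds 1, the product can never exceed the first factor: the bare claim is always at least as probable as the bare claim plus an explanation.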

Of course, real life is unfortunately more complicated. The woman in Reitan’s discussion might well respond to our proposal somewhat in this way (not a real quotation):

Probability is not the issue here, precisely because it is not a question of the truth of the matter in itself. There is a need to actually feel that one’s life is meaningful, not just to believe it. And the simple statement “life is meaningful” will not provide that feeling. Without the feeling, it will also be almost impossible to continue to believe it, no matter what the probability is. So in order to achieve this goal, it is necessary to believe a stronger and more particular claim.

And this response might be correct. Some such goals, due to their complexity, might not be easily achieved without adopting rather unlikely beliefs. For example, Robin Hanson, while discussing his reasons for having opinions, several times mentions the desire for “interesting” opinions. This is a case where many people will not even notice the trade involved, because the desire for interesting ideas seems closely related to the desire for truth. But in fact truth and interestingness are diverse things, and the goals are diverse, and one who desires both will likely engage in some trade. In fact, relative to truth seeking, looking for interesting things is a dangerous endeavor. Scott Alexander notes that interesting things are usually false:

This suggests a more general principle: interesting things should usually be lies. Let me give three examples.

I wrote in Toxoplasma of Rage about how even when people crusade against real evils, the particular stories they focus on tend to be false disproportionately often. Why? Because the thousands of true stories all have some subtleties or complicating factors, whereas liars are free to make up things which exactly perfectly fit the narrative. Given thousands of stories to choose from, the ones that bubble to the top will probably be the lies, just like on Reddit.

Every time I do a links post, even when I am very careful to double- and triple- check everything, and to only link to trustworthy sources in the mainstream media, a couple of my links end up being wrong. I’m selecting for surprising-if-true stories, but there’s only one way to get surprising-if-true stories that isn’t surprising, and given an entire Internet to choose from, many of the stories involved will be false.

And then there’s bad science. I can’t remember where I first saw this, so I can’t give credit, but somebody argued that the problem with non-replicable science isn’t just publication bias or p-hacking. It’s that some people will be sloppy, biased, or just stumble through bad luck upon a seemingly-good methodology that actually produces lots of false positives, and that almost all interesting results will come from these people. They’re the equivalent of Reddit liars – if there are enough of them, then all of the top comments will be theirs, since they’re able to come up with much more interesting stuff than the truth-tellers. In fields where sloppiness is easy, the truth-tellers will be gradually driven out, appearing to be incompetent since they can’t even replicate the most basic findings of the field, let alone advance it in any way. The sloppy people will survive to train the next generation of PhD students, and you’ll end up with a stable equilibrium.

In a way this makes the goal of believing interesting things much like the woman’s case. The goal of “believing interesting things” will be better achieved by more complex and detailed beliefs, even though to the extent that they are more complex and detailed, they are simply that much less likely to be true.

The point of this present post, then, is not to deny that some goals might be such that they are better attained with rather unlikely beliefs, and in some cases even in proportion to the unlikelihood of the beliefs. Rather, the point is that a conscious awareness of the trades involved will allow a person to minimize the loss of truth involved. If you never look at your bank account, you will not notice how much money you are losing from that monthly debit for internet. In the same way, if you hold Yudkowsky’s opinion, and believe that you never trade away truth for other things, which is itself both false and motivated, you are like someone who never looks at your account: you will not notice how much you are losing.

# Crisis of Faith

In the last post, I linked to Fr. Joseph Bolin’s post on the commitment of faith. He says there:

Since faith by definition is about things that we do not see to be true, there is no inherent contradiction in faith as such being contradicted by things we do see to be true, such an absolute assent of faith seems to imply an assent to the content of faith so strong that one would desire to hold to it as true, “even if it (the content of faith) were to be false”. Can such faith be justified?

Consider the following situation: a woman has grounds to suspect her husband is cheating on her; there is a lot of evidence that he is; even when she asks him and he tells her that he is not, she must admit that the sum of evidence including his testimony is against him, and he probably is cheating. Still, she decides to believe him. I argue that the very act of believing him entails a commitment to him such that once she has given faith to his word, while it is still in fact possible that she is believing him though he is actually lying, this possibility is less relevant for her than it was prior to her giving faith. In this sense, after faith, the “if it were to be false” becomes less of a consideration for the believer, and to this degree she wills faith “even were it to be false”.

A more detailed analysis of the situation: various persons present her with claims or evidence that her husband is cheating on her. Before confronting him or asking him if he is, she collects various evidence for and against it. She decides that since believing him if he is dishonest is not without its own evils, if the evidence that he is cheating (after taking into account the evidence constituted by his statement on the matter) constitutes a near certainty that he is cheating — let’s say, over 95% probability that he is cheating — she shouldn’t believe him if he says he is not, but must either suspend judgment or maintain that he is cheating. Now, suppose the man says that he is not cheating, and the evidence is not quite that much against him, let’s say, the evidence indicates a 90% probability that he is cheating, and a 10% probability that he is not. She makes the decision to believe him. Since she would not decide to do so unless she believed that it were good to do so, she is giving an implicit negative value to “believing him, if he is in fact lying”, a much greater positive value to “believing him, if he is speaking the truth”, and consequently an implicit positive value to “believing him” (even though he is probably lying).

Going forward, she is presented with an easy opportunity to gather further evidence about whether he is in fact cheating. She must make a decision whether to do so. If she is always going to make the same decision at this point that she would have made if she had not yet decided to believe him, it seems that the “faith” she gives him and his word is rather empty. A given decision to pursue further evidence, while not incompatible with faith, is a blow against it — to the extent that, out of fidelity to him, she accepts his claim as sure, she must operate either on the assumption that further evidence will vindicate him, or that he is innocent despite the evidence. But to the extent she operates on one of these assumptions, there is no need to pursue further evidence. Pursuing evidence, therefore, implies abstracting from her faith in him. To pursue evidence because it is possible that further evidence will be even more against him and provide her with enough grounds to withdraw her assent to his claim of innocence means giving that faith a lesser role in her life and relationship with him, and is thereby a weakening of the exercise of that faith. Consequently, if that faith is a good thing, then, having given such faith, she must be more reluctant to seek a greater intellectual resolution of the case by greater evidence than she was before she had given it.

All of this is true in substance, although one could quibble with various details. For example, Fr. Joseph seems to be presuming for the sake of discussion that a person’s subjective assessment is at all times in conformity with the evidence, so that if more evidence is found, one must change one’s subjective assessment to that degree. But this is clearly not the case in general in regard to religious opinions. As we noted in the previous post, that assessment does not follow a random walk, and this proves that it is not simply a rational assessment of the evidence. And it is the random walk, rather than anything that happens with actual religious people, that would represent the real situation of someone with an “empty” faith, that is, of someone without any commitment of faith.

Teenagers will sometimes say to themselves, “My parents told me all these things about God and religion, but actually there are other families and other children who believe totally different things. I don’t have any real reason to think my family is right rather than some other. So God probably doesn’t exist.”

They might very well follow this up with, “You know, I said God doesn’t exist, but that was just because I was trying to reject my unreasonable opinions. I don’t actually know whether God exists or not.”

This is an example of the random walk, and represents a more or less rational assessment of the evidence available to teenagers. But what it most certainly does not represent is commitment of any kind. And to the degree that we think that such a commitment is good, it is reasonable to disapprove of such behavior, and this is why there does seem something wrong there, even if in fact the teenager’s religious opinions were not true in the first place.

Fr. Joseph’s original question was this: “Can (religious) faith entail an absolute commitment to the one in whom we place faith and his word, such that one should hold that ‘no circumstances could arise in which I would cease to believe’?” He correctly notes that this “seems to imply an assent to the content of faith so strong that one would desire to hold to it as true, ‘even if it (the content of faith) were to be false’”. For this reason, his post never actually answers the question. For although he is right to say that the commitment of faith implies giving preferential treatment to the claim that the content of one’s faith is true, it will not follow that this preferential treatment should be absolute, unless it is true that it is better to believe even if that content is false. And it would be extremely difficult to prove that, even if it were the case.

My own view is that one should be extremely hesitant to accept such an assessment, even of some particular claim, such as the one in the post linked above, that “God will always bring good out of evil.” And if one should be hesitant to make such an assertion about a particular claim, much more should one doubt that this claim is true in regard to the entire contents of a religious faith, which involves making many assertions. Some of the reasons for what I am saying here are much like some of the reasons for preserving the mean of virtue. What exactly will happen if I eat too much? I’m not sure, but I know it’s likely to be something bad. I might feel sick afterwards, but I also might not. Or I might keep eating too much, become very overweight, and have a heart attack at some point. Or I might, in the very process of eating too much, say at a restaurant, spend money that I needed for something else. Vicious behaviors are extreme insofar as they lack the mean of virtue, and insofar as they are extreme, they are likely to have extreme consequences of one kind or another. So we can know in advance that our vicious behaviors are likely to have bad consequences, without necessarily being able to point out the exact consequences in advance.

Something very similar applies to telling lies, and in fact telling lies is a case of vicious behavior, at least in general. It often seems like a lie is harmless, but then it turns out later that the lie caused substantial harm.

And if this is true about telling lies, it is also true about making false statements, even when those false statements are not lies. So we can easily assert that the woman in Eric Reitan’s story is better off believing that God will somehow redeem the evil of the death of her children, simply by looking at the particular situation. But if this turned out to be false, we have no way to know what harms might follow from her holding a false belief, and there would be a greater possibility of harm to the degree that she made that conviction more permanent. It would be easy enough to create stories to illustrate this, but I will not do that here. Just as eating too much, or talking too much, or moving about too much, can create any number of harms by multiple circuitous routes, so can believing in things that are false. One particularly manifest way this can happen is insofar as one false belief can lead to another, and although the original belief might seem harmless, the second belief might be very harmful indeed.

In general, Fr. Joseph seems to be asserting that the commitment of faith should lead a person not to pursue additional evidence relative to the truth of their faith, and apparently especially in situations where one already knows that there is a significant chance that the evidence will continue to be against it. This is true to some extent, but the right action in a concrete case will differ according to circumstances, especially, as argued here, if it is not better to believe in the situation where the content of the faith is false. Additionally, it will frequently not be a question of deciding to pursue evidence or not, but of deciding whether to think clearly about evidence or arguments that have entered one’s life even without any decision at all.

Consider the case of St. Therese, discussed in the previous post. Someone might argue thus: “Surely St. Therese’s commitment was absolute. You cannot conceive of circumstances in which she would have abandoned her faith. So if St. Therese was virtuous, it must be virtuous to have such an absolute commitment.” And it would follow that it is better to believe even if your faith is false, and that one should imitate her in having such an absolute commitment. Likewise, it would follow with probability, although not conclusively, that Shulem Deen should also have had such an absolute commitment to his Jewish faith, and should have kept believing in it no matter what happened. Of course, an additional consequence, unwelcome to many, would be that he should also have had an absolute refusal to convert to Christianity that could not be changed under any circumstances.

It is quite certain that St. Therese was virtuous. However, if you cannot conceive of any circumstances in which she would have abandoned her faith, that is more likely a limitation of your imagination than of the possibilities. Theoretically there could have been many circumstances in which abandoning it would have been quite possible. It is true that in the concrete circumstances in which she was living, such an abandonment would have been extremely unlikely, and likely not virtuous if it happened. But those are concrete circumstances, not abstractly conceivable circumstances. As noted in the previous post, the evidence that she had against her faith was very vague and general, and it is not clear that it could ever have become anything other than that without a substantially different life situation. And since it is true that the commitment of faith is a good reason to give preferential treatment to the truth of your faith, such vague and general evidence could not have been a good reason for her to abandon her faith. This is the real motivation for the above argument. It is clear enough that in her life as it was actually lived, there was not and could not be a good reason for her to leave her faith. But this is a question of the details of her life.

Shulem Deen, of course, lived in very different circumstances, and his religious faith itself differed greatly from that of St. Therese. Since I have already recommended his book, I will not attempt to tell his story for him, but it can be seen from the above reasoning that the answer to the question raised at the end of the last post might very well be, “They both did the right thing.”

The philosopher Gregory Dawes takes a rather different view:

Christian philosopher William Lane Craig writes somewhere about what he calls the “ministerial” and the “magisterial” use of reason. (It’s a traditional view — he’s merely citing Martin Luther — and one that Craig endorses.) On this view, the task of reason is to find arguments in support of the faith and to counter any arguments against it. Reason is not, however, the basis of the Christian’s faith. The basis of the Christian’s faith is (what she takes to be) the “internal testimony of the Holy Spirit” in her heart. Nor can rational reflection be permitted to undermine that faith. The commitment of faith is irrevocable; to fall away from it is sinful, indeed the greatest of sins.

It follows that while the arguments put forward by many Christian philosophers are serious arguments, there is something less than serious about the spirit in which they are being offered. There is a direction in which those arguments will not be permitted to go. Arguments that support the faith will be seriously entertained; those that apparently undermine the faith must be countered, at any cost. Philosophy, to use the traditional phrase, is merely a “handmaid” of theology.

There is, to my mind, something frivolous about a philosophy of this sort. My feeling is that if we do philosophy, it ought to be because we take arguments seriously. This means following them wherever they lead.

There is more than one way to read this. When Dawes says, “this means following them wherever they lead,” one could take that to imply a purely rational assessment of evidence, and no hesitancy whatsoever to consider any possible line of argument. This would be a substantial disagreement with Fr. Joseph’s position, and would in fact be mistaken. Fr. Joseph is quite right that the commitment of faith has implications for one’s behavior, and that it implies giving a preferential treatment to the claims of one’s faith. But this is probably not the best way to read Dawes, who seems to be objecting more to the absoluteness of the claim: “The commitment of faith is irrevocable,” and arguments “that apparently undermine the faith must be countered, at any cost.” And Dawes is quite right that such absolute claims go too far. Virtue is a mean and depends on circumstances, and there is enough space in the world for both Shulem Deen and St. Therese.

The reader might be wondering about the title of this post. Besides being a play on words, perhaps spoiled by mentioning it, it is a reference to the fact that Fr. Joseph is basically painting a very clear picture of the situation where a Catholic has a crisis of faith and overcomes it. This is only slightly distorted by the idealization of assuming that the person evaluates the evidence available to him in a perfectly rational way. But he points out, just as I did in the previous post, that such a crisis is mainly overcome by choosing not to consider any more evidence, or not to think about it anymore, and similar things. He describes this as choosing “not to pursue evidence” because of the idealization, but in real life this can also mean ceasing to pay attention to evidence that one already has, choosing to pay more attention to other motives that one has to believe that are independent of evidence, and the like.

# Whether Lying is Always Wrong?

It is clear that lying is wrong in general. And there seem to be good reasons for saying that this is true without exception. In the first place, as was said in the linked post, lying always harms the common good by taking away from the meaning of language.

This is also related to St. Thomas’s argument that lying is always wrong:

An action that is naturally evil in respect of its genus can by no means be good and lawful, since in order for an action to be good it must be right in every respect: because good results from a complete cause, while evil results from any single defect, as Dionysius asserts (Div. Nom. iv). Now a lie is evil in respect of its genus, since it is an action bearing on undue matter. For as words are naturally signs of intellectual acts, it is unnatural and undue for anyone to signify by words something that is not in his mind. Hence the Philosopher says (Ethic. iv, 7) that “lying is in itself evil and to be shunned, while truthfulness is good and worthy of praise.” Therefore every lie is a sin, as also Augustine declares (Contra Mend. i).

The idea here is that just as “killing an innocent person” is always wrong, so “speaking against one’s mind” is always wrong, and the harm consistently done to the language and to the common good is a sign of this wrongness. Still, there are cases where it is right to do something that involves the death of an innocent person incidentally, and likewise there could be cases where it is right to do something that incidentally involves speech apparently contrary to one’s thought. But just as such incidental cases are not murder, so such incidental cases are not lying. This post and the previous post are good examples, since I appear to be saying things contrary to my mind.

Even if someone does not accept St. Thomas’s manner of argument, there are reasons for thinking that lying is always harmful even in terms of its consequences. One should consider the consequences not only of the individual act, but also of the policy, and the policy “never tell lies” seems more beneficial than any policy permitting lies under some circumstances. We can consider the Prisoner’s Dilemma. If everyone has the policy of cooperating, everyone will be better off. Likewise, society will be better off if everyone has the policy of never lying. Of course, not everyone has this policy. Nonetheless, the more people adopt it, the more other people will be willing to adopt it, and the better off everyone will be. Even the typically discussed case of the Nazi and the Jews may not change this. If you tell the truth to the Nazi, it will be bad for the Jews in the particular case, but the world as a whole may be better off because of your policy of consistent truth-telling.
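The structure of this policy comparison can be shown with the standard Prisoner’s Dilemma payoff table. The numbers below are the usual textbook values, chosen purely for illustration; nothing in the post fixes them:

```python
# A minimal Prisoner's Dilemma payoff table. The analogy in the text:
# "cooperate" corresponds to the policy of never lying, "defect" to
# permitting lies when convenient. Payoff values are the standard
# textbook ones, chosen only for illustration.

# (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

both_cooperate = PAYOFFS[("cooperate", "cooperate")]
both_defect = PAYOFFS[("defect", "defect")]

# If both players hold the cooperative policy, each does better than
# if both defect, even though defecting is tempting in any single case.
print("both cooperate:", both_cooperate)
print("both defect:   ", both_defect)
```

As in the text, the point is about policies rather than single acts: everyone holding the cooperative policy leaves everyone better off, even though each individual defection looks profitable in isolation.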

On the other hand, it is also easy to argue that we should make an exception for cases like that of the Jews. In the first place, almost everyone would in fact make an exception in this case, and simply say that there are no Jews. Yes, you could respond with a verbal evasion, if you happened to think of one. But suppose that you are on the spot and one does not occur to you. Your real choice here is simply to say, “Yes, there are Jews,” or “No, there are none here.” If you do not respond, your house will be searched, which will have the same effect as giving an affirmative response. In practice most people would lie. Nor can this be dismissed as moral weakness, the way we can dismiss people’s tendency to overeat as moral weakness. For people regret overeating; they will say things like, “I wish I didn’t eat so much.” But in the case of the Nazi and the Jews, most people would lie, and would never regret it. They would never say, “I wish I had admitted the Jews were there.” This indicates that almost everyone agrees that it is ok to lie in that case: regardless of how they describe this situation philosophically, at a deep level they believe that lying is justified in this case. If you attempt to justify it by saying that it isn’t really lying in that case, then you are simply confirming the fact that you believe this.

And insofar as this is a practical matter, we can make a strong argument for their conclusion as a matter of practice, regardless of the theoretical truth of the matter. Suppose you are 95% certain of the arguments in the first part of this post: you think there is a 95% chance that lying is always wrong, even in the case of the Nazi and the Jews. Now the Nazi is at the door, asking about the Jews in your house. You can tell the truth. In this case, according to your opinion, there is a 95% chance that you will be doing the morally right thing, and incidentally allowing the death of some innocent persons. But there is a 5% chance that you will be doing the morally wrong thing: if you are wrong, you will not merely be doing something morally neutral, but something morally wrong, namely allowing the death of innocent persons for no good reason. Alternatively, you can lie. In this case, according to your opinion, there is a 95% chance that you will be doing something morally wrong, and a 5% chance that you will be doing the right thing. But compare the two possible wrongs. The 95% chance is of telling a useful lie, and saving lives: if it is morally wrong, it is a small wrong. The 5% chance, however, is of pointlessly allowing deaths: if that is morally wrong, it is extremely evil.

And it is easy to argue that in practice there is only one good choice here: a certainty of saving lives, together with a 95% chance of a slightly wrong act, seems much better than the certainty of allowing deaths, together with a 5% chance of an extremely evil act.
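The arithmetic behind this practical argument can be made explicit. In the sketch below, the 95%/5% split comes from the text, while the numeric “moral costs” are illustrative assumptions of mine, chosen only to exhibit the structure of the comparison:

```python
# Toy expected-value sketch of the practical argument above.
# The probability (95% that lying is always wrong) comes from the text;
# the numeric "moral costs" are illustrative assumptions only.

P_LYING_ALWAYS_WRONG = 0.95

# Assumed costs (more negative = worse), purely for illustration:
COST_SMALL_WRONG = -1    # telling a lie that is in fact wrong
COST_GRAVE_WRONG = -100  # pointlessly allowing deaths for no good reason
COST_DEATHS = -100       # deaths that follow, even when allowed rightly

# Option 1: tell the truth.
#   95% chance: morally right, but deaths follow incidentally.
#    5% chance: morally wrong -- deaths allowed for no good reason.
ev_truth = (P_LYING_ALWAYS_WRONG * COST_DEATHS
            + (1 - P_LYING_ALWAYS_WRONG) * (COST_DEATHS + COST_GRAVE_WRONG))

# Option 2: lie.
#   95% chance: a small wrong, but lives are saved.
#    5% chance: morally right, lives saved, no wrong at all.
ev_lie = (P_LYING_ALWAYS_WRONG * COST_SMALL_WRONG
          + (1 - P_LYING_ALWAYS_WRONG) * 0)

print(f"expected value, telling the truth: {ev_truth:.2f}")
print(f"expected value, lying:             {ev_lie:.2f}")
```

Under any assignment of costs in which pointless deaths are far worse than a small wrong, lying dominates; the particular numbers only make the asymmetry visible.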

# Lying

St. Thomas speaks of truth as a part of justice:

Since man is a social animal, one man naturally owes another whatever is necessary for the preservation of human society. Now it would be impossible for men to live together, unless they believed one another, as declaring the truth one to another. Hence the virtue of truth does, in a manner, regard something as being due.

It is not clear whether St. Thomas intends to say precisely this, but in fact it would be impossible for men to live together without believing one another in a particular sense, namely it would be impossible for them to speak a common language, or in other words for them to communicate with one another by language at all.

Consider what would happen if people only said “this is red” about things that are blue. If this happened, “red” would simply acquire the meaning that “blue” presently has. The resulting situation would be entirely normal, except that the word “red” would have a different meaning.

Likewise, consider what would happen if people said “this is red” about random things in random situations. The phrase would cease to have any concrete meaning, and if the situations were randomized enough, the phrase would cease to have any meaning at all.

Again, supposing that one man had the intention of deceiving another as much as possible, as soon as both men are aware of this intention, the one who wishes to deceive can no longer do so. But he also cannot communicate anything; if he says, “there will be a concert tomorrow,” the other man will not believe that there will be a concert tomorrow. But neither will he conclude that there will not be a concert, because the deceiving one might have hoped for this result. Consequently the other man will cease to pay any attention whatsoever to what the deceiver says.

Similarly, if all men had the intention of deceiving all others as much as possible, language would simply cease to have meaning, and people would simply stop listening to one another.

Saying all of this in another way, we cannot understand the meaning of words unless they actually have some correlation with reality. This implies that it is basically necessary for truth telling to be more common than lying in order for language to exist at all; and this necessity is a necessity of fact, not merely of precept.
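As a toy illustration of this last point (my own sketch, not anything from the post), one can simulate a listener trying to learn what “red” refers to from speakers who are mostly truthful, versus speakers who label things at random:

```python
import random

random.seed(0)

def infer_meaning(truth_rate, n=10_000):
    """A listener tries to learn what 'red' refers to by watching
    speakers label objects. Speakers label truthfully with probability
    truth_rate, and otherwise pick a label at random. Returns the
    distribution of actual colors among objects called 'red'."""
    colors = ["red", "blue", "green"]
    counts = {c: 0 for c in colors}
    for _ in range(n):
        actual = random.choice(colors)
        label = actual if random.random() < truth_rate else random.choice(colors)
        if label == "red":
            counts[actual] += 1
    total = sum(counts.values())
    return {c: counts[c] / total for c in colors}

# Mostly truthful speakers: "red" correlates strongly with red things,
# so the listener can recover the word's meaning.
print(infer_meaning(0.9))

# Fully random speakers: "red" is said equally of everything, so the
# word carries no information about the world at all.
print(infer_meaning(0.0))
```

With mostly truthful speakers, the vast majority of things called “red” really are red, so the correlation lets a newcomer learn the word; with fully random speakers the distribution is flat and nothing can be learned, which is the factual necessity the paragraph above describes.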

It follows that one harmful effect of lying is that it damages language, namely by tending to make it less meaningful. In some cases, we can see that the harm has already been done: for example, when someone asks, “How are you doing?” and the other responds, “Fine,” his response is meaningless, and it has become so on account of many past lies. And insofar as language is a common good, since it is a tool that benefits the whole community by having meaning, lying is always harmful to the common good by tending to take away meaning from the language in this way.