How to Build an Artificial Human

I was going to use “Artificial Intelligence” in the title here but realized after thinking about it that the idea is really more specific than that.

I came up with the idea here while thinking more about the problem I raised in an earlier post about a serious obstacle to creating an AI. As I said there:

Current AI systems are not universal, and clearly have no ability whatsoever to become universal, without first undergoing deep changes in those systems, changes that would have to be initiated by human beings. What is missing?

The problem is the training data. The process of evolution produced the general ability to learn by using the world itself as the training data. In contrast, our AI systems take a very small subset of the world (like a large set of Go games or a large set of internet text), and train a learning system on that subset. Why take a subset? Because the world is too large to fit into a computer, especially if that computer is a small part of the world.

This suggests that going from the current situation to “artificial but real” intelligence is not merely a question of making things better and better little by little. There is a more fundamental problem that would have to be overcome, and it won’t be overcome simply by larger training sets, by faster computing, and things of this kind. This does not mean that the problem is impossible, but it may turn out to be much more difficult than people expected. For example, if there is no direct solution, people might try to create Robin Hanson’s “ems”, where one would more or less copy the learning achieved by natural selection. Or even if that is not done directly, a better understanding of what it means to “know how to learn,” might lead to a solution, although probably one that would not depend on training a model on massive amounts of data.

Proposed Predictive Model

Perhaps I was mistaken in saying that “larger training sets” would not be enough, or at any rate not enough to get past this basic obstacle. Perhaps it is enough to choose the subset correctly… namely by choosing the subset of the world that we know to contain general intelligence. Instead of training our predictive model on millions of Go games or millions of words, we will train it on millions of human lives.

This project will be extremely expensive. We might need to hire 10 million people to rigorously lifelog for the next 10 years. This has to be done in as much detail as possible; in particular we would want them recording constant audio and visual streams, along with as much else as possible. If we pay our crew an annual salary of $75,000, the salaries will come to $7.5 trillion over the ten years; there will be some small additions for equipment and maintenance, but these will be very small compared to the salary costs.
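For the record, the arithmetic behind that figure, as a trivial Python sketch (the headcount, salary, and duration are just the assumptions stated above):

    # Back-of-the-envelope cost of the lifelogging crew (assumptions from the text above)
    people = 10_000_000   # number of lifeloggers
    salary = 75_000       # annual salary in dollars
    years = 10            # duration of the project

    salary_cost = people * salary * years
    print(f"Salary cost: ${salary_cost:,}")   # $7,500,000,000,000, i.e. $7.5 trillion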

Presumably in order to actually build such a large model, various scaling issues would come up and need to be solved. And in principle nothing prevents these from being very hard to solve, or even impossible in practice. But since we do not know that this would happen, let us skip over this and pretend that we have succeeded in building the model. Once this is done, our model should be able to fairly easily take a point in a person’s life and give a fairly sensible continuation over at least a short period of time, just as GPT-3 can give fairly sensible continuations to portions of text.

It may be that this is enough to get past the obstacle described above, and once this is done, it might be enough to build a general intelligence using other known principles, perhaps with some research and refinement that could be done during the years in which our crew would be building their records.

Required Elements

Live learning. In the post discussing the obstacle, I noted that there are two kinds of learning: the kind that comes from evolution, and the kind that happens during life. Our model represents the kind that comes from evolution; unlike GPT-3, which cannot learn anything new, our AI needs to remember what has actually happened during its life and to be able to use this to acquire knowledge about its particular situation. This is not difficult in theory, but you would need to think carefully about how this should interact with the general model; you do not want to simply add its particular experiences as another individual example (not that such an addition to an already trained model is simple anyway.)
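To illustrate the kind of separation I have in mind, here is a sketch of my own (the class, the memory structure, and the simple lookup rule are invented for illustration, not a worked-out design): keep the big pretrained model frozen and attach an episodic memory that records what actually happens to this particular AI.

    from collections import deque

    class LiveLearner:
        """Illustrative sketch: a frozen pretrained predictor plus an episodic memory
        of the agent's own life, consulted before falling back on the general model."""

        def __init__(self, base_model, memory_size=100_000):
            self.base_model = base_model             # frozen "evolutionary" model: situation -> prediction
            self.memory = deque(maxlen=memory_size)  # (situation, outcome) pairs from this agent's life

        def observe(self, situation, outcome):
            """Store what actually happened; this is the 'live' part of learning."""
            self.memory.append((situation, outcome))

        def predict(self, situation):
            """Prefer a remembered outcome for this exact situation; otherwise defer to the general model."""
            for past_situation, outcome in reversed(self.memory):
                if past_situation == situation:
                    return outcome
            return self.base_model(situation)

    # Toy usage: the general model knows nothing about this agent's particular kitchen.
    learner = LiveLearner(base_model=lambda s: "unknown")
    learner.observe("where are my keys", "on the hook by the door")
    print(learner.predict("where are my keys"))   # "on the hook by the door"
    print(learner.predict("where is the car"))    # "unknown" (falls back on the general model)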

Causal model. Our AI needs not just a general predictive model of the world, but specifically a causal one; not just the general idea that “when you see A, you will soon see B,” but the idea that “when there is an A — which may or may not be seen — it will make a B, which you may or may not see.” This is needed for many reasons, but in particular, without such a causal model, long term predictions or planning will be impossible. If you take a model like GPT-3 and force it to continue producing text indefinitely, it will either repeat itself or eventually go completely off topic. The same thing would happen to our human life model — if we simply used the model without any causal structure, and forced it to guess what would happen indefinitely far into the future, it would eventually produce senseless predictions.
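A toy illustration of the difference (entirely my own construction, not taken from any paper): a surface-level “A then B” predictor degrades as the prediction horizon grows, while a predictor that runs the hidden cause forward does not. For simplicity the causal predictor here is simply handed the true mechanism; inducing that mechanism from observations is the hard part.

    import random
    random.seed(0)

    # Hidden cause: a lamp on a timer (10 steps on, 10 steps off).
    def hidden_lamp(t):
        return "on" if (t // 10) % 2 == 0 else "off"

    # Observation: the lamp's state, except that 30% of the time it is out of view.
    def observe(t):
        return "unseen" if random.random() < 0.3 else hidden_lamp(t)

    # Surface model: "whatever I last saw will continue" (pure A-then-B association).
    def surface_predict(history, horizon):
        last = next((o for o in reversed(history) if o != "unseen"), "off")
        return [last] * horizon

    # Causal model: run the underlying timer forward (assume its phase is already known).
    def causal_predict(history, horizon):
        t_now = len(history)
        return [hidden_lamp(t_now + k) for k in range(horizon)]

    history = [observe(t) for t in range(25)]
    truth = [hidden_lamp(25 + k) for k in range(40)]

    surface = surface_predict(history, 40)
    causal = causal_predict(history, 40)
    print("surface accuracy:", sum(p == t for p, t in zip(surface, truth)) / 40)  # ~0.5, chance level
    print("causal accuracy: ", sum(p == t for p, t in zip(causal, truth)) / 40)   # 1.0, regardless of horizon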

In the paper Making Sense of Raw Input, published by Google DeepMind, there is a discussion of an implementation of this sort of model, although trained on an extremely easy environment (compared to our task, which would be to train it on human lives).

The Apperception Engine attempts to discern the nomological structure that underlies the raw sensory input. In our experiments, we found the induced theory to be very accurate as a predictive model, no matter how many time steps into the future we predict. For example, in Seek Whence (Section 5.1), the theory induced in Fig. 5a allows us to predict all future time steps of the series, and the accuracy of the predictions does not decay with time.

In Sokoban (Section 5.2), the learned dynamics are not just 100% correct on all test trajectories, but they are provably 100% correct. These laws apply to all Sokoban worlds, no matter how large, and no matter how many objects. Our system is, to the best of our knowledge, the first that is able to go from raw video of non-trivial games to an explicit first-order nomological model that is provably correct.

In the noisy sequences experiments (Section 5.3), the induced theory is an accurate predictive model. In Fig. 19, for example, the induced theory allows us to predict all future time steps of the series, and does not degenerate as we go further into the future.

(6.1.2 Accuracy)

Note that this does not have the problem of quick divergence from reality as you predict into the distant future. It will also improve our AI’s live learning:

A system that can learn an accurate dynamics model from a handful of examples is extremely useful for model-based reinforcement learning. Standard model-free algorithms require millions of episodes before they can reach human performance on a range of tasks [31]. Algorithms that learn an implicit model are able to solve the same tasks in thousands of episodes [82]. But a system that learns an accurate dynamics model from a handful of examples should be able to apply that model to plan, anticipating problems in imagination rather than experiencing them in reality [83], thus opening the door to extremely sample efficient model-based reinforcement learning. We anticipate a system that can learn the dynamics of an ATARI game from a handful of trajectories, and then apply that model to plan, thus playing at reasonable human level on its very first attempt.

(6.1.3. Data efficiency)

“We anticipate”, as in: Google has not yet built such a thing, but they expect to be able to build it.

Scaling a causal model to work on our human life dataset will probably require some of the most difficult new research of this entire proposal.

Body. In order to engage in live learning, our AI needs to exist in the world in some way. And for the predictive model to do it any good, the world that it exists in needs to be a roughly human world. So there are two possibilities: either we simulate a human world in which it will possess a simulated human body, or we give it a robotic human-like body that will exist physically in the human world.

In relation to our proposal, these are not very different, but the former is probably more difficult, since we would have to simulate pretty much the entire world, and the more distant our simulation is from the actual world, the less helpful its predictive model would turn out to be.

Sensation. Our AI will need to receive input from the world through something like “senses.” These will need to correspond reasonably well with the data as provided in the model; e.g. since we expect to have audio and visual recording, our AI will need sight and hearing.

Predictive Processing. Our AI will need to function this way in order to acquire self-knowledge and free will, without which we would not consider it to possess general intelligence, however good it might be at particular tasks. In particular, at every point in time it will have predictions, based on the general human-life predictive model and on its causal model of the world, about what will happen in the near future. These predictions need to function in such a way that when it makes a relevant prediction, e.g. when it predicts that it will raise its arm, it will actually raise its arm.

(We might not want this to happen 100% of the time — if such a prediction is very far from the predictive model, we might want the predictive model to take precedence over this power over itself, much as happens with human beings.)
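As a rough sketch of how such a precedence rule might look (the threshold and the plausibility function here are invented for illustration; nothing in this snippet is a worked-out mechanism):

    def act_on_prediction(prediction, model_probability, threshold=0.01):
        """Carry out a self-prediction unless the general human-life model
        considers it wildly improbable, in which case the model takes precedence."""
        if model_probability(prediction) >= threshold:
            return prediction   # the prediction is fulfilled: the arm goes up
        return None             # too far from the model: the prediction is vetoed

    # Toy usage with a made-up plausibility table.
    plausibility = {"raise right arm": 0.4, "levitate across the room": 1e-9}
    lookup = lambda p: plausibility.get(p, 0.0)
    print(act_on_prediction("raise right arm", lookup))           # "raise right arm"
    print(act_on_prediction("levitate across the room", lookup))  # None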

Thought and Internal Sensation. Our AI needs to be able to notice that when it predicts it will raise its arm, it succeeds, and it needs to learn that in these cases its prediction is the cause of raising the arm. Only in this way will its live learning produce a causal model of the world which actually has self-knowledge: “When I decide to raise my arm, it happens.” This will also teach it the distinction between itself and the rest of the world; if it predicts that the sun will change direction, this does not happen. In order for all this to happen, the AI needs to be able to see its own predictions, not just what happens; the predictions themselves have to become a kind of input, similar to sight and hearing.
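A minimal sketch of that feedback loop (purely illustrative; the bookkeeping below just tracks how often each kind of prediction is fulfilled, which is the raw material for the self-versus-world distinction):

    from collections import defaultdict

    class SelfModel:
        """Illustrative sketch: predictions are fed back in as observations, so the agent
        can learn which of its predictions reliably bring about what they predict (the
        'self') and which do not (the rest of the world)."""

        def __init__(self):
            self.counts = defaultdict(lambda: [0, 0])   # prediction -> [times fulfilled, times made]

        def record(self, prediction, came_true):
            fulfilled, total = self.counts[prediction]
            self.counts[prediction] = [fulfilled + int(came_true), total + 1]

        def controllability(self, prediction):
            fulfilled, total = self.counts[prediction]
            return fulfilled / total if total else None

    model = SelfModel()
    for _ in range(20):
        model.record("I will raise my arm", came_true=True)            # self-predictions are fulfilled
        model.record("the sun will reverse course", came_true=False)   # the world ignores these

    print(model.controllability("I will raise my arm"))          # 1.0 -> "when I decide, it happens"
    print(model.controllability("the sun will reverse course"))  # 0.0 -> not under my control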

What was this again?

If we don’t run into any new fundamental obstacle along the way (I mentioned a few points where this might happen), the above procedure might actually produce an artificial general intelligence at a rough cost of $10 trillion (rounded up to account for hardware, research, and so on) and a time period of 10-20 years. But I would call your attention to a few things:

First, this is basically an artificial human, even to the extent that the easiest implementation likely requires giving it a robotic human body. It is not more general than that, and there is little reason to believe that our AI would be much more intelligent than a normal human, or that we could easily make it more intelligent. It would be fairly easy to give it quick mental access to other things, like mathematical calculation or internet searches, but this would not be much faster than a human being with a calculator or internet access. Like with GPT-N, one factor that would tend to limit its intelligence is that its predictive model is based on the level of intelligence found in human beings; there is no reason it would predict it would behave more intelligently, and so no reason why it would.

Second, it is extremely unlikely that anyone will implement this research program anytime soon. Why? Because you don’t get anything out of it except an artificial human. We have easier and less expensive ways to make humans, and $10 trillion is around the most any country has ever spent on anything, and never deliberately on one single project. Nonetheless, if no better way to make an AI is found, one can expect that eventually something like this will be implemented; perhaps by China in the 22nd century.

Third, note that “values” did not come up in this discussion. I mentioned this in one of the earlier posts on predictive processing:

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.

There was no need for an explicit discussion of values because they are an indirect consequence. What would our AI care about? It would care roughly speaking about the same things we care about, because it would predict (and act on the prediction) that it would live a life similar to a human life. There is definitely no specific reason to think it would be interested in taking over the world, although this cannot be excluded absolutely, since this is an interest that humans sometimes have. Note also that Nick Bostrom was wrong: I have just made a proposal that might actually succeed in making a human-like AI, but there is no similar proposal that would make an intelligent paperclip maximizer.

This is not to say that we should not expect any bad behavior at all from such a being; the behavior of the AI in the film Ex Machina is a plausible fictional representation of what could go wrong. Since what it is “trying” to do is to get predictive accuracy, and its predictions are based on actual human lives, it will “feel bad” about the lack of accuracy that results from the fact that it is not actually human, and it may act on those feelings.

Prayer and Probability

The reader might wonder about the relation between the previous post and my discussion of Arman Razaali. If I could say it is more likely that he was lying than that the thing happened as stated, why shouldn’t they believe the same about my personal account?

In the first place there is a question of context. I deliberately took Razaali’s account randomly from the internet without knowing anything about him. Similarly, if someone randomly passes through and reads the previous post without having read anything else on this blog, it would not be unreasonable for them to think I might have just made it up. But if someone has read more here they probably have a better estimate of my character. (If you have read more and still think I made it up, well, you are a very poor judge of character and there is not much I can do about that.)

Second, I did not say he was lying. I said it was more likely than the extreme alternative hypothesis that the thing happened exactly as stated and that it happened purely by chance. And given later events (namely his comment here), I do not think he was lying at all.

Third, the probabilities are very different.

“Calculating” the probability

What is the probability of the events I described happening purely by chance? The first thing to determine is what we are counting when we say that something has a chance of 1/X, whatever X is. Out of X cases, the thing should happen about once. In the Razaali case, ‘X’ would be something like “shuffling a deck of cards for 30 minutes and ending up with the deck in the original order.” That should happen about once, if you shuffle and check your deck of cards about 10^67 times.
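Where the 10^67 comes from: it is roughly the number of possible orderings of a 52-card deck, which a couple of lines of Python will confirm.

    import math

    # Number of orderings of a 52-card deck
    orderings = math.factorial(52)
    print(f"{orderings:.2e}")   # ~8.07e67, on the order of the 10^67 figure above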

It is not so easy to say what you are counting if you are trying to determine the probability of a coincidence. And one factor that makes this feel weirder and less probable is that since a coincidence involves several different things happening, you tend to think about it as though there were an extra difficulty in each and every one of the things needing to happen. But in reality you should take one of them as a fixed fact and simply ask about the probability of the other given the fixed thing. To illustrate this, consider the “birthday problem”: in a group of 23 people, the chance that two of them will have the same birthday is over 50%. This “feels” too high; most people would guess that the chance would be lower. But even without doing the math, one can begin to see why this is so by thinking through a few steps of the problem. 22 days is about 6% of the days in a year; so if we take one person, who has a birthday on some day or other, there will be about a 6% chance that one of the other 22 people has the same birthday. If none of them do, take the second person; the chance that one of the remaining 21 people will have the same birthday as them will still be pretty close to 6%, which gets us up to almost 12% (it doesn’t quite add up in exactly this way, but it’s close). And we still have a lot more combinations to check. So you can already start to see how easy it will turn out to be to get up to 50%. In any case, the basic point is that the “coincidence” is not important; each person has a birthday, and we can treat that day as fixed while we compare it to all the others.
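For those who want to check these figures, a few lines of Python will do it:

    # Birthday problem: probability that at least two of n people share a birthday
    # (365 equally likely days, ignoring leap years).
    def p_shared_birthday(n):
        p_all_distinct = 1.0
        for k in range(n):
            p_all_distinct *= (365 - k) / 365
        return 1 - p_all_distinct

    print(round(p_shared_birthday(23), 3))   # ~0.507 -> over 50% for 23 people
    print(round(1 - (364 / 365) ** 22, 3))   # ~0.059 -> the "about 6%" chance that any of
                                             #           the other 22 people match one person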

In the same way, if you are asking about the probability that someone prays for a thing, and then that thing happens (by chance), you don’t need to consider the prayer as some extra factor — it is enough to ask how often the thing in question happens, and that will tell you your chance. If someone is looking for a job and prays a novena for this intention, and receives a job offer immediately afterwards, the chance will be something like “how often a person looking for a job receives a job offer.” For example, if it takes five months on average to get a job when you are looking, the probability of receiving an offer on a random day should be about 1/150; so out of 150 people praying novenas for a job while engaged in a job search, about 1 of them should get an offer immediately afterwards.

What would have counted as “the thing happening” in the personal situation described in the last post? There are a number of subjective factors here, and the answer depends on how one looks at it, especially on the detail with which the situation is described. For example, as I said in the last post, it is normal to think of the “answer” to a novena as coming on the last day or the day after — so if a person praying for a job receives an offer on either of those days, they will likely consider it just as much of an answer. This means the estimate of 1/150 is really too low; it should really be 1/75. And given that many people would stretch out the period (in which they would count the result as an answer) to as much as a week, we could make the odds as high as 1/21. Looking loosely at other details could similarly improve the odds; e.g. if receiving an interview invitation that later leads to a job is included, the odds would be even higher.

But since we are considering whether the odds might be as bad as 1/10^67, let’s assume we include a fair amount of detail. What are the odds that on a specific day a stranger tells someone that “Our Lady wants you to become a religious and she is afraid that you are going astray,” or words to that effect?

The odds here should be just as objective as the odds with the cards — there should be a real number here — for reasons explained elsewhere, but unfortunately unlike the cards, we have nowhere near enough experience to get a precise number. Nonetheless it is easy to see that various details about the situation made it actually more likely than it would be for a perfectly random person. Since I had a certain opinion of my friend’s situation, that makes it far more likely than chance that other people aware of the situation would have a similar opinion. And although we are talking about a “stranger” here, that stranger was known to a third party that knew my friend, and we have no way of knowing what, if anything, might have passed through that channel.

If we arbitrarily assume that one in a million people in similar situations (i.e. where other people have similar opinions about them) hear such a thing at some point in their lives, and assume that we need to hit one particular day out of 50 years here, then we can “calculate” the chance: 1 / (365 * 50 * 1,000,000), or about 1 in 18 billion. To put it in counting terms, 1 in 18 billion novenas like this will result in the thing happening by chance.

Now it may be that one in a million persons is too high (although if anything it may also be too low; the true value may be more like 1 / 100,000, making the overall probability about 1 in 1.8 billion). But it is easy to see that there is no reasonable way that you can say this is as unlikely as shuffling a deck of cards and getting it in the original order.
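Putting the same “calculation” in code, with both the one-in-a-million assumption and the looser one-in-a-hundred-thousand assumption:

    # The "calculation" from the paragraphs above.
    days_in_life = 365 * 50          # hitting one particular day out of roughly 50 years
    base_rate = 1 / 1_000_000        # assumed fraction of people in similar situations
                                     # who ever hear such a thing

    p = base_rate / days_in_life
    print(f"1 in {1 / p:,.0f}")      # 1 in 18,250,000,000 -> about 1 in 18 billion

    p_alt = (1 / 100_000) / days_in_life   # with the more generous rate
    print(f"1 in {1 / p_alt:,.0f}")        # 1 in 1,825,000,000 -> about 1 in 1.8 billion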

The Alternative Hypothesis

A thing that happens once in 18 billion person days is not so rare that you would expect such things to never occur (although you would expect them to most likely not happen to you). Nonetheless, you might want to consider whether there is some better explanation than chance.

But a problem arises immediately: it is not clear that the alternative makes it much more likely. After all, I was very surprised by these events when they happened, even though at the time I did accept an explicitly religious explanation. Indeed, Fr. Joseph Bolin argues that you should not expect prayer to increase the chances of any event. But if this is the case, then the odds of it happening will be the same given the religious explanation as given the chance explanation. Which means the event would not even be evidence for the religious explanation.

In actual fact, it is evidence for the religious explanation, but only because Fr. Joseph’s account is not necessarily true. It could be true that when one prays for something sufficiently rare, the chance of it happening increases by a factor of 1,000; the cases would still be so rare that people would not be likely to discover this fact.

Nonetheless, the evidence is much weaker than a probability of 1 in 18 billion would suggest, namely because the alternative hypothesis does not prevent the events from remaining very unlikely. This is an application of the discussion here, where I argued that “anomalous” evidence should not change your opinion much about anything. This is actually something the debunkers get right, even if they are mistaken about other things.
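To make the point explicit: the strength of the evidence is given by the ratio of the probabilities under the two hypotheses, not by the raw improbability under chance. With the purely hypothetical factor of 1,000 mentioned above, the update is capped at 1,000, no matter how tiny the chance-probability itself is.

    # Why the evidence is much weaker than "1 in 18 billion" suggests.
    # (The factor of 1,000 is the hypothetical from the discussion above,
    # not an established number.)
    p_event_given_chance = 1 / 18_000_000_000
    prayer_boost = 1_000
    p_event_given_religious = prayer_boost * p_event_given_chance

    bayes_factor = p_event_given_religious / p_event_given_chance
    print(bayes_factor)   # 1000.0 -> the odds shift by at most about 1,000x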

Might People on the Internet Sometimes Tell the Truth?

Lies and Scott Alexander

Scott Alexander wrote a very good post called Might People on the Internet Sometimes Lie, which I have linked to several times in the past. In the first linked post (Lies, Religion, and Miscalibrated Priors), I answered Scott’s question (why it is hard to believe that people are lying even when they probably are), but also pointed out that “either they are lying or the thing actually happened in such and such a specific way” is a false dichotomy in any case.

In the example in my post, I spoke about Arman Razaali and his claim that he shuffled a deck of cards for 30 minutes and ended up with the deck in its original order. As I stated in the post,

People demonstrably lie at far higher rates than 1 in 10^67 or 1 in 10^40. This will remain the case even if you ask about the rate of “apparently unmotivated flat out lying for no reason.” Consequently, “he’s lying, period,” is far more likely than “the story is true, and happened by pure chance.” Nor can we fix this by pointing to the fact that an extraordinary claim is a kind of extraordinary evidence

But as I also stated there, those are not the only options. As it turns out, although my readers may have missed this, Razaali himself stumbled upon my post somewhat later and posted something in the comments there:

At first, I must say that I was a bit flustered when I saw this post come up when I was checking what would happen when I googled myself. But it’s an excellent read, exceptionally done with excellent analysis. Although I feel the natural urge to be offended by this, I’m really not. Your message is very clear, and it articulates the inner workings of the human mind very well, and in fact, I found that I would completely agree. Having lost access to that Quora account a month or two ago, I can’t look back at what I wrote. I can easily see how the answer gave on Quora could very easily be seen as a lie, and if I read it with no context, I would probably think it was fake too. But having been there at the moment as I counted the cards, I am biased towards believing what I saw, even though I could have miscounted horrendously.

Does this sound like something written by one of Scott Alexander’s “annoying trolls”?

Not to me, anyway. I am aware that I am also disinclined for moral reasons to believe that Razaali was lying, for the reasons I stated in that post. Nonetheless, it seems fair to say that this comment fits better with some intermediate hypothesis (e.g. “it was mostly in order and he was mistaken”) rather than with the idea that “he was lying.”

Religion vs. UFOs

I participated in this exchange on Twitter:

Ross Douthat:

Of what use are our professionally-eccentric, no-heresy-too-wild reasoners like @robinhanson if they assume a priori that “spirits or creatures from other dimensions” are an inherently crazy idea?: https://overcomingbias.com/2021/05/ufos-say-govt-competence-is-either-surprisingly-high-or-surprisingly-low.html

Robin Hanson:

But we don’t want to present ourselves as finding any strange story as equally likely. Yes, we are willing to consider most anything, at least from a good source, & we disagree with others on which stories seem more plausible. But we present ourselves as having standards! 🙂

Me:

I think @DouthatNYT intended to hint that many religious experiences offer arguments for religions that are at least as strong as arguments from UFOs for aliens, and probably stronger.

I agree with him and find both unconvincing.

But find it very impressive you were willing to express those opinions.

Robin Hanson:

You can find videos on best recent evidence for ghosts, which to me looks much less persuasive than versions for UFOs. But evidence for non-ghost spirits, they don’t even bother to make videos for that, as there’s almost nothing.

Me:

It is just not true that there is “almost nothing.” E.g. see the discussion in my post here:

Miracles and Multiple Witnesses

Robin does not respond. Possibly he just does not want to spend more time on the matter. But I think there is also something else going on; engaging with this would suggest to people that he does not “have standards.” It is bad enough for his reputation if he talks about UFOs; it would be much worse if he engaged in a discussion about rosaries turning to gold, which sounds silly to most Catholics, let alone to non-Catholic Christians, people of other religions, and non-religious people.

But I meant what I said in that post, when I said, “these reports should be taken seriously.” Contrary to the debunkers, there is nothing silly about something being reported by thousands of people. It is possible that every one of those reports is a lie or a mistake. Likely, even. But I will not assume that this is the case when no one has even bothered to check.

Scott Alexander is probably one of the best bloggers writing today, and one of the most honest, and to that degree his approach to religious experiences is somewhat better than the usual debunking. For example, although I was unfortunately unable to find the text just now, possibly because it was in a comment (and some of those threads have thousands of comments) and not in a post, he once spoke about the Miracle of the Sun at Fatima, and jokingly called it something like, “a glitch in the matrix.” The implication was that (1) he does not believe in the religious explanation, but nonetheless (2) the typical “debunkings” are just not very plausible. I agree with this. There are some hints that there might be a natural explanation, but the suggestions are fairly stretched compared to the facts.

December 24th, 2010 – January 4th, 2011

What follows is a description of events that happened to me personally in the period named. They are facts. They are not lies. There is no distortion, whether from human memory failures or anything else. The account here is based on detailed records that I made at the time, which I still possess, and which I just reviewed today to ensure that there would be no mistake.

At that time I was a practicing Catholic. On December 24th, 2010, I started a novena to Mary. I was concerned about a friend’s vocation; I believed that they were called to religious life; they had thought the same for a long time but were beginning to change their mind. The intention of the novena was to respond to this situation.

I did not mention this novena to anyone at the time, or to anyone at all before the events described here.

The last day of the novena was January 1st, 2011, a Marian feast day. (It is a typical practice to end a novena on a feast day of the saint to whom the novena is directed.)

On January 4th, 2011, I had a conversation with the same friend. I made no mention at any point during this conversation of the above novena, and there is no way that they could have known about it, or at any rate no way that our debunking friends would consider “ordinary.”

They told me about events that happened to them on January 2nd, 2011.

Note that these events were second hand for me (narrated by my friend) and third hand for any readers this blog might have. This does not matter, however; since my friend had no idea about the novena, even if they were completely making it up (which I in no way believe), it would be nearly as surprising.

When praying a novena, it is typical to expect the “answer to the prayer” on the last day or on the day after, as in an example online:

The Benedictine nuns of St Cecilia’s Abbey on the Isle of Wight (http://www.stceciliasabbey.org.uk) recently started a novena to Fr Doyle with the specific intention of finding some Irish vocations. Anybody with even a passing awareness of the Catholic Church in Ireland is aware that there is a deep vocations crisis. Well, the day after the novena ended, a young Irish lady in her 20’s arrived for a visit at the convent. Today, the Feast of the Immaculate Conception, she will start her time as a postulant at St Cecilia’s Abbey.

Some might dismiss this as coincidence. Those with faith will see it in a different light. Readers can make up their own minds. 

January 2nd, 2011, was the day after my novena ended, and the day to which my friend (unaware of the novena) attributed the following event:

They happened to meet with another person, one who was basically a stranger to them, but met through a mutual acquaintance (mutual to my friend and the stranger; unknown to me.) This person (the stranger) asked my friend to pray with her. She then told my friend that “Our Lady knows that you suffer a lot… She wants you to become a religious and she is afraid that you are going astray…”

Apart from a grammatical change for context, the above sentences are a direct quotation from my friend’s account. Note the relationship with the text I quoted earlier.

To be Continued

I may have more to say about these events, but for now I want to say two things:

(1) These events actually happened. The attitude of the debunkers is that if anything “extraordinary” ever happens, it is at best a psychological experience, not a question of the facts. This is just false, and this is what I referred to when I mentioned their second error in the previous post.

(2) I do not accept a religious explanation of these events (at any rate not in any sense that would imply that a body of religious doctrine is true as a whole.)

The Debunkers

Why are they all blurry?

In a recent article, Michael Shermer says about UFOs:

UFOlogists claim that extraordinary evidence exists in the form of tens of thousands of UFO sightings. But SETI scientist Seth Shostak points out in his book Confessions of an Alien Hunter: A Scientist’s Search for Extraterrestrial Intelligence that this actually argues against UFOs being ETIs, because to date not one of these tens of thousands of sightings has materialized into concrete evidence that UFO sightings equal ETI contact. Lacking physical evidence or sharp and clear photographs and videos, more sightings equals less certainty because with so many unidentified objects purportedly zipping around our airspace we surely should have captured one by now, and we haven’t. And where are all the high-definition photographs and videos captured by passengers on commercial airliners? The aforementioned Navy pilot Ryan Graves told 60 Minutes’ correspondent Bill Whitaker that they had seen UAPs “every day for at least a couple of years.” If true, given that nearly every passenger has a smart phone with a high-definition camera, there should be thousands of unmistakable photographs and videos of these UAPs. To date there is not one. Here, the absence of evidence is evidence of absence.

So you say everything is always vague? There is never any clear evidence?

Richard Carrier accidentally gives the game away when making the same point:

Which leads to the next general principle: notice how real UFO videos (that is, ones that aren’t faked) are always out-of-focus or grainy, fuzzy, or in dim light or infrared or other conditions of extreme ambiguity (you can barely tell even what is being imaged). This is a huge red flag. Exactly as with the errors of human cognition, here we already know we should expect difficulty identifying an object, because we are looking at unclear footage. That “UFOs” always only ever show up in ambiguous footage like this is evidence they are not remarkable. Real alien ships endeavoring to be this visible would have been filmed in much clearer conditions by now. Whereas vehicles able to hide from such filming would never even show up under the conditions of these videos. When you make the conditions so bad you can barely discern obvious things, you have by definition made them so bad you won’t even see less-than-obvious things.

Notice what? “Ones that aren’t faked?” What I notice is that you aren’t actually saying that all UFO reports and videos and so on are vague and unclear. There are plenty of clear ones. You just believe that the clear reports are fake.

Which is fine. You are welcome to believe that. But don’t pretend that all the reports are vague. This drastically reduces the strength of the argument. Your real argument is more or less, “If UFOs were aliens, we would have expected, after all this time, there would be so much evidence that everyone would already have been convinced. But I am not convinced and many people are not convinced. Therefore UFOs must not be aliens.”

Even in its real form, this is not a weak argument. It is actually a pretty good one. It is nonetheless weaker in the case of UFOs than in many other cases where similar arguments are made, because the evidence could easily be reconciled with a situation where the vast majority of UFOs are not aliens, a few or many “clear” cases are hoaxes, and a few clear cases are aliens who typically are attempting to avoid human notice, but who fail or make an exception in a very small number of cases. And in general it is more likely to fail in situations where the phenomena might be very rare, or in situations where something is deliberately hidden (e.g. where there are actual conspiracies.)

The Courage of Robin Hanson

In a sequence of posts beginning around last December, Robin Hanson has been attempting to think carefully about the possibility of UFOs as aliens. In a pair of posts at the end of March, he first presents a list of facts that would need to be explained under that hypothesis, and then in the second presents his proposal to explain those facts.

In the following post, he makes some comments on the fact of having the discussion in the first place:

I’ve noticed that this topic of UFOs makes me feel especially uncomfortable. I look at the many details, and many seem to cry out “there really is something important here.” But I know full well that most people refuse to look at the details, and are quick to denigrate those who do, being confident in getting wide social support when they do.

So I’m forced to choose between my intellectual standards, which say to go where the evidence leads, and my desire for social approval, or at least not extra disapproval. I know which one I’m idealistically supposed to pick, but I also know that I don’t really care as much for picking the things you are supposed to pick as I pretend to myself or others.

We often fantasize about being confronted with a big moral dilemma, so we can prove our morality to ourselves and others. But we should mostly be glad we don’t get what we wish for, as we are often quite wrong about how we would actually act.

This is not merely theoretical. He in fact receives quite a bit of pushback in these posts, some of it rather insulting. For example, in this recent post, someone says in the comments:

When there’s a phenomenon like Bigfoot or the Loch Ness Monster or Alien visitors, believers often point to “all the evidence”. But lots of bad evidence doesn’t equal good evidence! Navy pilots who say they see UFOs “everyday” actually are providing support for the idea that they are misidentifying something mundane. When talking to those who believe in a phenomenon with low plausibility, the best way to start is by saying, “Lets discuss the *single best piece of evidence you have* and then consider other pieces separately.”

I have seen UFO’s twice and each time my brow was furrowed in a vain attempt to understand what I had just witnessed. If I hadn’t simply been lucky enough to see the illusion again from another perspective, each time I would have walked away convinced that I had seen a large, extremely fast craft far away and not a small, slow object quite close to me. And I’m not easy to fool, as I already understand how perspective can be deceiving.

I get the idea that your skeptic skills may be under-exercised compared to the rest of your intellect. I’d recommend reading the Shermer book, “Why People Believe Weird Things” or Sagan’s “The Demon Haunted World.” Both are fun reads.

(5ive)

Robin replies,

Your response style, lecturing me about basics, illustrates my status point. People feel free to treat anyone who isn’t on board with full-skeptical like children in need of a lecture.

The debunkers, who are very often the same few people (note that 5ive refers to a book by Michael Shermer), tend to batch together a wide variety of topics (e.g. “Bigfoot or the Loch Ness Monster or Alien visitors”) as “bunk.” You could describe what these things have in common in various ways, but one of the most evident ways is what makes them count as bunk: There is “lots of bad evidence.” That is, as we noted above about UFOs, there is enough evidence to convince some people, but not enough to convince everyone, and the debunkers suppose this situation is just not believable; if the thing were real, they say, everyone would already know.

As I said, this is a pretty good argument, and this generally holds for the sorts of things the debunkers oppose. But this argument can also easily fail, as it did in the case of the meteorites. While people might accept this as a general remark, it nonetheless takes a great deal of courage to suggest that some particular case might be such a case, since as Robin notes, it automatically counts as low status and causes one to be subject to immediate ridicule.

In any case, whether or not the debunkers are right about UFOs or any other particular case, there are at least two general things that they are definitely mistaken about. One is the idea that people who discuss such topics without complete agreement with them are automatically ridiculous. The second will be the topic of another post.

A Correction Regarding Laplace

A few years ago, I quoted Stanley Jaki on an episode supposedly involving Laplace:

Laplace shouted, “We have had enough such myths,” when his fellow academician Marc-Auguste Pictet urged, in the full hearing of the Académie des Sciences, that attention be given to the report about a huge meteor shower that fell at L’Aigle, near Paris, on April 26, 1803.

I referred to this recently on Twitter. When another user found it surprising that Laplace would have said this, I attempted to track it down, and came to the conclusion that this very account is a “myth” itself, in some sense. Jaki tells the same story in different words in the book Miracles and Physics:

The defense of miracles done with an eye on physics should include a passing reference to meteorites. Characteristic of the stubborn resistance of scientific academies to those strange bits of matter was Laplace’s shouting, “We’ve had enough of such myths,” when Pictet, a fellow academician, urged a reconsideration of the evidence provided by “lay-people” as plain eyewitnesses.

(p. 94)

Jaki provides no reference in God and the Sun at Fatima. The text in Miracles and Physics has a footnote, but it provides generic related information that does not lead back to any such episode.

Did Jaki make it up? People do “just make things up”, but in this case whatever benefit Jaki might get from it would seem to be outweighed by the potential reputational damage of being discovered in such a lie, so it seems unlikely. More likely he is telling a story from memory, with the belief that the details just don’t matter very much. And since he provides plenty of other sources, I am sure he knows full well that he is omitting any source here, presumably because he does not have one at hand. He may even be trying to cover up this omission, in a sense, by footnoting the passage with information that does not source it. It seems likely that the story is a lecture hall account that has been modified by the passage of time. One reason to suppose such a source is that Jaki is not alone in the claim that Laplace opposed the idea of meteorites as stones from the sky until 1803. E.T. Jaynes, in Probability Theory: The Logic of Science, makes a similar claim:

Note that we can recognize the clear truth of this psychological phenomenon without taking any stand about the truth of the miracle; it is possible that the educated people are wrong. For example, in Laplace’s youth educated persons did not believe in meteorites, but dismissed them as ignorant folklore because they are so rarely observed. For one familiar with the laws of mechanics the notion that “stones fall from the sky” seemed preposterous, while those without any conception of mechanical law saw no difficulty in the idea. But the fall at Laigle in 1803, which left fragments studied by Biot and other French scientists, changed the opinions of the educated — including Laplace himself. In this case the uneducated, avid for the marvelous, happened to be right: c’est la vie.

(p. 505)

Like Jaki, Jaynes provides no source. Still, is that good enough reason to doubt the account? Let us examine a text from the book The History of Meteoritics and Key Meteorite Collections. In the article, “Meteorites in history,” Ursula Marvin remarks:

Early in 1802 the French mathematician Pierre-Simon de Laplace (1749-1827) raised the question at the National Institute of a lunar volcanic origin of fallen stones, and quickly gained support for this idea from two physicist colleagues Jean Baptiste Biot (1774-1862) and Siméon-Denis Poisson (1781-1840). The following September, Laplace (1802, p. 277) discussed it in a letter to von Zach.

The idea won additional followers when Biot (1803a) referred to it as ‘Laplace’s hypothesis’, although Laplace, himself, never published an article on it.

(p.49)

This has a source for Laplace’s letter of 1802, although I was not able to find it online. It seems very unlikely that Laplace would have speculated on meteorites as coming from lunar volcanos in 1802, and then called them “myths” in 1803. So where does this story come from? In Cosmic Debris: Meteorites in History, John Burke gives this account:

There is also a problem with respect to the number of French scientists who, after Pictet published a résumé of Howard’s article in the May 1802 issue of the Bibliothèque Britannique, continued to oppose the idea that stones fell from the atmosphere. One can infer from a statement of Lamétherie that there was considerable opposition, for he reported that when Pictet read a memoir to the Institut on the results of Howard’s report “he met with such disfavor that it required a great deal of fortitude for him to finish his reading.” However, Biot’s description of the session varies a good deal. Pictet’s account, he wrote, was received with a “cautious eagerness,” though the “desire to explain everything” caused the phenomenon to be rejected for a long time. There were, in fact, only three scientists who publicly expressed their opposition: the brothers Jean-André and Guillaume-Antoine Deluc of Geneva, and Eugène Patrin, an associate member of the mineralogy section of the Institut and librarian at the École des mines.

When Pictet early in 1801 published a favorable review of Chladni’s treatise, it drew immediate fire from the Deluc brothers. Jean, a strict Calvinist, employed the same explanation of a fall that the Fougeroux committee had used thirty years before: stones did not fall; the event was imagined when lightning struck close to the observer. Just as no fragment of our globe separate and become lost in space, he wrote, fragments could not be detached from another planet. It was also very unlikely that solid masses had been wandering in space since the creation, because they would have long since fallen into the sphere of attraction of some planet. And even if they did fall, they would penetrate the earth to a great depth and shatter into a thousand pieces.

(p.51)

It seems quite possible that Pictet’s “reading a memoir” here and “meeting with disfavor” (regardless of details, since Burke notes it had different descriptions at the time) is the same incident that Jaki describes as having been met with “We’ve had enough of such myths!” when Pictet “urged a reconsideration of the evidence.” If these words were ever said, then, they were presumably said by one of these brothers or someone else, and not by Laplace.

How does this sort of thing happen, if we charitably assume that Jaki was not being fundamentally dishonest? As stated above, it seems likely that he knew he did not have a source. He may even have been consciously aware that it might not have been Laplace who made this statement, if anyone did. But he was sure there was a dispute about the matter, and presumably thought that it just wasn’t too important who it was or what the details of the situation were, since the main point was that scientists are frequently reluctant to accept facts when those facts occur rarely and are not deliberately reproducible. And if we reduce Jaki’s position to these two things, namely, (1) that scientists at one point disputed the reality of meteorites, and (2) that this sort of thing frequently happens with rare and hard to reproduce phenomena, then the position is accurate.

But this behavior, the description of situations with the implication that the details just don’t matter much, is very bad, and directly contributes to the reluctance of many scientists to accept the reality of “extraordinary” phenomena, even in situations where they are, in fact, real.