Eliezer Yudkowsky on AlphaGo

On his Facebook page, during the Go match between AlphaGo and Lee Sedol, Eliezer Yudkowsky writes: At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for *probability of long-term victory* rather than playing for points, the fight against Sedol generates … Continue reading Eliezer Yudkowsky on AlphaGo
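The distinction Yudkowsky draws, optimizing the *probability* of victory rather than the margin of points, can be illustrated with a toy move-selection sketch. The move names and numbers below are purely hypothetical and do not come from the post:

```python
# Toy illustration: two candidate moves with hypothetical evaluations.
# A margin-maximizer and a win-probability-maximizer can disagree:
# a "safe" move may win by fewer points but far more reliably.
moves = {
    "aggressive": {"expected_margin": 12.0, "win_probability": 0.78},
    "safe":       {"expected_margin": 1.5,  "win_probability": 0.94},
}

by_points = max(moves, key=lambda m: moves[m]["expected_margin"])
by_win_prob = max(moves, key=lambda m: moves[m]["win_probability"])

print(by_points)    # the point-maximizer prefers the aggressive move
print(by_win_prob)  # the win-probability player prefers the safe move
```

On this picture, a player of the second kind will happily trade away points for certainty, which is one proposed explanation for why AlphaGo's wins can look unambitious move by move.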

Some Remarks on GPT-N

At the end of May, OpenAI published a paper on GPT-3, a language model which is a successor to their previous version, GPT-2. While quite impressive, the reaction from many people interested in artificial intelligence has been seriously exaggerated. Sam Altman, OpenAI’s CEO, has said as much himself: The GPT-3 hype is way too much. … Continue reading Some Remarks on GPT-N

Chronological Archives

2015
7/5 – Why Useless?
7/6 – In Forty Days Nineveh Will be Destroyed
7/7 – Beati Mundo Corde
7/8 – Politically Incorrect Algorithms
7/9 – The Null Hypothesis
7/10 – Conspiracy Theories
7/11 – Are Hyperlinks a Bad Idea?
7/12 – Privacy
7/13 – Pope Francis and Proselytization
7/14 – The Order of the World
7/15 – The Progress of Humanity and History
7/16 – … Continue reading Chronological Archives


Discount Rates

Eliezer Yudkowsky some years ago made this argument against temporal discounting: I’ve never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences – as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that … Continue reading Discount Rates
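For context, temporal discounting in its standard exponential form values a utility received t periods from now at δᵗ times its face value; Yudkowsky's point concerns whether a δ < 1 should appear in our *pure preferences* at all, as opposed to emerging from interest rates, opportunity costs, or catastrophe risk. A minimal sketch with illustrative numbers:

```python
# Exponential discounting: present value of a utility stream u_0..u_T
# at discount factor delta (delta = 1.0 means no pure time preference).
def present_value(utilities, delta):
    return sum(u * delta**t for t, u in enumerate(utilities))

stream = [10.0, 10.0, 10.0]        # the same utility in each period
print(present_value(stream, 1.0))  # undiscounted: 30.0
print(present_value(stream, 0.9))  # delta = 0.9: 10 + 9 + 8.1 = 27.1
```

The question in the quoted passage is whether the second calculation ever correctly describes what we should want, or whether any apparent discount rate should always decompose into the non-preference factors listed.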

Really and Truly True

There are two persons in a room with a table between them. One says, “There is a table on the right.” The other says, “There is a table on the left.” Which person is right? The obvious answer is that both are right. But suppose they attempt to make this into a metaphysical disagreement. “Yes, … Continue reading Really and Truly True

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo's algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different … Continue reading Artificial Unintelligence
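The post's own simplified description is truncated here, but the general shape of such algorithms, score each candidate action with a learned evaluator and take the highest-scoring one, can be sketched generically. This is an assumption-laden sketch of that general pattern, not the post's actual description; the `value_estimate` function stands in for a learned evaluator and is purely hypothetical:

```python
# Generic "act to maximize estimated win probability" selection, in the
# spirit of game-playing AIs like AlphaGo. The evaluator is a stand-in
# for whatever learned value function the real system would use.
def choose_move(position, legal_moves, value_estimate):
    """Pick the legal move whose resulting position scores highest."""
    return max(legal_moves, key=lambda m: value_estimate(position, m))

# Toy usage with a fake evaluator that just looks up canned scores.
scores = {"a": 0.41, "b": 0.87, "c": 0.55}
best = choose_move("pos", ["a", "b", "c"], lambda p, m: scores[m])
print(best)  # "b"
```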

Minimizing Motivated Beliefs

In the last post, we noted that there is a conflict between the goal of accurate beliefs about your future actions, and your own goals about your future. More accurate beliefs will not always lead to a better fulfillment of those goals. This implies that you must be ready to engage in a certain amount … Continue reading Minimizing Motivated Beliefs

Wishful Thinking about Wishful Thinking

Cameron Harwick discusses an apparent relationship between “New Atheism” and group selection: Richard Dawkins’ best-known scientific achievement is popularizing the theory of gene-level selection in his book The Selfish Gene. Gene-level selection stands apart from both traditional individual-level selection and group-level selection as an explanation for human cooperation. Steven Pinker, similarly, wrote a long article … Continue reading Wishful Thinking about Wishful Thinking