Technology and Culture

The last two posts have effectively answered the question raised about Scott Alexander’s account of cultural decline: what could be meant by calling some aspects of culture “less compatible with modern society”? Society tends to change over time, and some of those changes are humanly irreversible. It is entirely possible, and in fact common, for some of those irreversible changes to stand in tension with various elements of culture. This will necessarily tend to cause cultural decay, at least with respect to those elements, and often with respect to other elements of culture as well, since the various aspects of culture are related.

This happens in a particular way with changes in technology, although technology is not the only driver of such irreversible change.

It would be extremely difficult for individuals to opt out of the use of various technologies. For example, it would be quite difficult for Americans to give up the use of plumbing and heating, and a serious attempt to do so might lead to illness or death in many cases. And it would be still more difficult to give up the use of clothes, money, and language. Attempting to do so, assuming that one managed to preserve one’s physical life, would likely lead to imprisonment or other forms of institutionalization (which would make it that much more difficult to abandon the use of clothes).

Someone might well respond here, “Wait, why are you bringing up clothes, money, and language as examples of technology?” Clothes and money seem more like cultural institutions than technology in the first place; and language seems to be natural to humans.

I have already spoken of language as a kind of technology. And with regard to clothes and money, it is even more evident that in the concrete forms in which they exist in our world today they are tightly intertwined with various technologies. The cash used in the United States depends on mints and printing presses, actual mechanical technologies. And if one wishes to buy something without cash, this usually depends on still more complex technology. Similar things are true of the clothes that we wear.

I concede, of course, that the use of these things is different from the use of the machines that make them, or, as in the case of credit cards, that support their use, although there is less of a distinction in the latter case. But I deliberately brought up things which look like purely cultural institutions in order to note their relationship with technology, because we are discussing the manner in which technological change can result in cultural change. Technology and culture are tightly intertwined, and can never be wholly separated.

Sarah Perry discusses this (the whole post is worth reading):

Almost every technological advance is a de-condensation: it abstracts a particular function away from an object, a person, or an institution, and allows it to grow separately from all the things it used to be connected to. Writing de-condenses communication: communication can now take place abstracted from face-to-face speech. Automobiles abstract transportation from exercise, and allow further de-condensation of useful locations (sometimes called sprawl). Markets de-condense production and consumption.

Why is technology so often at odds with the sacred? In other words, why does everyone get so mad about technological change? We humans are irrational and fearful creatures, but I don’t think it’s just that. Technological advances, by their nature, tear the world apart. They carve a piece away from the existing order – de-condensing, abstracting, unbundling – and all the previous dependencies collapse. The world must then heal itself around this rupture, to form a new order and wholeness. To fear disruption is completely reasonable.

The more powerful the technology, the more unpredictable its effects will be. A technological advance in the sense of a de-condensation is by its nature something that does not fit in the existing order. The world will need to reshape itself to fit. Technology is a bad carver, not in the sense that it is bad, but in the sense of Socrates:

First, the taking in of scattered particulars under one Idea, so that everyone understands what is being talked about … Second, the separation of the Idea into parts, by dividing it at the joints, as nature directs, not breaking any limb in half as a bad carver might.

Plato, Phaedrus, 265D, quoted in Notes on the Synthesis of Form, Christopher Alexander.

The most powerful technological advances break limbs in half. They cut up the world in an entirely new way, inconceivable in the previous order.

Now someone, arguing much in Chesterton’s vein, might say that this does not have to happen. If a technology is damaging in this way, then just don’t use it. The problem is that often one does not have a realistic choice not to use it, as in my examples above. And still less does one have a choice about interacting with people who use the new technology, and interacting with those people will itself change the way that life works. And as Robin Hanson noted, there is no global human power that decides whether or not a technology gets introduced into human society. This happens rather by the uncoordinated and unplanned decisions of individuals.

And this is sufficient to explain the tendency towards cultural decline. The constant progress of technology results, and results of necessity, in constant cultural decline. And thus we fools understand why the former days were better than these.

Turning Back the Clock

Let’s look again at the center of Chesterton’s argument about turning back the clock:

There is one metaphor of which the moderns are very fond; they are always saying, “You can’t put the clock back.” The simple and obvious answer is “You can.” A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.

Of course, one can physically turn a clock back. But as Chesterton notes, the idea that “you can’t put the clock back” is a metaphor, not a literal statement. The metaphor is based on the idea that you can’t travel to the past, and this is literally true, fortunately or unfortunately. The one who uses the metaphor intends to assert something stronger, however, and it is this stronger thing that Chesterton wishes to refute when he says, “Society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.”

Yes, the human finger can turn back the clock. But what corresponds to “the human finger” in the case of society? Who or what has the power to reconstruct society upon any plan that has ever existed?

As soon as we ask the question, the answer is clear. Society has never been constructed upon any plan whatsoever; so neither can it be reconstructed upon any plan whatsoever. As Robin Hanson puts it, “no one rules the world,” so there is no way to construct society according to a plan in the first place. In particular, Hanson remarks regarding technology:

This seems especially true regarding the consequences of new tech. So far in history tech has mostly appeared whenever someone somewhere has wanted it enough, regardless of what the rest of the world thought. Mostly, no one has been driving the tech train. Sometimes we like the result, and sometimes we don’t. But no one rules the world, so these results mostly just happen either way.

Chesterton is free, as he says, to propose anything he likes, including bringing back the stage coaches. But we are also free to propose that the world would be better off if horses walked on their hind legs. The plans will meet with approximately equal success: getting the world to abandon automobiles and adopt stage coaches will not be much easier than getting horses to follow our suggestions.

Indeed, it is not impossible to bring back the stage coaches in the way that “bringing back last Friday” is impossible. But neither is it impossible in that way for horses to walk on their hind legs. Nonetheless both are impossible in the sense in which physically turning back a clock is possible: no human being can either bring back the stage coaches or convince horses to walk on their hind legs, even though a human being can turn back a clock. One might have occasional success with either plan, but not overall success.

 

Scott Alexander on the Decline of Culture

From Scott Alexander’s Tumblr:

voximperatoris:

[This post is copied over from Stephen Hicks.]

An instructive series of quotations, collected over the years, on the theme of pessimism about the present in relation to the past:

Plato, 360 BCE: “In that country [Egypt] arithmetical games have been invented for the use of mere children, which they learn as pleasure and amusement. I have late in life heard with amazement of our ignorance in these matters [science in general]; to me we appear to be more like pigs than men, and I am quite ashamed, not only of myself, but of all Greeks.” (Laws, Book VII)

Catullus, c. 60 BCE: “Oh, this age! How tasteless and ill-bred it is!”

Sallust, 86– c. 35 BCE: “to speak of the morals of our country, the nature of my theme seems to suggest that I go farther back and give a brief account of the institutions of our forefathers in peace and in war, how they governed the commonwealth, how great it was when they bequeathed it to us, and how by gradual changes it has ceased to be the noblest and best, and has become the worst and most vicious.” About Rome’s forefathers: “good morals were cultivated at home and in the field; there was the greatest harmony and little or no avarice; justice and probity prevailed among them.” They “adorned the shrines of the gods with piety, their own homes with glory, while from the vanquished they took naught save the power of doing harm.” But Rome now is a moral mess: “The men of to‑day, on the contrary, basest of creatures, with supreme wickedness are robbing our allies of all that those heroes in the hour of victory had left them; they act as though the one and only way to rule were to wrong.” (The Catiline War)

Horace, c. 23-13 BCE: “Our fathers, viler than our grandfathers, begot us who are viler still, and we shall bring forth a progeny more degenerate still.” (Odes 3:6)

Alberti, 1436: Nature is no longer producing great intellects — “or giants which in her youthful and more glorious days she had produced so marvelously and abundantly.” (On Painting)

Peter Paul Rubens, c. 1620: “For what else can our degenerate race do in this age of error. Our lowly disposition keeps us close to the ground, and we have declined from that heroic genius and judgment of the ancients.”

Mary Wollstonecraft, c. 1790: “As from the respect paid to property flow, as from a poisoned fountain, most of the evils and vices which render this world such a dreary scene to the contemplative mind.”

William Wordsworth, 1802:
“Milton! thou should’st be living at this hour:
England hath need of thee: she is a fen
Of stagnant waters: altar, sword, and pen,
Fireside, the heroic wealth of hall and bower,
Have forfeited their ancient English dower
Of inward happiness. We are selfish men;
Oh! raise us up, return to us again;
And give us manners, virtue, freedom, power.”
(“London”)

John Stuart Mill, in 1859, speaking of his generation: “the present low state of the human mind.” (On Liberty, Chapter 3)

Friedrich Nietzsche, in 1871: “What else, in the desolate waste of present-day culture, holds any promise of a sound, healthy future? In vain we look for a single powerfully branching root, a spot of earth that is fruitful: we see only dust, sand, dullness, and languor” (Birth of Tragedy, Section 20).

Frederick Taylor, 1911: “We can see our forests vanishing, our water-powers going to waste, our soil being carried by floods into the sea; and the end of our coal and our iron is in sight.” (Scientific Management)

T. S. Eliot, c. 1925: “We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.”

So has the world really been in constant decline? Or perhaps, as Gibbon put it in The Decline and Fall of the Roman Empire (1776): “There exists in human nature a strong propensity to depreciate the advantages, and to magnify the evils, of the present times.”

Words to keep in mind as we try to assess objectively our own generation’s serious problems.

I hate this argument. It’s the only time I ever see “Every single person from history has always believed that X is true” used as an argument *against* X.

I mean, imagine that I listed Thomas Aquinas as saying “Technology sure has gotten better the past few decades,” and then Leonardo da Vinci, “Technology sure has gotten better the past few decades”. Benjamin Franklin, “Technology sure has gotten better the past few decades”. Abraham Lincoln, “Technology sure has gotten better the past few decades.” Henry Ford, “Technology sure has gotten better the past few decades.”

My conclusion – people who think technology is advancing now are silly, there’s just some human bias toward always believing technology is advancing.

In the same way technology can always be advancing, culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society. Like technology, this isn’t a monotonic process – there will be disruptions every time one civilization collapses and a new one begins, and occasional conscious attempts by whole societies to reverse the trend, but in general, given movement from time t to time t+1, people can correctly notice cultural decline.

I mean, really. If, like Nietzsche, your thing is the BRUTE STRENGTH of the valiant warrior, do you think that the modern office worker has exactly as much valiant warrior spirit as the 19th century frontiersman? Do you think the 19th century frontiersman had as much as the medieval crusader? Do you think the medieval crusader had as much as the Spartans? Pinker says the world is going from a state of violence to a state of security, and the flip side of that is people getting, on average, more domesticated and having less of the wild free spirit that Nietzsche idealized.

Likewise, when people talk about “virtue”, a lot of the time they’re talking about chastity and willingness to remain faithful in a monogamous marriage for the purpose of procreation. And a lot of the time they don’t even mean actual chastity, they mean vocal public support for chastity and social norms demanding it. Do you really believe our culture has as much of that as previous cultures do? Remember, the sort of sharia law stuff that we find so abhorrent and misogynist was considered progressive during Mohammed’s time, and with good reason.

I would even argue that Alberti is right about genius. There are certain forms of genius that modern society selects for and certain ones it selects against. Remember, before writing became common, the Greek bards would have mostly memorized Homer. I think about the doctors of past ages, who had amazing ability to detect symptoms with the naked eye in a way that almost nobody now can match because we use CT scan instead and there’s no reason to learn this art. (Also, I think modern doctors have much fewer total hours of training than older doctors, because as bad as today’s workplace-protection/no-overtime rules are, theirs were worse)

And really? Using the fact that some guy complained of soil erosion as proof that nobody’s complaints are ever valid? Soil erosion is a real thing, it’s bad, and AFAIK it does indeed keep getting worse.

More controversially, if T.S. Eliot wants to look at a world that over four hundred years, went from the Renaissance masters to modern art, I am totally okay with him calling that a terrible cultural decline.

Scott’s argument is plausible, although he seems somewhat confused insofar as he appears to associate Mohammed with monogamy. And since we are discussing the matter with an interlocutor who maintains that the decline of culture is obvious, we will concede the point immediately. Scott seems a bit ambivalent in regard to whether a declining culture is a bad thing, but we will concede that as well, other things being equal.

However, we do not clearly see an answer here to one of the questions raised in the last post: if culture tends to decline, why does this happen? Scott seems to suggest an answer when he says, “Culture can always be declining, for certain definitions of culture that emphasize the parts less compatible with modern society.” According to this, culture tends to decline because it becomes incompatible with modern society. The problem with this is that it seems to be a “moronic pseudo-reason”: 2017 is just one year among others. So no parts of culture should be less compatible with life in 2017 than with life in 1017, or in any other year. Chesterton makes a similar argument:

We often read nowadays of the valor or audacity with which some rebel attacks a hoary tyranny or an antiquated superstition. There is not really any courage at all in attacking hoary or antiquated things, any more than in offering to fight one’s grandmother. The really courageous man is he who defies tyrannies young as the morning and superstitions fresh as the first flowers. The only true free-thinker is he whose intellect is as much free from the future as from the past. He cares as little for what will be as for what has been; he cares only for what ought to be. And for my present purpose I specially insist on this abstract independence. If I am to discuss what is wrong, one of the first things that are wrong is this: the deep and silent modern assumption that past things have become impossible. There is one metaphor of which the moderns are very fond; they are always saying, “You can’t put the clock back.” The simple and obvious answer is “You can.” A clock, being a piece of human construction, can be restored by the human finger to any figure or hour. In the same way society, being a piece of human construction, can be reconstructed upon any plan that has ever existed.

There is another proverb, “As you have made your bed, so you must lie on it”; which again is simply a lie. If I have made my bed uncomfortable, please God I will make it again. We could restore the Heptarchy or the stage coaches if we chose. It might take some time to do, and it might be very inadvisable to do it; but certainly it is not impossible as bringing back last Friday is impossible. This is, as I say, the first freedom that I claim: the freedom to restore. I claim a right to propose as a solution the old patriarchal system of a Highland clan, if that should seem to eliminate the largest number of evils. It certainly would eliminate some evils; for instance, the unnatural sense of obeying cold and harsh strangers, mere bureaucrats and policemen. I claim the right to propose the complete independence of the small Greek or Italian towns, a sovereign city of Brixton or Brompton, if that seems the best way out of our troubles. It would be a way out of some of our troubles; we could not have in a small state, for instance, those enormous illusions about men or measures which are nourished by the great national or international newspapers. You could not persuade a city state that Mr. Beit was an Englishman, or Mr. Dillon a desperado, any more than you could persuade a Hampshire Village that the village drunkard was a teetotaller or the village idiot a statesman. Nevertheless, I do not as a fact propose that the Browns and the Smiths should be collected under separate tartans. Nor do I even propose that Clapham should declare its independence. I merely declare my independence. I merely claim my choice of all the tools in the universe; and I shall not admit that any of them are blunted merely because they have been used.

Four Minutes and Thirty-Three Seconds of Regress

Someone might respond to what I have said about progress in the following way:

So how come you talk about progress in technology and progress in truth, but do not talk about the progress of culture? Is it not because as soon as one considers the idea, it constitutes the refutation of your arguments? Consider 4’33”, or much of modern art in general. Or again, consider the liturgical changes after the Second Vatican Council. Nor are these issues limited to artistic matters, since we could mention many matters of morality, or various cultural institutions. It is not even necessary to mention examples, so obvious is all of this, once one even considers the idea of the progress or regress of culture.

There is some truth to this, and it is worthy of serious consideration.

This or Nothing

In his homily on June 9th, Pope Francis spoke against excessively rigid views:

This (is the) healthy realism of the Catholic Church: the Church never teaches us ‘or this or that.’ That is not Catholic. The Church says to us: ‘this and that.’ ‘Strive for perfectionism: reconcile with your brother. Do not insult him. Love him. And if there is a problem, at the very least settle your differences so that war doesn’t break out.’ This (is) the healthy realism of Catholicism. It is not Catholic (to say) ‘or this or nothing:’ This is not Catholic, this is heretical. Jesus always knows how to accompany us, he gives us the ideal, he accompanies us towards the ideal, He frees us from the chains of the laws’ rigidity and tells us: ‘But do that up to the point that you are capable.’ And he understands us very well. He is our Lord and this is what he teaches us.

“Or this or that” and “Or this or nothing” are probably excessively literal translations of the Italian, which would actually mean “either this or that,” and “either this or nothing.”

It is a bit odd to speak of such views as “heretical,” since it would be hard to find a determinate doctrine here that might be true or false. Rather, the Pope speaks of an attitude, and is condemning it as a bad attitude, not only morally, but as leading one into error intellectually as well. We have seen various people with views and attitudes that would likely fit under this categorization: thus for example Fr. Brian Harrison maintains that a person cannot accept both Christianity and evolution. James Larson maintains that disagreement with his theological and philosophical positions amounts to a “war against being,” thus asserting “either this or nothing” in a pretty immediate sense. Alexander Pruss maintains that either there was a particular objective moment when Queen Elizabeth passed from not being old to being old, or logic is false. We have seen a number of other examples.

The attitude is fairly common among Catholic traditionalists (of which Fr. Brian Harrison and James Larson are in fact examples). Thus it is not surprising that the blog Rorate Caeli, engaging in exactly the “this or nothing” attitude that Pope Francis condemns, condemns Pope Francis’s statements as heretical:

(1) Either John Paul II and all the Popes who came before him are right, by emphasizing the “absoluteness” of the Church’s moral law and by classifying as a “very serious error” that the doctrine of the Church is only an “ideal”…

…or (2) Francis is right, by qualifying as “heretical” a rejection of the “Doctrine of the Ideal” as well as any affirmation of the absoluteness of moral prohibitions (‘or this or nothing’).

Regardless of the accusations of heresy on either side, however, Pope Francis is basically right in rejecting the attitude in question. I have spoken elsewhere about the fact that in discussion, one should try to look for what is true in the other person’s position. The most basic reason for this, of course, is that there is almost always some truth there. The attitude of “this or nothing” is basically a refusal to consider the truth in the other person’s position.

Strangely, as we will see in future posts, this turns out to be relevant to our discussion of elements.

[On another matter, a public service announcement: If you occasionally use a taxi, or might occasionally do so in the future, and you are not signed up with Uber, you should do so. Call a traditional taxi, and they will tell you they will be there in 20-30 minutes. They will actually be there in 45-60 minutes, and possibly not at all. With Uber, all it takes is a few clicks, and you will have a ride in 5-10 minutes. While it is on the way, you know the exact location of your ride and can communicate with your driver in advance as needed. And as far as I can tell, the price is about the same.

There is also another reason for this advertisement. If you sign up with Uber using the promo code 6p1nbwapue , you and I will both receive $20 of credit. This only works if you actually use the service at least once, however.]

Language as Technology

Genesis tells the story of the Tower of Babel:

Now the whole earth had one language and the same words. And as they migrated from the east, they came upon a plain in the land of Shinar and settled there. And they said to one another, “Come, let us make bricks, and burn them thoroughly.” And they had brick for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city, and a tower with its top in the heavens, and let us make a name for ourselves; otherwise we shall be scattered abroad upon the face of the whole earth.” The Lord came down to see the city and the tower, which mortals had built. And the Lord said, “Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.” So the Lord scattered them abroad from there over the face of all the earth, and they left off building the city. Therefore it was called Babel, because there the Lord confused the language of all the earth; and from there the Lord scattered them abroad over the face of all the earth.

The account suggests that language is a cause of technology, as when the Lord says, “this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them.”

But it is possible to understand language here as a technology itself, one which gives rise to other technologies. It is a technology by which men communicate with each other. In the story, God weakens the technology, making it harder for people to communicate with one another, and therefore making it harder for them to accomplish other goals.

But language is not just a technology that exists for the sake of communication; it is also a technology that exists for the sake of thought. As I noted in the linked post, our ability to think depends to some extent on our possession of language.

All of this suggests that in principle, the idea of technological progress is something that could apply to language itself, and that such progress could correspondingly be a cause of progress in truth. The account in Genesis suggests some of the ways that this could happen; to the degree that people develop better means of understanding one another, whether we speak of people speaking different languages, or even people already speaking the same language, they will be better able to work together towards the goal of truth, and thus will be better able to attain that goal.

 

Eliezer Yudkowsky on AlphaGo

On his Facebook page, during the Go match between AlphaGo and Lee Sedol, Eliezer Yudkowsky writes:

At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for *probability of long-term victory* rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol’s probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game ‘eventually’ shifted to AlphaGo later, may simply have failed to read the board’s true state. The reality may be a slow, steady diminishment of Sedol’s win probability as the game goes on and Sedol makes subtly imperfect moves that *humans* think result in even-looking boards. (E.g., the analysis in https://gogameguru.com/alphago-shows-true-strength-3rd-vic…/ )

For all we know from what we’ve seen, AlphaGo could win even if Sedol were allowed a one-stone handicap. But AlphaGo’s strength isn’t visible to us – because human pros don’t understand the meaning of AlphaGo’s moves; and because AlphaGo doesn’t care how many points it wins by, it just wants to be utterly certain of winning by at least 0.5 points.

IF that’s what was happening in those 3 games – and we’ll know for sure in a few years, when there’s multiple superhuman machine Go players to analyze the play – then the case of AlphaGo is a helpful concrete illustration of these concepts:

He proceeds to suggest that AlphaGo’s victories confirm his various philosophical positions concerning the nature and consequences of AI. Among other things, he says,

Since Deepmind picked a particular challenge time in advance, rather than challenging at a point where their AI seemed just barely good enough, it was improbable that they’d make *exactly* enough progress to give Sedol a nearly even fight.

AI is either overwhelmingly stupider or overwhelmingly smarter than you. The more other AI progress and the greater the hardware overhang, the less time you spend in the narrow space between these regions. There was a time when AIs were roughly as good as the best human Go-players, and it was a week in late January.

In other words, according to his account, it was basically certain that AlphaGo would either be much better than Lee Sedol, or much worse than him. After Eliezer’s post, of course, AlphaGo lost the fourth game.

Eliezer responded on his Facebook page:

That doesn’t mean AlphaGo is only slightly above Lee Sedol, though. It probably means it’s “superhuman with bugs”.

We might ask what “superhuman with bugs” is supposed to mean. Deepmind explains their program:

We train the neural networks using a pipeline consisting of several stages of machine learning (Figure 1). We begin by training a supervised learning (SL) policy network, pσ, directly from expert human moves. This provides fast, efficient learning updates with immediate feedback and high quality gradients. Similar to prior work, we also train a fast policy pπ that can rapidly sample actions during rollouts. Next, we train a reinforcement learning (RL) policy network, pρ, that improves the SL policy network by optimising the final outcome of games of self-play. This adjusts the policy towards the correct goal of winning games, rather than maximizing predictive accuracy. Finally, we train a value network vθ that predicts the winner of games played by the RL policy network against itself. Our program AlphaGo efficiently combines the policy and value networks with MCTS.

In essence, like all such programs, AlphaGo is approximating a function. Deepmind describes the function being approximated: “All games of perfect information have an optimal value function, v*(s), which determines the outcome of the game, from every board position or state s, under perfect play by all players.”

What would a “bug” in a program like this be? The mere fact that the program does not play perfectly would not be a bug, since no program will play perfectly. One could reasonably describe the program as having bugs only if it does not actually play the moves recommended by its approximation.

And it is easy to see that it is quite unlikely that this is the case for AlphaGo. All programs have bugs, surely including AlphaGo. So there might be bugs that would crash the program under certain circumstances, or bugs that cause it to move more slowly than it should, or the like. But that it would randomly perform moves that are not recommended by its approximation function is quite unlikely. If there were such a bug, it would likely apply all the time, and thus the program would play consistently worse. And so it would not be “superhuman” at all.
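To make the distinction concrete, here is a minimal, hypothetical sketch of what “playing the move recommended by its approximation” means; none of the names below are DeepMind’s, and the real system combines a value network with Monte Carlo tree search rather than a single evaluation call. The point is only that a bug in the relevant sense would be a failure to play the highest-valued move, not the mere fact that the values themselves are imperfect.

```python
# Hypothetical sketch, not DeepMind's code: move selection driven by a learned
# estimate of the probability of victory. "value_estimate" stands in for the
# search-backed evaluation that the real program computes with MCTS.

from typing import Callable, List, Tuple

Board = Tuple[str, ...]   # some immutable representation of the position
Move = int                # index of a legal move


def choose_move(board: Board,
                legal_moves: List[Move],
                apply_move: Callable[[Board, Move], Board],
                value_estimate: Callable[[Board], float]) -> Move:
    """Return the legal move leading to the position with the highest estimated
    win probability, where value_estimate(position) is assumed to give the
    probability of victory for the player who has just moved."""
    best_move, best_value = legal_moves[0], float("-inf")
    for move in legal_moves:
        value = value_estimate(apply_move(board, move))
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

If the program ever played something other than this argmax (apart from any deliberate randomization), that would be a bug in the relevant sense; and such a bug would be just as likely to hurt its play in winning positions as in losing ones.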

In fact, Deepmind has explained how AlphaGo lost the fourth game:

To everyone’s surprise, including ours, AlphaGo won four of the five games. Commentators noted that AlphaGo played many unprecedented, creative, and even “beautiful” moves. Based on our data, AlphaGo’s bold move 37 in Game 2 had a 1 in 10,000 chance of being played by a human. Lee countered with innovative moves of his own, such as his move 78 against AlphaGo in Game 4—again, a 1 in 10,000 chance of being played—which ultimately resulted in a win.

In other words, the computer lost because it did not expect Lee Sedol’s move, and thus did not sufficiently consider the situation that would follow. AlphaGo proceeded to play a number of fairly bad moves in the remainder of the game. This does not require any special explanation implying that it was not following the recommendations of its usual strategy. As David Wu comments on Eliezer’s page:

The “weird” play of MCTS bots when ahead or behind is not special to AlphaGo, and indeed appears to have little to do with instrumental efficiency or such. The observed weirdness is shared by all MCTS Go bots and has been well-known ever since they first came on to the scene back in 2007.

In particular, Eliezer may not understand the meaning of the statement that AlphaGo plays to maximize its probability of victory. This does not mean maximizing an overall rational estimate of its chances of winning, given all of the circumstances, the board position, and its opponent. The program does not have such an estimate, and if it did, it would not change much from move to move. For example, with this kind of estimate, if Lee Sedol played a move apparently worse than it expected, rather than changing this estimate much, it would change its estimate of the probability that the move was a good one, and the probability of victory would remain relatively constant. Of course it would change slowly as the game went on, but it would be unlikely to change much after an individual move.

The actual “probability of victory” that the machine estimates is somewhat different. It is a learned estimate based on playing against itself. This can change somewhat more easily, and it is independent of the fact that it is playing a particular opponent; it is based on the board position alone. In its self-training, it may have rarely won starting from an apparently losing position, and this may have happened mainly by “luck,” not by good play. If this is the case, it is reasonable that its moves would be worse in a losing position than in a winning position, without any need to say that there are bugs in the algorithm. Psychologically, one might compare this to the case of a man in love with a woman who continues to attempt to maximize his chances of marrying her, after she has already indicated her unwillingness: he may engage in very bad behavior indeed.
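The contrast can be put in a hypothetical sketch (the function names are mine, not DeepMind’s): the quantity AlphaGo actually tracks depends on the board alone, while the “overall rational estimate” described above would also depend on a model of the particular opponent, so that a surprising move would mostly update the opponent model rather than the win probability.

```python
# Hypothetical illustration of the two senses of "probability of victory";
# neither function is real AlphaGo code.

from typing import Callable, Sequence


def self_play_value(board, value_net: Callable[[object], float]) -> float:
    """AlphaGo-style estimate: a function of the position alone, learned from
    self-play. It knows nothing about the particular opponent, so a surprising
    opponent move simply yields a new position with a freshly computed value,
    which can swing sharply from one move to the next."""
    return value_net(board)


def opponent_aware_value(board, move_history: Sequence, opponent_model) -> float:
    """The overall rational estimate discussed in the text: it would weigh the
    position together with what the opponent's moves reveal about the opponent's
    strength, so the estimate would change only slowly over a game. AlphaGo
    computes nothing of this kind."""
    raise NotImplementedError("illustrative placeholder only")
```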

Eliezer’s claim that AlphaGo is “superhuman with bugs” is simply a normal human attempt to rationalize evidence against his position. The truth is that, contrary to his expectations, AlphaGo is indeed in the same playing range as Lee Sedol, although apparently somewhat better. But not a lot better, and not superhuman. Eliezer in fact seems to have realized this after thinking about it for a while, and says:

It does seem that what we might call the Kasparov Window (the AI is mostly superhuman but has systematic flaws a human can learn and exploit) is wide enough that AlphaGo landed inside it as well. The timescale still looks compressed compared to computer chess, but not as much as I thought. I did update on the width of the Kasparov window and am now accordingly more nervous about similar phenomena in ‘weakly’ superhuman, non-self-improving AGIs trying to do large-scale things.

As I said here, people change their minds more often than they say that they do. They frequently describe the change as having more agreement with their previous position than it actually has. Yudkowsky is doing this here, by talking about AlphaGo as “mostly superhuman” but saying it “has systematic flaws.” This is just a roundabout way of admitting that AlphaGo is better than Lee Sedol, but not by much, the original possibility that he thought extremely unlikely.

The moral here is clear. Don’t assume that the facts will confirm your philosophical theories before this actually happens, because it may not happen at all.

 

Whatever Can Happen Sometimes Does

In St. Thomas’s third way, he says, “that which is possible not to be at some time is not.” Basically he is saying that if something is possible, it will be actual sooner or later. Is this really the case?

With some qualifications, it is indeed the case. If the probability of something during equal units of time remains fixed, or at least does not decrease too quickly, then in the limit of infinite time the probability that the thing will happen sooner or later converges to one. Thus, to give an arbitrary example, if there is a chance that human beings will produce a space elevator during the next 20 years, and the chance for each successive period of 20 years does not shrink too quickly, then it will happen sooner or later.
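The calculation behind this, as a sketch, assuming for simplicity that the chances in successive periods are independent:

```latex
% Sketch: when "sooner or later" becomes certainty.
% Let p_n be the probability that the event occurs during the n-th period,
% the periods being treated as independent. Then
\[
\Pr(\text{the event never happens}) \;=\; \prod_{n=1}^{\infty} (1 - p_n),
\]
% and this infinite product is zero precisely when the series \(\sum_n p_n\) diverges.
% A fixed chance per period, p_n = p > 0, gives
\[
\Pr(\text{the event happens eventually}) \;=\; 1 - \lim_{N \to \infty} (1 - p)^N \;=\; 1,
\]
% while chances that shrink quickly enough for \(\sum_n p_n\) to converge
% (for example p_n = 2^{-n}) leave a positive probability that it never happens at all.
```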

Of course the qualifications imply that there are still plenty of ways that this could fail to happen: for instance, if time does not go on forever, or if something happens (e.g. the kind of thing that might be called “the end of the world”) that reduces this chance to zero, or that causes it to start going down, and to continue going down forever, quickly enough that the total probability converges to something less than one.

It might be possible to argue against St. Thomas’s application of this principle in the third way, since even if we believe that it could have happened that nothing existed, we might reasonably suppose that once something exists, the probability of “nothing exists” being true in the future is immediately reduced to zero. Nonetheless, it is certainly true that the existence of contingent beings implies the existence of a necessary being.

Ray Kurzweil’s Myth of Progress

I have taken an optimistic view of progress on this blog, but it is possible to take any position to an extreme. Ray Kurzweil’s position on progress, as expressed in his 2005 book The Singularity is Near, is wild enough to seem almost a caricature.

He constantly asserts that nearly every kind of change is accelerating exponentially:

The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential pace. Exponential growth is deceptive. It starts out almost imperceptibly and then explodes with unexpected fury— unexpected, that is, if one does not take care to follow its trajectory. (pp. 7-8)

We are now in the early stages of this transition. The acceleration of paradigm shift (the rate at which we change fundamental technical approaches) as well as the exponential growth of the capacity of information technology are both beginning to reach the “knee of the curve,” which is the stage at which an exponential trend becomes noticeable. Shortly after this stage, the trend quickly becomes explosive. Before the middle of this century, the growth rates of our technology— which will be indistinguishable from ourselves— will be so steep as to appear essentially vertical. From a strictly mathematical perspective, the growth rates will still be finite but so extreme that the changes they bring about will appear to rupture the fabric of human history. That, at least, will be the perspective of unenhanced biological humanity. (p. 9)

In other words, Kurzweil believes that the ends of the ages have come upon us, although in a new, secular way. However, he cannot say that we have reached the “explosive” point quite yet, because if that were true, everyone would notice. In order to explain the fact that people haven’t noticed it yet, he has to say that we are just about to reach that point. It should be noted that this was written 10 years ago, so it is pretty reasonable to say that it has already been falsified, since we still haven’t noticed any explosion.

He uses the “exponential” idea to make definite claims about how much progress is made or will be made in various periods of time, as for example here:

Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the “intuitive linear” view of history rather than the “historical exponential” view. My models show that we are doubling the paradigm-shift rate every decade, as I will discuss in the next chapter. Thus the twentieth century was gradually speeding up to today’s rate of progress; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We’ll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won’t experience one hundred years of technological advance in the twenty-first century; we will witness on the order of twenty thousand years of progress (again, when measured by today’s rate of progress), or about one thousand times greater than what was achieved in the twentieth century. (p.11)

If this is not clear, he is claiming here that the amount of change in the world that was made between the year 1900 and the year 2000 was the same as the amount of change between the year 2000 and the year 2014. It is possible for him to make this claim because he was writing in the year 2005 and expected impossible changes in the next 10 years. But if someone in the year 1900 were to use a time machine to travel to the year 2000 and then to the year 2014, there is simply no way they would claim that the two periods contained an equal amount of change. I’m not sure how one would mathematically formalize this, but Kurzweil’s claim here would be a lot like saying that the difference between blue and pink is about the same as the difference between two shades of pink.
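For what it is worth, the figures in the quotation are at least roughly internally consistent with the doubling assumption. The following is a rough reconstruction of the arithmetic (my own, not Kurzweil’s published model), treating the rate of progress as a continuous exponential that doubles every decade and equals one “year-2000 year” of progress per calendar year in 2000:

```python
# Rough reconstruction (an assumption, not Kurzweil's published model): the rate of
# progress doubles every decade, normalized to 1 "year-2000 year" per calendar year
# in 2000. Progress between years a and b is then the integral of 2**((t - 2000)/10).

from math import log


def progress(a: float, b: float) -> float:
    """Equivalent years of progress between years a and b, measured at the 2000 rate."""
    def rate(t: float) -> float:
        return 2 ** ((t - 2000) / 10)
    return (10 / log(2)) * (rate(b) - rate(a))


if __name__ == "__main__":
    print(round(progress(1900, 2000)))   # ~14     (the whole twentieth century)
    print(round(progress(2000, 2014)))   # ~24     ("twenty years of progress ... by 2014")
    print(round(progress(2014, 2021)))   # ~24     ("then do the same again in only seven years")
    print(round(progress(2000, 2100)))   # ~14,759 ("on the order of twenty thousand years")
```

So the quarrel is not with Kurzweil’s arithmetic but with the doubling assumption itself, which is exactly what the time machine comparison calls into question.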

He is quite definite about what he expects to happen by various dates:

The current disadvantages of Web-based commerce (for example, limitations in the ability to directly interact with products and the frequent frustrations of interacting with inflexible menus and forms instead of human personnel) will gradually dissolve as the trends move robustly in favor of the electronic world. By the end of this decade, computers will disappear as distinct physical objects, with displays built in our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality. Thus, “going to a Web site” will mean entering a virtual-reality environment— at least for the visual and auditory senses— where we can directly interact with products and people, both real and simulated. (pp. 104-105)

“By the end of this decade” refers to the year 2010.

The full-immersion visual-auditory virtual-reality environments, which will be ubiquitous during the second decade of this century, will hasten the trend toward people living and working wherever they wish. Once we have full-immersion virtual-reality environments incorporating all of the senses, which will be feasible by the late 2020s, there will be no reason to utilize real offices. Real estate will become virtual. (p. 105)

This is not yet completely disproved, but there is not much more time left for the “ubiquitous” virtual reality environments.

Returning to the limits of computation according to physics, the estimates above were expressed in terms of laptop-size computers because that is a familiar form factor today. By the second decade of this century, however, most computing will not be organized in such rectangular devices but will be highly distributed throughout the environment. Computing will be everywhere: in the walls, in our furniture, in our clothing, and in our bodies and brains. (p. 136)

No comment on this prediction is necessary. Along the same lines:

Early in the second decade of this century, the Web will provide full immersion visual-auditory virtual reality with images written directly to our retinas from our eyeglasses and lenses and very high-bandwidth wireless Internet access woven in our clothing. These capabilities will not be restricted just to the privileged. Just like cell phones, by the time they work well they will be everywhere. (p. 472)

Apart from particular predictions, there are obvious general problems with his claims about exponentially accelerating change. A day lasts 24 hours, and that isn’t going to change. It takes a human being 18 years (or more, depending on how you define it) to grow to adulthood, and that isn’t going to change. Ray apparently believes that such things make no difference:

Each example of information technology starts out with early-adoption versions that do not work very well and that are unaffordable except by the elite. Subsequently the technology works a bit better and becomes merely expensive. Then it works quite well and becomes inexpensive. Finally it works extremely well and is almost free. The cell phone, for example, is somewhere between these last two stages. Consider that a decade ago if a character in a movie took out a portable telephone, this was an indication that this person must be very wealthy, powerful, or both. Yet there are societies around the world in which the majority of the population were farming with their hands two decades ago and now have thriving information-based economies with widespread use of cell phones (for example, Asian societies, including rural areas of China). This lag from very expensive early adopters to very inexpensive, ubiquitous adoption now takes about a decade. But in keeping with the doubling of the paradigm-shift rate each decade, this lag will be only five years a decade from now. In twenty years, the lag will be only two to three years. (p. 469)

In the first place, his description of what happened with cell phones is not historically accurate. The use of cell phones in the USA in 1995 was indeed rare, but it was already quite a bit more common in Europe, and certainly did not indicate that someone must be wealthy or powerful (e.g. in 1996 one of my many European acquaintances who possessed cell phones was a teen-age girl from a broken family). In general he shortens various actual time frames in this way in order to create the appearance of greater acceleration; the actual process of cell phone adoption would be better assigned a 20 year period. It is also a fallacy to confuse movement which people see as being within a single technology, e.g. from cell phones in general to smart phones, with the adoption of a new technology. And it does not even matter here whether there is really a new technology or not; what matters is whether people see it as adopting something new, because people are much more unwilling to adopt a new technology than to upgrade a presently used one. As one example, Ray was right to predict the coming of virtual reality technologies, even though his time frame was wrong, and these are currently being developed. But according to him, their widespread adoption should take less than five years, and it is already obvious that this will turn out to be entirely false.

Ray’s book is hundreds of pages long, and one could easily write an entire book in refutation. In addition to containing mistaken specific expectations, many of his proposals raise religious, philosophical, and moral issues which I have not discussed. However, it should be noted that for the most part his predictions are probably physically possible, although exaggerated, and may actually happen sooner or later, with the exception of some of the more extreme claims. But I suspect that there is more going on than simply exaggerating and predicting that various technologies will arrive sooner than they actually will.

Rather, it seems that Ray’s motives are quasi-religious; as I stated above, he believes that the end of the ages is upon us. He discusses this comparison himself:

George Gilder has described my scientific and philosophical views as “a substitute vision for those who have lost faith in the traditional object of religious belief.” Gilder’s statement is understandable, as there are at least apparent similarities between anticipation of the Singularity and anticipation of the transformations articulated by traditional religions. But I did not come to my perspective as a result of searching for an alternative to customary faith. The origin of my quest to understand technology trends was practical: an attempt to time my inventions and to make optimal tactical decisions in launching technology enterprises. Over time this modeling of technology took on a life of its own and led me to formulate a theory of technology evolution. It was not a huge leap from there to reflect on the impact of these crucial changes on social and cultural institutions and on my own life. So, while being a Singularitarian is not a matter of faith but one of understanding, pondering the scientific trends I’ve discussed in this book inescapably engenders new perspectives on the issues that traditional religions have attempted to address: the nature of mortality and immortality, the purpose of our lives, and intelligence in the universe. (p. 370)

But the fact that someone recognizes the possibility of undue influences upon his beliefs and claims to be free of them does not mean that he is actually free of them. Ray Kurzweil is currently 67 years old. He will likely die within 25 years. It is perfectly clear that one of his main preoccupations is how to prevent this from happening:

Biotechnology will extend biology and correct its obvious flaws. The overlapping revolution of nanotechnology will enable us to expand beyond the severe limitations of biology. As Terry Grossman and I articulated in Fantastic Voyage: Live Long Enough to Live Forever, we are rapidly gaining the knowledge and the tools to indefinitely maintain and extend the “house” each of us calls his body and brain. Unfortunately the vast majority of our baby-boomer peers are unaware of the fact that they do not have to suffer and die in the “normal” course of life, as prior generations have done— if they take aggressive action, action that goes beyond the usual notion of a basically healthy lifestyle. (p. 323)

And this is not merely some vague hope, but a belief that he acts upon:

I have been very aggressive about reprogramming my biochemistry. I take 250 supplements (pills) a day and receive a half-dozen intravenous therapies each week (basically nutritional supplements delivered directly into my bloodstream, thereby bypassing my GI tract). As a result, the metabolic reactions in my body are completely different than they would otherwise be. Approaching this as an engineer, I measure dozens of levels of nutrients (such as vitamins, minerals, and fats), hormones, and metabolic by-products in my blood and other body samples (such as hair and saliva). Overall, my levels are where I want them to be, although I continually fine-tune my program based on the research that I conduct with Grossman. Although my program may seem extreme, it is actually conservative— and optimal (based on my current knowledge). Grossman and I have extensively researched each of the several hundred therapies that I use for safety and efficacy. I stay away from ideas that are unproven or appear to be risky (the use of human-growth hormone, for example). (pp. 211-212)

In other words, it is very likely that Kurzweil’s predictions are as ridiculous as they are because he insists on a time frame to his “Singularity” that will allow him personally to avoid death. It won’t help him, for example, if all his predictions come to pass, but everything happens after he is dead.

But it won’t work, Ray. You are going to die.

Aristotle and Bacon on the Sciences

Aristotle, as stated in the last post, held that the highest knowledge is for its own sake, not for the sake of other things. Francis Bacon, on the other hand, is famous for saying that knowledge is power (Novum Organum, Bk. 1, Aphorism 3):

Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.

Although Aristotle is correct regarding the purpose of the sciences, this difference between the two of them may partly explain their difference of opinion regarding the state of completeness of the sciences. Aristotle basically believed the sciences complete, as I said in the last post, but part of the reason for this may be that he did not care much about potential improvements in the mechanical arts. Bacon, on the other hand, was concerned precisely with such improvements. In particular, he wishes to make gold, as he suggests here in Bk. 2, Aphorism 5:

The rule or axiom for the transformation of bodies is of two kinds. The first regards a body as a troop or collection of simple natures. In gold, for example, the following properties meet. It is yellow in color, heavy up to a certain weight, malleable or ductile to a certain degree of extension; it is not volatile and loses none of its substance by the action of fire; it turns into a liquid with a certain degree of fluidity; it is separated and dissolved by particular means; and so on for the other natures which meet in gold. This kind of axiom, therefore, deduces the thing from the forms of simple natures. For he who knows the forms of yellow, weight, ductility, fixity, fluidity, solution, and so on, and the methods for superinducing them and their gradations and modes, will make it his care to have them joined together in some body, whence may follow the transformation of that body into gold.

This desire for gold in particular is probably a desire for money, which seems to be a kind of universal power, although one might question whether money would be necessary for someone who can change anything into anything else whenever he pleases. In any case, Bacon is engaging in a kind of wishful thinking here. He recognizes, in fact, that human beings have a limited ability to transform nature (Bk. 1, Aphorism 4):

Toward the effecting of works, all that man can do is to put together or put asunder natural bodies. The rest is done by nature working within.

From this it should follow that it may be impossible to effect certain transformations, if they will not follow upon such putting together or asunder of natural bodies. But instead of waiting to learn from experience, Bacon simply assumes (and this is the wishful thinking) that any transformation one can conceive of, such as the transformation of lead into gold, will turn out to be possible. Since the sciences of his day could do nothing of the sort, he concluded that the sciences still had a great deal of room for improvement.

There are good reasons to think that Bacon’s assumption was false. For example, the no-cloning theorem says that perfectly copying an arbitrary unknown quantum state is impossible. There may be no particular reason why you would need to do this, but that is not the point. Rather, Bacon claims that any form can be induced upon anything, and this seems to be false.
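The standard argument, in outline (a sketch of the usual textbook proof, nothing more): any physical copying process would have to be linear, and linearity already rules out copying superpositions.

```latex
% Sketch of the usual no-cloning argument.
% Suppose a single linear (unitary) operation U copied every state:
%   U(|psi> (x) |0>) = |psi> (x) |psi>  for every |psi>.
% Applying this to |+> = (|0> + |1>)/sqrt(2), linearity of U gives
\[
U\big(\lvert{+}\rangle \otimes \lvert 0 \rangle\big)
  = \tfrac{1}{\sqrt{2}}\big(\lvert 0\rangle\lvert 0\rangle + \lvert 1\rangle\lvert 1\rangle\big),
\]
% but a genuine copy would have to be
\[
\lvert{+}\rangle \otimes \lvert{+}\rangle
  = \tfrac{1}{2}\big(\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle\big),
\]
% and these are different states. So no single process copies an arbitrary unknown
% quantum state, no matter what "putting together or asunder" of bodies it involves.
```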

Nonetheless, his conclusion that the sciences of his day were very imperfect was indeed correct. Thus, although his idea of the purpose of the sciences was inferior to Aristotle’s position, his idea of the state of their progress was superior.