In Plato’s Meno, Socrates makes the somewhat odd claim that people’s ability to learn things without being directly told them proves that they must somehow have learned or known them in advance. While we can reasonably assume this is wrong in a literal sense, there is some likeness of the truth here.
The whole of a human life is, generally speaking, a continuous learning process without any sudden jumps. We think of a baby’s learning as different from the learning of a child in school, and the learning of the child as rather different from the learning of an adult. But if you look at the process itself, there may be sudden jumps in a person’s situation, such as graduating from school or getting married, but there are no sudden jumps from knowing nothing about a topic or an object to suddenly knowing all about it. The learning itself happens gradually. It is the same with the manner in which it takes place: adults do indeed learn in a different manner from that in which children or infants learn. But if you ask how that manner got to be different, it certainly did so gradually, not suddenly.
But in addition to all this, there is a kind of “knowledge” that is not learned at all during one’s life, but is possessed from the beginning. From the beginning people have the ability to interact with the world in such a way that they will survive and go on to learn things. Thus from the beginning they must “know” how to do this. Now one might object that infants have no such knowledge, and that the only reason they survive is that their parents or others keep them alive. But the objection is mistaken: infants know to cry out when they are hungry or in pain, and this is part of what keeps them alive. Similarly, an infant knows to drink the milk from its mother rather than refusing it, and this is part of what keeps it alive. Likewise in regard to learning: if an infant did not know the importance of paying close attention to speech sounds, it would never learn a language.
When was this “knowledge” learned? Not by a separated soul before birth, as Socrates suggests, but through the historical process of natural selection.
Selection and Artificial Intelligence
This has significant bearing on our final points in the last post. Is the learning found in AI in its current forms more like the first kind of learning above, or like the kind found in the process of natural selection?
There may be a little of both, but the vast majority of the learning in such systems is of the second kind, not the first. For example, AlphaGo is trained by self-play, in which moves and methods of play that tend to lose are eliminated, much as manners of life that do not promote survival are eliminated in natural selection. Likewise a predictive model like GPT-3 is trained, through a vast number of examples, to avoid predictions that turn out to be less accurate and to make predictions that tend to be more accurate.
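The selection-style learning described here can be illustrated with a toy loop: candidate strategies are scored, the losers are eliminated, and the survivors reproduce with small variations. To be clear, this is only a schematic evolutionary sketch of the general idea; it is not the actual AlphaGo or GPT-3 training procedure, and every name and number in it is invented for illustration.

```python
import random

# Toy "learning by selection": candidates are just numbers, and
# "fitness" is closeness to a hidden target standing in for the
# environment. (Illustrative only; real systems like AlphaGo use
# self-play reinforcement learning with neural networks.)

TARGET = 0.73  # hypothetical stand-in for "what works in the world"

def fitness(candidate: float) -> float:
    # Higher is better: how close the candidate's behavior is to working.
    return -abs(candidate - TARGET)

def evolve(generations: int = 200, pop_size: int = 30, seed: int = 0) -> float:
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Score everyone, then eliminate the losing half -- the analogue
        # of strategies (or manners of life) that die out.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors "reproduce" with small random variation.
        children = [s + rng.gauss(0.0, 0.05) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Nothing in the loop ever "understands" the target; repeated elimination alone drives the population toward it, which is the point of the analogy.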
Now (whether or not this is done in individual cases) you might take a model of this kind and fine-tune it based on incoming data, perhaps even in real time, which is a bit more like the first kind of learning. But in our actual situation, the majority of what is known by our AI systems comes from the second kind of learning.
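The first kind of learning, by contrast, happens within a single “lifetime”: the system updates itself as each new observation arrives, rather than being shaped once by mass elimination. A minimal sketch of that idea is a simple online update rule; this is a generic online-gradient toy, not any particular system’s fine-tuning method, and the data stream here is invented.

```python
# Toy "in-lifetime" learning: a single estimate updated on each
# incoming observation, rather than trained once on a fixed dataset.
# (A generic online-update sketch; the stream values are hypothetical.)

def online_update(estimate: float, observation: float, lr: float = 0.1) -> float:
    # Move the estimate a small step toward each new observation.
    return estimate + lr * (observation - estimate)

# Incoming data arriving "in real time"; the model adjusts as it goes.
stream = [1.0, 1.2, 0.9, 1.1, 1.0, 1.05, 0.95, 1.0]
estimate = 0.0
for obs in stream:
    estimate = online_update(estimate, obs)
```

The contrast with the selection sketch is that here one and the same learner persists and changes, the way an individual person learns, instead of failed candidates being discarded wholesale.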
This state of affairs should not be surprising, because the first kind of learning described above is impossible without being preceded by the second. The truth in Socrates’ claim is that if a system does not already “know” how to learn, of course it will not learn anything.
Intelligence and Universality
Elsewhere I have mentioned the argument, often made in great annoyance, that people who take some new accomplishment in AI or machine learning and proclaim that it is “not real intelligence,” or that the algorithm is “still fundamentally stupid,” and other things of that kind, are “moving the goalposts,” especially since in many such cases there really were people who said that something that could do such a thing would be intelligent.
As I said in the linked post, however, there is no problem of moving goalposts unless you originally had them in the wrong place. And attaching intelligence to any particular accomplishment, such as “playing chess well” or even “producing a sensible-sounding text,” or anything else with that sort of particularity, is misplacing the goalposts. As we might remember, what excited Francis Bacon was the thought that there were no clear limits at all on what science (namely the working out of intelligence) might accomplish. In fact he seems to have believed that there were no limits at all, which is false. Nonetheless, he was correct that those limits are extremely vague, and that much that many assumed to be impossible would turn out to be possible. In other words, human intelligence does not have very meaningful limits on what it can accomplish, and artificial intelligence will be real intelligence (in the same sense that artificial diamonds can be real diamonds) when artificial intelligence has no meaningful limits on what it can accomplish.
I have no time for playing games with objections like, “but humans can’t multiply two 1000 digit numbers in one second, and no amount of thought will give them that ability.” If you have questions of this kind, please answer them for yourself, and if you can’t, sit still and think about it until you can. I have full confidence in your ability to find the answers, given sufficient thought.
What is needed for “real intelligence,” then, is universality. In a sense everyone knew all along that this was the right place for the goalposts. Even if someone said “if a machine can play chess, it will be intelligent,” they almost certainly meant that they expected a machine that could play chess to have no clear limits on what it could accomplish. If you could have told them for a fact that the future would be different, that a machine would be able to play chess but that that particular machine would never be able to do anything else, they would have conceded that the machine would not be intelligent.
Training and Universality
Current AI systems are not universal, and clearly have no ability whatsoever to become universal without first undergoing deep changes, changes that would have to be initiated by human beings. What is missing?
The problem is the training data. The process of evolution produced the general ability to learn by using the world itself as the training data. In contrast, our AI systems take a very small subset of the world (like a large set of Go games or a large set of internet text), and train a learning system on that subset. Why take a subset? Because the world is too large to fit into a computer, especially if that computer is a small part of the world.
This suggests that going from the current situation to “artificial but real” intelligence is not merely a question of making things better and better little by little. There is a more fundamental problem that would have to be overcome, and it won’t be overcome simply by larger training sets, by faster computing, and things of this kind. This does not mean that the problem is impossible, but it may turn out to be much more difficult than people expected. For example, if there is no direct solution, people might try to create Robin Hanson’s “ems,” which would more or less copy the learning achieved by natural selection. Or even if that is not done directly, a better understanding of what it means to “know how to learn” might lead to a solution, although probably one that would not depend on training a model on massive amounts of data.
What happens if there is no solution, or no solution is found? At times people will object to the possibility of such a situation along these lines: “this situation is incoherent, since obviously people will be able to keep making better and better machine learning systems, so sooner or later they will be just as good as human intelligence.” But in fact the situation is not incoherent; if it happened, various types of AI system would approach various asymptotes, and this is entirely coherent. We can already see this in the case of GPT-3, where, as I noted, there is an absolute bound on its future performance. In general such bounds in their realistic form are more restrictive than their in-principle form; I do not actually expect some successor to GPT-3 to write sensible full-length books. Note however that even if this happened (as long as the content itself was not fundamentally better than what humans have done) I would not be “moving the goalposts”; I do not expect that to happen, but its happening would not imply any fundamental difference, since it would still be within the “absolute” bounds that we have discussed. In contrast, if a successor to GPT-3 published a cure for cancer, this would prove that I had made some mistake on the level of principle.
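The point that “always improving” and “bounded forever” are perfectly consistent can be made concrete with a toy curve: performance that rises monotonically with scale yet never crosses a fixed ceiling. The numbers here are entirely hypothetical; the sketch only shows that an asymptote involves no incoherence.

```python
# Toy asymptote: performance keeps improving with scale but never
# reaches a fixed bound. (Hypothetical numbers; this only illustrates
# that endless improvement under a permanent ceiling is coherent.)

BOUND = 0.9  # hypothetical hard ceiling on performance

def performance(scale: float) -> float:
    # Strictly increasing in scale, but always below BOUND.
    return BOUND * scale / (scale + 1.0)

# Evaluate at scales 1, 10, 100, ... up to 100,000.
curve = [performance(10.0 ** k) for k in range(6)]
always_improving = all(b > a for a, b in zip(curve, curve[1:]))
always_bounded = all(p < BOUND for p in curve)
```

Each order of magnitude of scale yields a genuine improvement, and yet no amount of scale ever crosses the bound, which is exactly the situation the objection claims is incoherent.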