What You Learned Before You Were Born

In Plato’s Meno, Socrates makes the somewhat odd claim that the ability of people to learn things without being directly told them proves that somehow they must have learned them or known them in advance. While we can reasonably assume this is wrong in a literal sense, there is some likeness of the truth here.

The whole of a human life is, generally speaking, a continuous learning process without any sudden jumps. We think of a baby’s learning as different from the learning of a child in school, and the learning of the child as rather different from the learning of an adult. But if you look at the process itself, there may be sudden jumps in a person’s situation, such as when they graduate from school or get married, but there are no sudden jumps from knowing nothing about a topic or an object to suddenly knowing all about it. The learning itself happens gradually. It is the same with the manner in which learning takes place: adults do indeed learn in a different manner from children or infants, but if you ask how that manner came to be different, it certainly did so gradually, not suddenly.

But in addition to all this, there is a kind of “knowledge” that is not learned at all during one’s life, but is possessed from the beginning. From the beginning people have the ability to interact with the world in such a way that they will survive and go on to learn things. Thus from the beginning they must “know” how to do this. Now one might object that infants have no such knowledge, and that the only reason they survive is that their parents or others keep them alive. But the objection is mistaken: infants know to cry out when they are hungry or in pain, and this is part of what keeps them alive. Similarly, an infant knows to drink the milk from its mother rather than refusing it, and this is part of what keeps it alive. Likewise in regard to learning: if an infant did not know the importance of paying close attention to speech sounds, it would never learn a language.

When was this “knowledge” learned? Not in the form of a separated soul, but through the historical process of natural selection.

Selection and Artificial Intelligence

This has significant bearing on our final points in the last post. Is the learning found in AI in its current forms more like the first kind of learning above, or like the kind found in the process of natural selection?

There may be a little of both, but the vast majority of the learning in such systems is very much the second kind, not the first. For example, AlphaGo is trained by self-play, where moves and methods of play that tend to lose are eliminated, in much the way that, in the process of natural selection, manners of life that do not promote survival are eliminated. Likewise a predictive model like GPT-3 is trained, through a vast number of examples, to avoid predictions that turn out to be less accurate and to make predictions that tend to be more accurate.
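To make the analogy concrete, here is a minimal sketch in Python of the second kind of learning: a toy next-word predictor whose parameters are adjusted so that predictions which turn out to be less accurate are progressively suppressed, much as unsuccessful manners of life are eliminated by selection. The corpus, vocabulary, and learning rate are all invented for illustration; this is not how AlphaGo or GPT-3 is actually implemented, only the same selective shape in miniature.

```python
import math

# An invented toy "world" of text, standing in for a vast training set.
corpus = "the cat sat on the mat the cat sat on the hat".split()
vocab = sorted(set(corpus))
V = len(vocab)
index = {w: i for i, w in enumerate(vocab)}

# One row of logits per word: the model's (initially indifferent) guesses
# about which word comes next.
logits = {w: [0.0] * V for w in vocab}

def softmax(row):
    m = max(row)
    exps = [math.exp(z - m) for z in row]
    s = sum(exps)
    return [e / s for e in exps]

learning_rate = 0.5
for step in range(200):
    loss = 0.0
    for current, nxt in zip(corpus, corpus[1:]):
        probs = softmax(logits[current])
        j = index[nxt]
        loss += -math.log(probs[j])          # penalty for an inaccurate prediction
        for k in range(V):                   # nudge probability toward what actually occurred
            grad = probs[k] - (1.0 if k == j else 0.0)
            logits[current][k] -= learning_rate * grad

# After training, the regularities of the tiny "world" are fixed in the parameters.
for w in vocab:
    best = max(range(V), key=lambda k: logits[w][k])
    print(w, "->", vocab[best])
```

The important point for what follows is that all of this “knowledge” is fixed in the parameters before the model is ever used, just as the knowledge produced by natural selection is fixed before the individual organism does any learning of its own.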

Now (whether or not this is done in individual cases) you might take a model of this kind and fine-tune it on incoming data, perhaps even in real time, which is a bit more like the first kind of learning. But in our actual situation, the majority of what is known by our AI systems is based on the second kind of learning.
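Something like that fine-tuning can be sketched by continuing the example above (reusing its softmax, logits, and index): the same kind of update, but applied to examples that arrive while the system is in use rather than to a fixed training set assembled in advance. The incoming stream here is invented, and real systems may or may not update themselves in this way.

```python
# A hypothetical online fine-tuning step, continuing the sketch above.
def fine_tune(stream, learning_rate=0.1):
    for current, nxt in stream:
        if current not in logits or nxt not in index:
            continue                         # a real system would also have to grow its vocabulary
        probs = softmax(logits[current])
        j = index[nxt]
        for k in range(len(probs)):
            grad = probs[k] - (1.0 if k == j else 0.0)
            logits[current][k] -= learning_rate * grad

# Invented "incoming data" observed while the model is in use.
fine_tune([("cat", "sat"), ("the", "mat"), ("on", "the")])
```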

This state of affairs should not be surprising, because the first kind of learning described above is impossible without being preceded by the second. The truth in Socrates’ claim is that if a system does not already “know” how to learn, of course it will not learn anything.

Intelligence and Universality

Elsewhere I have mentioned the argument, often made in great annoyance, that people who take some new accomplishment in AI or machine learning and proclaim that it is “not real intelligence” or that the algorithm is “still fundamentally stupid”, and other things of that kind, are “moving the goalposts,” especially since in many such cases, there really were people who said that something that could do such a thing would be intelligent.

As I said in the linked post, however, there is no problem of moving goalposts unless you originally had them in the wrong place. And attaching intelligence to any particular accomplishment, such as “playing chess well” or even “producing a sensible sounding text,” or anything else with that sort of particularity, is misplacing the goalposts. As we might remember, what excited Francis Bacon was the thought that there were no clear limits, at all, on what science (namely the working out of intelligence) might accomplish. In fact he seems to have believed that there were no limits at all, which is false. Nonetheless, he was correct that those limits are extremely vague, and that much that many assumed to be impossible would turn out to be possible. In other words, human intelligence does not have very meaningful limits on what it can accomplish, and artificial intelligence will be real intelligence (in the same sense that artificial diamonds can be real diamonds) when artificial intelligence has no meaningful limits on what it can accomplish.

I have no time for playing games with objections like, “but humans can’t multiply two 1000 digit numbers in one second, and no amount of thought will give them that ability.” If you have questions of this kind, please answer them for yourself, and if you can’t, sit still and think about it until you can. I have full confidence in your ability to find the answers, given sufficient thought.

What is needed for “real intelligence,” then, is universality. In a sense everyone knew all along that this was the right place for the goalposts. Even if someone said “if a machine can play chess, it will be intelligent,” they almost certainly meant that they expected a machine that could play chess to have no clear limits on what it could accomplish. If you could have told them for a fact that the future would be different: that a machine would be able to play chess, but that that particular machine would never be able to do anything else, they would have conceded that the machine would not be intelligent.

Training and Universality

Current AI systems are not universal, and clearly have no ability whatsoever to become universal, without first undergoing deep changes in those systems, changes that would have to be initiated by human beings. What is missing?

The problem is the training data. The process of evolution produced the general ability to learn by using the world itself as the training data. In contrast, our AI systems take a very small subset of the world (like a large set of Go games or a large set of internet text), and train a learning system on that subset. Why take a subset? Because the world is too large to fit into a computer, especially if that computer is a small part of the world.

This suggests that going from the current situation to “artificial but real” intelligence is not merely a question of making things better and better little by little. There is a more fundamental problem that would have to be overcome, and it won’t be overcome simply by larger training sets, faster computing, and other things of this kind. This does not mean that the problem is impossible, but it may turn out to be much more difficult than people expected. For example, if there is no direct solution, people might try to create Robin Hanson’s “ems”, where one would more or less copy the learning achieved by natural selection. Or even if that is not done directly, a better understanding of what it means to “know how to learn” might lead to a solution, although probably one that would not depend on training a model on massive amounts of data.

What happens if there is no solution, or no solution is found? At times people will object to the possibility of such a situation along these lines: “this situation is incoherent, since obviously people will be able to keep making better and better machine learning systems, so sooner or later they will be just as good as human intelligence.” But in fact the situation is not incoherent; if it happened, various types of AI system would approach various asymptotes, and this is entirely coherent. We can already see this in the case of GPT-3, where, as I noted, there is an absolute bound on its future performance. In general such bounds in their realistic form are more restrictive than their in-principle form; I do not actually expect some successor to GPT-3 to write sensible full-length books. Note however that even if this happened (as long as the content itself was not fundamentally better than what humans have done) I would not be “moving the goalposts”; I do not expect that to happen, but its happening would not imply any fundamental difference, since this is still within the “absolute” bounds that we have discussed. In contrast, if a successor to GPT-3 published a cure for cancer, this would prove that I had made some mistake on the level of principle.

Pseudoscience

James Chastek reflects on science, pseudoscience, and religion:

The demarcation problem is a name for our failure to identify criteria that can distinguish science from pseudo-science, in spite of there being two such things. In the absence of rational criteria, we get clarity on the difference from various institutional-cultural institutions, like the consensus produced by university gatekeepers though peer review (which generates, by definition, peer pressure), grants, prestige, and other stick-and-carrot means.  Like most institutions we expect it to do reasonably well (or at least better than an every-man-for-himself chaos) though it will come at a cost of group-think, elitism, the occasional witch hunt etc..

The demarcation problem generalizes to our failure to identify any meta-criterion for what counts as legitimate discourse or belief. Kant’s famous attempt to articulate meta-criteria for thought, which concluded to limiting it to an intuition of Euclidean space distinct from linear time, turned out to be no limitation at all, and Davidson pointed out that the very idea of a conceptual scheme – a finite scope or limit to human thought that could be determined in advance – requires us to posit a language that is in-principle untranslatable, which is to speak of something that has no meaning. Heraclitus was right – you can’t come to the borders of thought, even if you travel down every road. We simply can’t articulate a domain of acceptable belief in general from which we can identify the auslanders.

This is true of religion as well. By our own resources we can know there are pseudo ones and truer ones, but the degree of clarity we want in this area is going to have to be borrowed from an intellect other than our own. The various religious institutions are attempts to make up for this deficiency in reason and provide us with clearer and more precise articulations of true religion in exactly the same way that we get it in the sciences. That a westerner tends to accept Christianity arises from the same sort of process that makes him tend to accept scientific consensus. He walks within the ambit of various institutions that are designed to help him toward truth, and they almost certainly succeed at this more than he would succeed if left solely to his own lights. Anyone who thinks he can easily identify true science while no one can identify true religion is right in a sense, but he doesn’t recognize how heavily his belief is resting on institutional power.

Like Sean Collins as quoted in this earlier post, Chastek seems to be unreasonably emphasizing the similarity between science and religion where in fact there is a greater dissimilarity. As discussed in the last post, a field is only considered scientific once it has completely dominated the area of thought among persistent students of that field. It is not exactly that “no one disagrees,” so much as that it becomes too complicated for anyone except those students. But those students, to an extremely high degree, have a unified view of the field. An actual equivalent in the area of religion would be if virtually all theologians accepted the same religion. Even here, it might be a bit strange to find whole countries that accepted another religion, the way it would be strange to find a whole country believing in a flat earth. But perhaps not so strange; occasionally you do get a poll indicating a fairly large percentage of some nation believing some claim entirely opposed to the paradigm of some field of science. Nonetheless, if virtually all theologians accepted the same religion, the comparison between science and religion would be pretty apt. Since that is not the case in the slightest, religion looks more like a field where knowledge remains “undeveloped,” in the way I suggested in reference to some areas of philosophy.

Chastek is right to note that one cannot set down some absolute list of rules setting apart reasonable thought from unreasonable thought, or science from pseudoscience. Nonetheless, reflecting on the comments to the previous post, it occurs to me that we have a pretty good idea of what pseudoscience is. The term itself, of course, means something like “fake science,” so the idea would be something purporting to be scientific which is not scientific.

A recurring element in Kuhn’s book, as in the title itself, is the idea of change in scientific paradigms. Kuhn remarks:

Probably the single most prevalent claim advanced by the proponents of a new paradigm is that they can solve the problems that have led the old one to a crisis. When it can legitimately be made, this claim is often the most effective one possible. In the area for which it is advanced the paradigm is known to be in trouble. That trouble has repeatedly been explored, and attempts to remove it have again and again proved vain. “Crucial experiments”—those able to discriminate particularly sharply between the two paradigms—have been recognized and attested before the new paradigm was even invented. Copernicus thus claimed that he had solved the long-vexing problem of the length of the calendar year, Newton that he had reconciled terrestrial and celestial mechanics, Lavoisier that he had solved the problems of gas-identity and of weight relations, and Einstein that he had made electrodynamics compatible with a revised science of motion.

Some pages later, considering why paradigm change is considered progress, he continues:

Because the unit of scientific achievement is the solved problem and because the group knows well which problems have already been solved, few scientists will easily be persuaded to adopt a viewpoint that again opens to question many problems that had previously been solved. Nature itself must first undermine professional security by making prior achievements seem problematic. Furthermore, even when that has occurred and a new candidate for paradigm has been evoked, scientists will be reluctant to embrace it unless convinced that two all-important conditions are being met. First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem-solving ability that has accrued to science through its predecessors. Novelty for its own sake is not a desideratum in the sciences as it is in so many other creative fields. As a result, though new paradigms seldom or never possess all the capabilities of their predecessors, they usually preserve a great deal of the most concrete parts of past achievement and they always permit additional concrete problem-solutions besides.

It is not automatically unscientific to suggest that the current paradigm is somehow mistaken and needs to be replaced: in fact the whole idea of paradigm change depends on scientists doing this on a fairly frequent basis. But Kuhn suggests that this mainly happens when there are well known problems with the current paradigm. Additionally, when a new one is proposed, it should be in order to solve new problems. This suggests one particular form of pseudoscientific behavior: to propose new paradigms when there are no special problems with the current ones. Or at any rate, to propose that they be taken just as seriously as the current ones; there is not necessarily anything unreasonable about saying, “Although we currently view things according to paradigm A, someday we might need to adopt something somewhat like paradigm B,” even if one is not yet aware of any great problems with paradigm A.

A particularly anti-scientific form of this would be to propose that the current paradigm be abandoned in favor of an earlier one. It is easy to see why scientists would be especially opposed to such a proposal: since the earlier one was abandoned in order to solve new problems and to resolve more and more serious discrepancies between the paradigm and experience, going back to an earlier paradigm would suddenly create all sorts of new problems.

On the other hand, why do we have the “science” part of “pseudoscience”? This is related to Chastek’s point about institutions as a force creating conformity of opinion. The pseudoscientist is a sort of predator in relation to these institutions. While the goal of science is truth, at least to a first approximation, the pseudoscientist has something different in mind: this is clear from the fact that he does not care whether his theory solves any new problems, and it is even more clear in the case of a retrogressive proposal. But the pseudoscientist will attempt to use the institutions of science to advance his cause. This will tend in reality to be highly unsuccessful in relation to ordinary scientists, for the same reason that Kuhn remarks that scientists who refuse to adopt a new paradigm after its general acceptance “are simply read out of the profession, which thereafter ignores their work.” In a similar way, if someone proposes an unnecessary paradigm change, scientists will simply ignore the proposal. But if the pseudoscientist manages to get beyond certain barriers, e.g. peer review, it may be more difficult for ordinary people to distinguish between ordinary science and pseudoscience, since they are not in fact using their own understanding of the matter, but simply possess a general trust that the scientists know the general truth about the field.

One of the most common uses of the term “pseudoscience” is in relation to young earth creationism, and rightly so. This is in fact a case of attempting to return to an earlier paradigm which was abandoned precisely because of the kinds of tensions that typically drive paradigm change. And since the creationists want to use the institutions of science to advance their cause, one of their favorite methods is to attempt to get things published in peer-reviewed journals. Very occasionally this is successful, but obviously it has very little effect on the field itself: just as with late adopters or people who never change their minds, the rest of the field, as Kuhn says, “ignores their work.” But to the degree that they manage to lead ordinary people to adopt their views, this is to act as a sort of predator on the institutions of science: to take advantage of those institutions for the sake of falsehood rather than truth.

That’s kind of blunt, someone will say. If paradigm change is frequently necessary, surely it could happen at least once that a former paradigm was better than a later one, such that it would be necessary to return to it, and for the sake of truth. People are not infallible, so surely this is possible.

Indeed, it is possible. But very unlikely, for all the reasons that Kuhn mentions. And in order for such a proposal to be truth oriented, it would have to be motivated by the perception of problems with the current paradigm, even if they were problems that had not been foreseen when the original paradigm was abandoned. In practice such proposals are normally not motivated by problems at all,  and thus there is very little orientation towards truth in them.

Naturally, all of this has some bearing on the comments to the last post, but I will leave most of that to the reader’s consideration. I will remark, however, that things like “he is simply ignorant of basic physics because he is a computer scientist, not a physicist,” or “Your last question tells me that you do not know much physics,” or that it is important not to “ignore the verdict of the reviewers and editors of a respected physics journal,” might be important clues for the ordinary fellow.

And Fire by Fire

Superstitious Nonsense asks about the last post:

So the answer here is that -some- of the form is present in the mind, but always an insufficient amount or accuracy that the knowledge will not be “physical”? You seem to be implying the part of the form that involves us in the self-reference paradox is precisely the part of the form that gives objects their separate, “physical” character. Is this fair? Certainly, knowing progressively more about an object does not imply the mental copy is becoming closer and closer to having a discrete physicality.

I’m not sure this is the best way to think about it. The self-reference paradox arises because we are trying to copy ourselves into ourselves, and thus we are adding something into ourselves, making the copy incomplete. The problem is not that there is some particular “part of the form” that we cannot copy; rather, it is in principle impossible to copy the whole perfectly, which is different from saying that there is some specific “part” that cannot be copied.

Consider what happens when we make “non-physical” copies of something without involving a mind. Consider the image of a gold coin. There are certain relationships common to the image and to a gold coin in the physical world. So you could say we have a physical gold coin, and a non-physical one.

But wait. If the image of the coin is on paper, isn’t that a physical object? Or if the image is on your computer screen, isn’t your screen a physical object? And the image is just the colors on the screen, which are apparently just as “physical” (or non-physical) as the color of the actual coin. So why would we say that “this is not a physical coin?”

Again, as in the last post, the obvious answer is that the image is not made out of gold, while the physical coin is. But why not? Is it that the image is not accurate enough? If we made it more accurate, would it be made out of gold, or become closer to being made out of gold? Obviously not. This is like noting that a mental copy does not become closer and closer to being a physical one.

In a sense it is true that the reason the image of the coin is not physical is that it is not accurate enough. But that is because it cannot be accurate enough: the fact that it is an image positively excludes the copying of certain relationships. Some aspects can be copied, but others cannot be copied at all, as long as it is an image. On the other hand, you can look at this from the opposite direction: if you did copy those aspects, the image would no longer be an image, but a physical coin.

As a similar example, consider the copying of a colored scene into black and white. We can copy some aspects of the scene by using various shades of gray, but we cannot copy every aspect of the scene. There are simply not enough differences in a black and white image to reflect every aspect of a colored scene. The black and white image, as you make it more accurate, does not become closer to being colored, but this is simply because there are aspects of the colored scene that you never copy. If you do insist on copying those aspects, you will indeed make the black and white image into a colored image, and thus it will no longer be black and white.
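The point can be illustrated with a small sketch (the pixel values are invented): the usual way of producing a grayscale copy maps each color to a single luminance value, and visibly different colors can collapse onto the same shade of gray, so no amount of added precision in the grays restores distinctions that were never copied in the first place.

```python
# A sketch of the black-and-white analogy: two visibly different colors can map
# to (almost) the very same shade of gray, so the gray copy, however precise,
# never contains the information needed to recover the colors.
def luminance(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b   # a standard luma approximation

pure_red   = (200, 0, 0)   # invented pixel values,
dull_green = (0, 102, 0)   # chosen so the grays nearly coincide

print(luminance(pure_red))    # ~59.8
print(luminance(dull_green))  # ~59.9
# The two grays are practically indistinguishable, while the colors are not:
# those aspects of the scene were simply never copied.
```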

The situation becomes significantly more complicated when we talk about a mind. In one way, there is an important similarity. When we say that the copy in the mind is “not physical,” that simply means that it is a copy in the mind, just as when we say that the image of the coin is not physical, it means that it is an image, made out of the stuff that images are made of. But just as the image is physical anyway, in another sense, so it is perfectly possible that the mind is physical in a similar sense. However, this is where things begin to become confusing.

Elsewhere, I discussed Aristotle’s argument that the mind is immaterial. Considering the cases above, we could put his argument in this way: the human brain is a limited physical object. So as long as the brain remains a brain, there are simply not enough potential differences in it to model all possible differences in the world, just as you cannot completely model a colored scene using black and white. But anything at all can be understood. Therefore we cannot be understanding by using the brain.

I have claimed myself that anything that can be, can be understood. But this needs to be understood generically, rather than as claiming that it is possible to understand reality in every detail simultaneously. The self-reference paradox shows that it is impossible in principle for a knower that copies forms into itself to understand itself in every aspect at once. But even apart from this, it is very obvious that we as human beings cannot understand every aspect of reality at once. This does not even need to be argued: you cannot even keep everything in mind at once, let alone understand every detail of everything. This directly suggests a problem with Aristotle’s argument: if being able to know all things suggests that the mind is immaterial, the obvious fact that we cannot know all things suggests that it is not.

Nonetheless, let us see what happens if we advance the argument on Aristotle’s behalf. Admittedly, we cannot understand everything at once. But in the case of the colored scene, there are aspects that cannot be copied at all into the black and white copy. And in the case of the physical coin, there are aspects that cannot be copied at all into the image. So if we are copying things into the brain, doesn’t that mean that there should be aspects of reality that cannot be copied at all into the mind? But this is false, since it would not only mean that we can’t understand everything, but it would also mean that there would be things that we cannot think about at all, and if it is so, then it is not so, because in that case we are right now talking about things that we supposedly cannot talk about.

Copying into the mind is certainly different from copying a colored scene into black and white, or copying a coin into a picture, and this does get at one of the differences. But the difference here is that the method of copying in the case of the mind is flexible, while the method of copying in the case of the pictures is rigid. In other words, in the case of the pictures we have a pre-defined method of copying that, from the beginning, only allows certain aspects to be copied. In the case of the mind, we determine the method differently from case to case, depending on our particular situation and the thing being copied. The result is that there is no particular aspect of things that cannot be copied, but you cannot copy every aspect at once.

In answer to the original question, then, the reason that the “mental copy” always remains mental is that you never violate the constraints of the mind, just as a black and white copy never violates the constraints of being black and white. But if you did violate the constraints of the black and white copy by copying every aspect of the scene, the image would become colored. And similarly, if you did violate the constraints of the mind in order to copy every aspect of reality, your mind would cease to be, and it would instead become the thing itself. But there is no particular aspect of “physicality” that you fail to copy: rather, you just ensure that one way or another you do not violate the constraints of the mind that you have.

Unfortunately, the explanation here for why the mind can copy any particular aspect of reality, although not every aspect at once, is rather vague. Perhaps a clearer explanation is possible? In fact, someone could use the vagueness to argue for Aristotle’s position and against mine. Perhaps my account is vague because it is wrong, and there is actually no way for a physical object to receive copied forms in this way.

Earth By Earth

In an earlier post I quoted Empedocles:

For ’tis by Earth we see Earth, by Water Water,

By Ether Ether divine, by Fire destructive Fire,

By Love Love, and Hate by cruel Hate.

I argued in that post that the mind does have something in common with what is known, and that this common thing is the form of the thing known. However, I took for granted that Empedocles is mistaken in assuming that the thing itself must be in common in order to be known.

I did not directly say why he is mistaken. If form makes a thing what it is, and the form of a thing known is in the mind, why does the mind not become that thing? If the form of earth is in your mind, then why is your mind not literally earth?

We will naturally be inclined to say that the form in your mind is apart from its proper matter, and that you need both form and matter to make a thing. And there is nothing wrong with this answer, as far as it goes, but it seems insufficient. Suppose you have a gold coin: what is its matter? The gold coin is presumably made out of atoms of gold, and since these atoms are not in your head, you do not see gold by gold. The problem is that atoms of gold also have some form, since this is just to say there is an answer when we ask, “What is this?”, and this will be true of anything whatever that you call matter. And there is nothing to prevent you from knowing that thing as well. There is nothing to prevent you from knowing the nature of gold atoms. And thus it seems that the matter will be present, and thus there should be actual gold in your mind.

Perhaps an Aristotelian will suggest that it is prime matter that is missing. But this answer will not work, because humans have this sort of matter in common with other things. And in any case, nothing is meant by “matter” in this sense except the ability to have the form. And since the knower can have the form, they have the ability to have the form, and thus matter. So nothing is missing, and the thing known should be literally in the knower.

Thus it appears that we have a reductio. Either my account of knowledge is mistaken, or earth should actually be known by earth, which it obviously is not.

The conclusion is only apparent, however. We can resolve it by going back to what I said about form in that post and the following one. Form is a network of relationships apt to make something one. But being one not only includes internal unity, but also separation from other things. For example, suppose we now have three gold coins, instead of one: each coin is one coin, and this depends on its parts being together, rather than in a loose heap of gold dust. But the fact that the coins are three depends on their separation from one another, and thus also the fact that each coin is “one” depends on that separation.

In other words, the form of a thing includes not only internal relationships, but also external relationships. This implies that to know the nature of a thing, one must know its external relationships. And to know a thing perfectly would require knowing both its internal and external relationships perfectly.

Now one of the things to which it is related is the very one who knows it. Thus, if the knower is to know the thing perfectly, they must perfectly understand the relationships between themselves and the thing. But this is not possible, for reasons explained in the post on self-reference. The person who attempts to know something perfectly is in the situation of someone attempting to draw a picture of themselves drawing a picture: to make a perfect copy of the gold coin, it is necessary to copy its context, which includes the knower. But this cannot be done; therefore perfect knowledge of the coin is impossible.

A different way to state the same analysis: “perfect copy” is a contradiction in terms, because such perfection would imply identity with the original, and thus not being a copy at all. In other words, perfect knowledge of a thing is impossible because perfect knowledge would imply, as in the argument of Empedocles, that one’s knowledge would literally be the thing known, and thus not knowledge at all.

Tautologies Not Trivial

In mathematics and logic, one sometimes speaks of a “trivial truth” or “trivial theorem”, referring to a tautology. Thus for example in this Quora question, Daniil Kozhemiachenko gives this example:

The fact that all groups of order 2 are isomorphic to one another and commutative entails that there are no non-Abelian groups of order 2.

This statement is a tautology because “Abelian group” here just means one that is commutative: the statement is like the customary example of asserting that “all bachelors are unmarried.”
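For what it is worth, the mathematical fact itself is nearly as forced as the tautology: writing out the only possible multiplication table for a two-element group (a standard verification, reproduced here as a sketch) shows why every group of order 2 must be commutative and isomorphic to every other.

```latex
% Sketch: why any group G = {e, a} of order 2 is abelian and unique up to
% isomorphism. The only product not fixed by the group axioms is a*a.
% If a*a = a, multiplying both sides by a^{-1} gives a = e, a contradiction;
% hence a*a = e, and the whole multiplication table is forced:
\[
\begin{array}{c|cc}
\cdot & e & a \\ \hline
e     & e & a \\
a     & a & e
\end{array}
\]
% The table is symmetric about the diagonal, so the group is commutative, and
% any two groups of order 2 are matched by sending identity to identity and
% the non-identity element to the non-identity element.
```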

Some extend this usage of “trivial” to refer to all statements that are true in virtue of the meaning of the terms, sometimes called “analytic.” The effect of this is to say that all statements that are logically necessary are trivial truths. An example of this usage can be seen in this paper by Carin Robinson. Robinson says at the end of the summary:

Firstly, I do not ask us to abandon any of the linguistic practises discussed; merely to adopt the correct attitude towards them. For instance, where we use the laws of logic, let us remember that there are no known/knowable facts about logic. These laws are therefore, to the best of our knowledge, conventions not dissimilar to the rules of a game. And, secondly, once we pass sentence on knowing, a priori, anything but trivial truths we shall have at our disposal the sharpest of philosophical tools. A tool which can only proffer a better brand of empiricism.

While the word “trivial” does have a corresponding Latin form that means ordinary or commonplace, the English word seems to be taken mainly from the “trivium” of grammar, rhetoric, and logic. This would seem to make some sense of calling logical necessities “trivial,” in the sense that they pertain to logic. Still, even here something is missing, since Robinson wants to include the truths of mathematics as trivial, and classically these did not pertain to the aforesaid trivium.

Nonetheless, overall Robinson’s intention, and presumably that of others who speak this way, is to suggest that such things are trivial in the English sense of “unimportant.” That is, they may be important tools, but they are not important for understanding. This is clear at least in our example: Robinson calls them trivial because “there are no known/knowable facts about logic.” Logical necessities tell us nothing about reality, and therefore they provide us with no knowledge. They are true by the meaning of the words, and therefore they cannot be true by reason of facts about reality.

Things that are logically necessary are not trivial in this sense. They are important, both in a practical way and directly for understanding the world.

Consider the failure of the Mars Climate Orbiter:

On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 km (140 mi) on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 km (93 to 106 mi). Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Post-failure calculations showed that the spacecraft was on a trajectory that would have taken the orbiter within 57 kilometers of the surface, where the spacecraft likely skipped violently on the uppermost atmosphere and was either destroyed in the atmosphere or re-entered heliocentric space.[1]

The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton seconds – to update the predicted position of the spacecraft.

It is presumably an analytic truth that the units defined in one way are unequal to the units defined in the other. But it was ignoring this analytic truth that was the primary cause of the space probe’s failure. So it is evident that analytic truths can be extremely important for practical purposes.
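To make the point concrete, here is a minimal numeric sketch. Only the conversion factor is a physical fact; the impulse value and the function names are invented for illustration. Software that reports pound-force seconds to a consumer assuming newton seconds silently understates every impulse by a factor of about 4.45.

```python
# A sketch of the unit mismatch behind the Mars Climate Orbiter loss.
# The conversion factor is a fact; the example impulse is invented.
LBF_S_TO_N_S = 4.4482216152605   # newton-seconds per pound-force second

def report_impulse_lbf_s(impulse_newton_seconds):
    """Hypothetical ground software that (contrary to spec) reports in lbf*s."""
    return impulse_newton_seconds / LBF_S_TO_N_S

def trajectory_update(impulse):
    """Hypothetical consumer that assumes SI units, as the specification required."""
    return impulse  # interpreted as newton-seconds, whatever it really was

actual = 100.0                            # N*s, invented thruster firing
reported = report_impulse_lbf_s(actual)   # ~22.48, but unitless on the wire
assumed = trajectory_update(reported)     # silently treated as 22.48 N*s
print(f"actual {actual} N*s, used {assumed:.2f} N*s "
      f"(off by a factor of {actual / assumed:.2f})")
```

Nothing in the arithmetic is subtle; what failed was precisely the “trivial” analytic step of keeping the meaning of the numbers attached to the numbers.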

Such truths can also be important for understanding reality. In fact, they are typically more important for understanding than other truths. The argument against this is that if something is necessary in virtue of the meaning of the words, it cannot be telling us something about reality. But this argument is wrong for one simple reason: words and meaning themselves are both elements of reality, and so they do tell us something about reality, even when the truth is fully determinate given the meaning.

If one accepts the mistaken argument, in fact, sometimes one is led even further. Logically necessary truths cannot tell us anything important for understanding reality, since they are simply facts about the meaning of words. On the other hand, anything which is not logically necessary is in some sense accidental: it might have been otherwise. But accidental things that might have been otherwise cannot help us to understand reality in any deep way: it tells us nothing deep about reality to note that there is a tree outside my window at this moment, when this merely happens to be the case, and could easily have been otherwise. Therefore, since neither logically necessary things, nor logically contingent things, can help us to understand reality in any deep or important way, such understanding must be impossible.

It is fairly rare to make such an argument explicitly, but it is a common implication of many arguments that are actually made or suggested, or it at least influences the way people feel about arguments and understanding.  For example, consider this comment on an earlier post. Timocrates suggests that (1) if you have a first cause, it would have to be a brute fact, since it doesn’t have any other cause, and (2) describing reality can’t tell us any reasons but is “simply another description of how things are.” The suggestion behind these objections is that the very idea of understanding is incoherent. As I said there in response, it is true that every true statement is in some sense “just a description of how things are,” but that was what a true statement was meant to be in any case. It surely was not meant to be a description of how things are not.

That “analytic” or “tautologous” statements can indeed provide a non-trivial understanding of reality can also easily be seen by example. Some examples from this blog:

Good and being. The convertibility of being and goodness is “analytic,” in the sense that carefully thinking about the meaning of desire and the good reveals that a universe where existence as such was bad, or even failed to be good, is logically impossible. In particular, it would require a universe where there is no tendency to exist, and this is impossible given that it is posited that something exists.

Natural selection. One of the most important elements of Darwin’s theory of evolution is the following logically necessary statement: the things that have survived are more likely to be the things that were more likely to survive, and less likely to be the things that were less likely to survive. (A minimal simulation of this point appears after these examples.)

Limits of discursive knowledge. Knowledge that uses distinct thoughts and concepts is necessarily limited by issues relating to self-reference. It is clear that this is both logically necessary, and tells us important things about our understanding and its limits.

Knowledge and being. Kant rightly recognized a sense in which it is logically impossible to “know things as they are in themselves,” as explained in this post. But as I said elsewhere, the logically impossible assertion that knowledge demands an identity between the mode of knowing and the mode of being is the basis for virtually every sort of philosophical error. So a grasp on the opposite “tautology” is extremely useful for understanding.
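As promised above, here is a minimal simulation of the point about natural selection. The survival probabilities are invented and no biology is assumed; whatever numbers one chooses, the survivors come out, on average, as the things that were more likely to survive, because that is what the statement means.

```python
# A sketch of the logically necessary statement about selection: the survivors
# are, on average, things that were more likely to survive.
import random

random.seed(0)
population = [random.uniform(0.0, 1.0) for _ in range(100_000)]  # invented survival chances
survivors = [p for p in population if random.random() < p]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean survival chance, whole population: {mean(population):.3f}")  # ~0.50
print(f"mean survival chance, among survivors:  {mean(survivors):.3f}")   # ~0.67
```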

 

Necessary Connection

In Chapter 7 of his Enquiry Concerning Human Understanding, David Hume says about the idea of “necessary connection”:

We have looked at every possible source for an idea of power or necessary connection, and have found nothing. However hard we look at an isolated physical episode, it seems, we can never discover anything but one event following another; we never find any force or power by which the cause operates, or any connection between it and its supposed effect. The same holds for the influence of mind on body: the mind wills, and then the body moves, and we observe both events; but we don’t observe– and can’t even conceive– the tie that binds the volition to the motion, i.e. the energy by which the mind causes the body to move. And the power of the will over its own faculties and ideas– i.e. over the mind, as distinct from the body– is no more comprehensible. Summing up, then: throughout the whole of nature there seems not to be a single instance of connection that is conceivable by us. All events seem to be entirely loose and separate. One event follows another, but we never can observe any tie between them. They seem associated, but never connected. And as we can have no idea of anything that never appeared as an impression to our outward sense or inward feeling, we are forced to conclude that we have no idea of ‘connection’ or ‘power’ at all, and that those words– as used in philosophical reasonings or in common life– have absolutely no meaning.

This is not Hume’s final word on the matter, as we will see below, so this has to be taken with a grain of salt, even as a representation of his opinion. Nonetheless, consider this caricature of what he just said:

We have looked at every possible source for an idea of mduvvqi or pdnfhvdkdddd, and have found nothing. However hard we look at an isolated physical episode, it seems, we can never discover anything but events that can be described by perfectly ordinary words; we never find any mduvvqi involved, nor any pdnfhvdkdddd.

We could take this to be making the point that “mduvvqi” and “pdnfhvdkdddd” are not words. Other than that, however, the paragraph itself is meaningless, precisely because those “words” are meaningless. It certainly does not make any deep (or shallow for that matter) metaphysical or physical point, nor any special point about the human mind. But Hume’s text is different, and the difference in question is a warning sign of Kantian confusion. If those words had “absolutely no meaning,” in fact, there would be no difference between Hume’s passage and our caricature. Those words are not meaningless, but meaningful, and Hume is even analyzing their meaning in order to supposedly determine that the words are meaningless.

Hume’s analysis in fact proceeds more or less in the following way. We know what it means to say that something is necessary, and it is not the same as saying that the thing always happens. Every human being we have ever seen was less than 20 feet tall. But is it necessary that human beings be less than 20 feet tall? This is a different question, and while we can easily experience someone’s being less than 20 feet tall, it is very difficult to see how we could possibly experience the necessity of this fact, if it is necessary. Hume concludes: we cannot possibly experience the necessity of it. Therefore we can have no idea of such necessity.

But Hume has just contradicted himself: it was precisely by understanding the concept of necessity that he was able to see the difficulty in the idea of experiencing necessity.

Nonetheless, as I said, this is not his final conclusion. A little later he gives a more nuanced account:

The source of this idea of a necessary connection among events seems to be a number of similar instances of the regular pairing of events of these two kinds; and the idea cannot be prompted by any one of these instances on its own, however comprehensively we examine it. But what can a number of instances contain that is different from any single instance that is supposed to be exactly like them? Only that when the mind experiences many similar instances, it acquires a habit of expectation: the repetition of the pattern affects it in such a way that when it observes an event of one of the two kinds it expects an event of the other kind to follow. So the feeling or impression from which we derive our idea of power or necessary connection is a feeling of connection in the mind– a feeling that accompanies the imagination’s habitual move from observing one event to expecting another of the kind that usually follows it. That’s all there is to it. Study the topic from all angles; you will never find any other origin for that idea.

Before we say more, we should concede that this is far more sensible than the claim that the idea of necessity “has absolutely no meaning.” Hume is now conceding that it does have meaning, but claiming that the meaning is about us, not about the thing. When we see someone knock a glass off a table, we perhaps feel a certainty that it will fall and hit the floor. Experiencing that feeling of certainty, he says, is the source of the idea of “necessity.” This is not an unreasonable hypothesis.

However, Hume is also implicitly making a metaphysical argument here which is somewhat less sensible. Our feelings of certainty and uncertainty are subjective qualities of our minds, he suggests, not objective features of the things. Therefore necessity as an objective feature does not and cannot exist. This is not unrelated to his mistaken claim that we cannot know that the future will be similar to the past, even with probability.

What is the correct account here? In fact we already know, from the beginning of the conversation, that “necessary” and “possible” are meaningful words. We also know that in fact we use them to describe objective features of the world. But which features? Attempting to answer this question is where Hume’s approach is pretty sensible. Hume is not mistaken that all of our knowledge is from experience, and ultimately from the senses. He seems to identify experience with sense experience too simplistically, but he is not mistaken that all experience is at least somewhat similar to sense experience; feeling sure that two and two make four is not utterly unlike seeing something red. We want to say that there is something in common there, “something it is like,” to experience one or the other. But if this is the case, it would be reasonable to extend what we said about the senses to intellectual experiences. “The way red looks” cannot, as such, be an objective feature of a thing, but a thing can be objectively red, in such a way that “being red,” together with the nature of the senses, explains why a thing looks red. In a similar way, certainty and uncertainty, insofar as they are ways we experience the world, cannot be objective features of the world as such. Nonetheless, something can be objectively necessary or uncertain, in such a way that “being necessary” or otherwise, together with the nature of our minds, explains why it seems certain or uncertain to us.

There will be a similarity, however. The true nature of red might be quite strange in comparison to the experience of seeing red, as for example it might consist of surface reflectance properties. In a similar way, the true nature of necessity, once it is explained, might be quite strange to us compared to the experience of being certain or uncertain. But that it can be explained is quite certain itself, since the opposite claim would fall into Hume’s original absurdity. There are no hidden essences.

Motivated Reasoning and the Kantian Dichotomy

At the beginning of the last post, I distinguished between error caused by confusing the mode of knowledge and the mode of being, and error caused by non-truth related motives. But by the end of the post, it occurred to me that there might be more of a relationship between the two than one might think. Not that we can reduce all error to one or the other, of course. It seems pretty clear that the errors involved in the Kantian dichotomy are somewhat “natural,” so to speak, and often the result of honest confusion. This seems different from motivated reasoning. Similarly, there are certainly other causes of error. If someone makes an arithmetical error in their reasoning, which is a common occurrence, this is not necessarily caused by either confusion about the mode of knowing or by some other motive. It is just a mistake, perhaps caused by a failure of the imagination.

Nonetheless, consider the examples chosen in the last post. Scott Sumner is the anti-realist, while James Larson is the realist. And if we are looking only at that disagreement, and not taking into account confusion about the mode of knowing, Larson is right, and Sumner is wrong. But if we consider their opinions on other matters, Sumner is basically sane and normal, while Larson is basically crazy. Consider for example Larson’s attitude to science:

In considering what might be called the “collective thinking” of the entire Western world (and beyond), there is no position one can take which elicits more universal disdain than that of being “anti-science.” It immediately calls forth stereotyped images of backwardness, anti-progress, rigidity, and just plain stupidity.

There are of course other epithets that are accompanied by much more vehement condemnations: terms such as anti-semite, racist, etc. But we are not here concerned with such individual prejudices and passions, but rather with the scientific Weltanschauung (World-view) which now dominates our thinking, and the rejection of which is almost unthinkable to modern man.

Integral to this world-view is the belief that there is a world of “Science” containing all knowledge of the depths of the physical world, that the human mind has the potential to fully encompass this knowledge, and that it is only in the “use” of this knowledge that man sins.

It is my contention, on the other hand, that the scientific weltanschauung is integrally constituted by a dominant hubris, which has profoundly altered human consciousness, and constitutes a war against both God and man.

Stereotyped or not, the labels Larson complains about can be applied to his position with a high degree of accuracy. He goes on to criticize not only the conclusions of science but also the very idea of engaging in a study of the world in a scientific manner:

It is a kind of dogma of modern life that man has the inalienable right, and even responsibility, to the pursuit of unending growth in all the spheres of his secular activity: economic, political (New World Order), scientific knowledge, technological development, etc. Such “unending quest for knowledge and growth” would almost seem to constitute modern man’s definition of his most fundamental dignity. This is fully in accord with the dominant forms of modern philosophy which define him in terms of evolutionary becoming rather than created being.

Such is not the Biblical view, which rather sees such pursuits as reeking disaster to both individual and society, and to man’s relationship to Truth and God. The Biblical perspective begins with Original Sin which, according to St. Thomas, was constituted as an intellectual pride by which Adam and Eve sought an intellectual excellence of knowledge independently of God. In the situation of Original Sin, this is described in terms of “knowledge of good and evil.” It is obvious in the light of further Old Testament scriptures, however, that this disorder also extends to the “seeking after an excellence” which would presume to penetrate to the depth of the nature of created things. Thus, we have the following scriptures:

“Nothing may be taken away, nor added, neither is it possible to find out the glorious works of God: When a man hath done, then shall he begin: And when he leaveth off, he shall be at a loss.” (Ecclus 28:5-6).

“And I understood that man can find no reason of all those works of God that are done under the sun: and the more he shall labor to seek, so much the less shall he find: yea, though the wise man shall say, that he knoweth it, he shall not be able to find it.” (Eccl 8:17).

“For the works of the Highest only are wonderful, and his works are glorious, secret, and hidden.” (Ecclus 11:4).

“For great is the power of God alone, and he is honoured by the humble. Seek not the things that are too high for thee, and search not into things above thy ability: but the things that God hath commanded thee, think on them always, and in many of his works be not curious. For it is not necessary for thee to see with thy eyes those things that are hid. In unnecessary matters be not over curious, and in many of his works thou shalt not be inquisitive. For many things are shewn to thee above the understanding of men. And the suspicion of them hath deceived man, and hath detained their minds in vanity.” (Ecclus 3:21-26).

These scripture passages proscribe any effort by man which attempts to penetrate (or even be inquisitive and curious about) the hidden depths of God’s “works.” It is evident that in these scriptures the word “works” refers to the physical world itself – to all those “works of God that are done under the sun.” There is no allegorical interpretation possible here. We are simply faced with a choice between considering these teachings as divinely revealed truth, or merely the product of primitive and ignorant Old Testament human minds.

It is not merely that Larson rejects the conclusions of science, which he admittedly does. He also condemns the very idea of “let’s go find out how the world works” as a wicked and corrupting curiosity. I say, without further ado, that this is insane.

But of course it is not insane in the sense that Larson should be committed to a mental institution, even though I would expect that he has some rather extreme personality characteristics. Rather, it is extremely obvious that Larson is engaging in highly motivated reasoning. On the other hand, most of Scott Sumner’s opinions are relatively ordinary, and while some of his opinions are no doubt supported by other human motives besides truth, we do not find him holding anything in such a highly motivated way.

Thus we have this situation: the one who upholds common sense (with regard to realism) holds crazy motivated opinions about all sorts of other matters, while the one who rejects common sense (with regard to realism) holds sane non-motivated opinions about all sorts of other matters. Perhaps this is accidental? If we consider other cases, will we find that this is an exceptional case, and that most of the time the opposite happens?

Anti-realism in particular, precisely because it is so strongly opposed to common sense, is rare in absolute terms, and thus we can expect to find that most people are realist regardless of their other opinions. But I do not think that we will find that the opposite is the case overall. On the contrary, I think we will find that people who embrace the Kantian side of such a dichotomy will frequently tend to be people who have more accurate opinions about detailed matters, and that people who embrace the anti-Kantian side of such a dichotomy will frequently tend to be people who have less accurate opinions about detailed matters, despite the fact that the anti-Kantian side is right about the common sense issue at hand.

Consider the dichotomy in general. If we analyze it purely in terms of concern for truth, the anti-Kantian is interested in upholding the truth of common sense, while the Kantian is interested in upholding the truth about the relationship between the mind and the world. From the beginning, the anti-Kantian wishes to maintain a general well-known truth, while the Kantian wants to maintain a relatively complex detailed truth about the relationship between knowledge and the world. The Kantian thus has more of an interest in details than the anti-Kantian, while the anti-Kantian is more concerned about the general truth.

What happens when we bring in other motivations? People begin to trade away truth. To the degree that they are interested in other things, they will have less time and energy to think about what is true. And since knowledge advances from general to particular, it would not be surprising if people who are less interested in truth pay less attention to details, and bother themselves mainly about general issues. On the other hand, if people are highly interested in truth and not much interested in other things, they will dedicate a lot of time and attention to working out things in detail. Of course, there are also other reasons why someone might want to work things out in detail. For example, as I discussed a few years ago, Francis Bacon says in effect: the philosophers do not care about truth. Rather their system is “useful” for certain goals:

We make no attempt to disturb the system of philosophy that now prevails, or any other which may or will exist, either more correct or more complete. For we deny not that the received system of philosophy, and others of a similar nature, encourage discussion, embellish harangues, are employed, and are of service in the duties of the professor, and the affairs of civil life. Nay, we openly express and declare that the philosophy we offer will not be very useful in such respects. It is not obvious, nor to be understood in a cursory view, nor does it flatter the mind in its preconceived notions, nor will it descend to the level of the generality of mankind unless by its advantages and effects.

Meanwhile, Bacon does not himself claim to be interested in truth. He desires instead “advantages and effects,” namely accomplishments in the physical world, such as changing lead into gold. But if you want to make complex changes in the physical world, you need to know the world in detail. The philosophers, therefore, have no need of detailed knowledge, because they are interested not in truth but in disputation and status, while Bacon does have a need of detailed knowledge, even though he is likewise uninterested in truth, because he is interested in changing the world.

In reality, there will exist both philosophers and scientists who mainly have these non-truth-related concerns, and others who are mainly concerned about the truth. But we can expect that, overall, caring more about truth will go together with caring more about details as well, simply because such people will devote more time and energy to working things out in detail.

On this account, Scott Sumner’s anti-realism is an honest mistake, made simply because people tend to find the Kantian error persuasive when they try to think about how knowledge works in detail. Meanwhile, James Larson’s absurd opinions about science are not caused by any sort of honesty, but by his ulterior motives. I noted in the last post that in any such Kantian dichotomy, the position upholding common sense is truer. And this is so, but the implication of the present considerations is that in practice we will often find the person upholding common sense also maintaining positions which are much wronger in their details, because they will frequently care less about the truth overall.

I intended to give a number of examples, since this point is hardly proven by the single instance of Scott Sumner and James Larson. But since I am running short on time, at least for now I will simply point the reader in the right direction. Consider the Catholic discussion of modernism. Pius X said that the modernists “attempt to ascribe to a love of truth that which is in reality the result of pride and obstinacy,” but as we saw there, the modernists cared about the truth of certain details that the Church preferred to ignore or even to deny. The modernists were not mistaken to ascribe this to a love of truth. As I noted in the same post, Pius X suggests that a mistaken epistemology is responsible for the opinions of the modernists:

6. We begin, then, with the philosopher. Modernists place the foundation of religious philosophy in that doctrine which is usually called Agnosticism. According to this teaching human reason is confined entirely within the field of phenomena, that is to say, to things that are perceptible to the senses, and in the manner in which they are perceptible; it has no right and no power to transgress these limits. Hence it is incapable of lifting itself up to God, and of recognising His existence, even by means of visible things. From this it is inferred that God can never be the direct object of science, and that, as regards history, He must not be considered as an historical subject. Given these premises, all will readily perceive what becomes of Natural Theology, of the motives of credibility, of external revelation. The Modernists simply make away with them altogether; they include them in Intellectualism, which they call a ridiculous and long ago defunct system. Nor does the fact that the Church has formally condemned these portentous errors exercise the slightest restraint upon them.

As I noted there, epistemology is not the foundation for anyone’s opinions, and was not the foundation for the opinions of the modernists. But on the other hand, Pius X may be seeing something true here. The “agnosticism” he describes here is basically the claim that we can know only appearances, and not the thing in itself. And I would find it unsurprising if Pius X is right that there was a general tendency among the modernists to accept a Kantian epistemology. But the reason for this would be analogous to the reasons that Scott Sumner is an anti-realist: that is, it is basically an honest mistake about knowledge, while in contrast, the condemnation of questioning the authenticity of the Vulgate text of 1 John 5:7 was not honest at all.

Generalized Kantian Dichotomy

At the end of the last post I suggested that the confusion between the mode of knowledge and the mode of being might be a primary, or rather the primary, cause of philosophical error, with the exception of motivated error.

If we consider the “Kantian” and “anti-Kantian” errors in the last post, we can give a somewhat general account of how this happens. The two errors might appear to be mutually exclusive and exhaustive, but in fact they constitute a false dichotomy. Consider the structure of the disagreement:

A. Common sense takes note of something: in this case, that it is possible to know things. Knowledge is real.

B. The Kantian points out that the mode of knowing and the mode of being are not the same, and concludes that common sense is wrong. Knowledge is apparent, but not real.

C. The anti-Kantian, determined to uphold common sense, applies modus tollens. We know that knowledge is real: so the mode of knowing and the mode of being must be the same.

Each party to the dispute says something true (that knowledge is real, that the mode of being and the mode of knowing are not the same), and something false (that knowledge is not real, that the mode of being and the mode of knowing are the same.)

A vast number of philosophical disputes can be analyzed in a very similar manner. Thus we have the general structure:

A. Common sense points out that some item X is real.

B. The Kantian points out that the mode of knowing and the mode of being are not the same, and concludes that common sense is wrong. X is apparent, but not real.

C. The anti-Kantian, determined to uphold common sense, applies modus tollens. We know that X is real: so the mode of knowing and the mode of being must be the same.

Once again, in this general structure, each party to the dispute would say something true (that X is real, that the mode of knowing and being are not the same), and something false (the denial of one of these two.) As an example, we can apply this structure to our discussion of reductionism and anti-reductionism. The reductionist, in this case, is the Kantian (in our present structure), and the anti-reductionist the anti-Kantian. The very same person might well argue both sides about different things: thus Sean Carroll might be anti-reductionist about fundamental particles and reductionist about humans, while Alexander Pruss is anti-reductionist about humans and reductionist about artifacts. But whether we are discussing fundamental particles, humans, or artifacts, both sides are wrong. Both say something true, but also something false.

Several cautionary notes are needed in this regard.

First, both sides will frequently realize that they are saying something strongly counter-intuitive, and attempt to remedy this by saying something along the lines of “I don’t mean to say the thing that is false.” But that is not the point. I do not say that you intend to say the thing that is false. I say that you give an account which logically implies the thing that is false, and that the only way you can avoid this implication is by rejecting the false dichotomy completely, namely by accepting both the reality of X, and the distinction of the modes of knowing and being. Thus for example Sean Carroll does not distinguish his poetic naturalism from eliminativism in terms of what it says to be true, but only in terms of what it says to be useful. But eliminativism says that it is false that there are ships: therefore Carroll’s poetic naturalism also says that it is false that there are ships, whether he intends to say this or not, and whether or not he finds it useful to say that there are.

Second, this outline uses the terminology of “Kantian” and “anti-Kantian,” but in fact the two tend to blur into one another, because the mistakes are very similar: both imply that the unknown and the known, as such, are the same. Thus for example in my post on reductionism I said that there was a Kantian error in the anti-reductionist position: but in the present schema, the error is anti-Kantian. In part, this happened because I did not make these distinctions clearly enough myself in the earlier post. But it is also because the errors themselves uphold very similar contradictions. Thus the anti-reductionist thinks somewhat along these lines:

We know that a human being is one thing. We know it as a unity, and therefore it has a mode of being as a unity. But whenever anyone tries to explain the idea of a human being, they end up saying many things about it. So our explanation of a human being cannot be the true explanation. Since the mode of knowing and the mode of being must be the same, a true explanation of a human being would have to be absolutely one. We have no explanation like that, so it must be that a human being has an essence which is currently hidden from us.

Note that this reasoning proceeds in an anti-Kantian manner (the mode of being and the mode of knowing must be the same), but the conclusion is effectively Kantian: possible or not, we actually have no knowledge of human beings as they are.

As I said in the post on reductionism, the parties to the dispute will in general say that an account like mine is anti-realist: realism, according to both sides, requires that one accept one side of the dichotomy and reject the other. But I respond that the very dispute between realism and anti-realism, as it is often understood, can itself be an example of the false dichotomy. Thus:

A. Common sense notes that the things we normally think and talk about are real, and that the things we normally say about them are true.

B. The Kantian (the anti-realist) points out that the mode of knowing and the mode of being are not the same, and concludes that common sense is wrong. The things we normally talk about appear to be real, but they are not.

C. The anti-Kantian (the realist) applies modus tollens. We know these things are real: so the mode of knowledge and the mode of being must be the same after all.

As usual, both say something true, and both say something false. Consider Scott Sumner, who tends to take an anti-realist position, as for example here:

Even worse, I propose doing so for “postmodern” reasons. I will start by denying the reality of inflation, and then argue for some substitute concepts that are far more useful. First a bit of philosophy. There is a lively debate about whether there is a meaningful distinction between our perception of reality, and actual reality. I had a long debate with a philosopher about whether Newton’s laws of motion were a part of reality, or merely a human construct. I took the latter view, arguing that if humans had never existed then Newton’s laws would have never existed. He argued they are objectively true. I responded that Einstein showed that they were false. He responded that they were objectively true in the limiting case. I argued that even that might be changed by future developments in our understanding of reality at the quantum level. He argued that they’d still be objectively approximately true, etc, etc.

On the one hand, a lot of what Scott says here is right. On the other hand, he mistakenly believes that it follows that common sense is mistaken in matters in which it is not, in fact, mistaken. The reasoning is basically the reasoning of the Kantian: one notices that we have a specific mode of knowing which is not the mode of being of things, and concludes that knowledge is impossible, or in Scott’s terminology, “objective truth” does not exist, at least as distinct from personal opinion. He has a more extensive discussion of this here:

I don’t see it as relativism at all. I don’t see it as the world of fuzzy post-modern philosophers attacking the virtuous hard sciences. It’s important not to get confused by semantics, and focus on what’s really at stake. In my view, Rorty’s views are most easily seen by considering his denial of the distinction between objective truth and subjective belief. In order to see why he did this, consider Rorty’s claim that, “That which has no practical implications, has no theoretical implications.” Suppose Rorty’s right, and it’s all just belief that we hold with more or less confidence. What then? In contrast, suppose the distinction between subjective belief and objective fact is true. What then? What are the practical implications of each philosophical view? I believe the most useful way of thinking about this is to view all beliefs as subjective, albeit held with more or less confidence.

Let’s suppose it were true that we could divide up statements about the world into two categories, subjective beliefs and objective facts. Now let’s write down all our statements about the world onto slips of paper. Every single one of them, there must be trillions (even if we ignore the field of math, where an infinite number of statements could be constructed.) Now let’s divide these statements up into two big piles, one set is subjective beliefs, and the other pile contains statements that are objective facts. We build a vast Borgesian library, and put all the subjective beliefs (i.e. Trump is an idiot) into one wing, and all the objective facts (Paris is the capital of France) into the other wing.

Now here’s the question for pragmatists like Rorty and me. Is this a useful distinction to make? If it is useful, how is it useful? Here’s the only useful thing I can imagine resulting from this distinction. If we have a category of objective facts, then we can save time by not questioning these facts as new information arises. They are “off limits”. Since they are objective facts, they can never be refuted. If they could be refuted, then they’d be subjective beliefs, not objective facts.

But I don’t want to do that. I don’t want to consider any beliefs to be completely off limits—not at all open to refutation. That reminds me too much of fundamentalist religion. On the other hand, I do want to distinguish between different kinds of beliefs, in a way that I think is more pragmatic than the subjective/objective distinction. Rather I’d like to assign probability values to each belief, which represent my confidence as to whether or not the belief is true. Then I’d like to devote more of my time to entertaining critiques of highly questionable hypotheses, than I do to less plausible hypotheses.

Again, this makes a great deal of sense. The problem is that Scott thinks that either there is no distinction between the subjective and objective, or we need to be able to make that distinction subjectively. Since the latter seems an evident contradiction, he concludes that there is no distinction between subjective and objective. Later in the post, he puts this in terms of “map and territory”:

The other point of confusion I see is people conflating “the map and the territory”. Then they want to view “objective facts” as aspects of the territory, the underlying reality, not (just) beliefs about the territory. I don’t think that’s very useful, as it seems to me that statements about the world are always models of the world, not the world itself. Again, if it were not true, then theories could never be revised over time. After all, Einstein didn’t revise reality in 1905; he revised our understanding of reality–our model of reality.

“Statements about the world are always models of the world, not the world itself.” Indeed. That is because they are statements, not the things the statements are about. This is to notice, correctly, that the mode of knowledge is not the mode of being. But it does not follow that there are no objective facts, nor that objective facts are not distinct from opinions. Consider the statement that “dogs are animals.” We can call that statement a “model of the world.” But it is not about a model of the world: it is about dogs, which are not our model or even parts of our model, but things moving around outside in the real world. Obviously, we cannot concretely distinguish between “things we think are true” and “things that are actually true,” because it will always be us talking about things that are actually true, but we can make and understand that distinction in the abstract. Scott is right, however, to reject the idea that some ideas are subjective “because they are about the map,” with other statements being objective “because they are about the territory.” In the map / territory terminology, all statements are maps, and all of them are about the territory (including statements about maps, which refer to maps as things that exist, and thus as part of the territory.)

We can see here how Scott Sumner is falling into the Kantian error. But what about the realist position? It does not follow from any of the above that the realist must make any corresponding error. And indeed, in all such dichotomies, there will be a side which is more right than the other: namely, the side that says that common sense is right. And so it is possible, and correct, to say that common sense is right without also accepting the corresponding falsehood (namely that the mode of knowing and the mode of being are the same.) But if we do accept the realist position together with the corresponding falsehood, this can manifest itself in various ways. For example, one might say that one should indeed put some things in the category of “off limits” for discussion: since they are objective facts, they can never be revised. Thus for example James Larson, as in an earlier discussion, tends to identify the rejection of his positions with the rejection of realism. In effect, “My beliefs are objectively true. So people who disagree with my beliefs reject objective truth. And I cannot admit that my beliefs might be false, because that would mean an objective truth could be false at the same time, which is a contradiction.” The problem will not always be manifested in the same way, however, because as we said in the last post, each end of the false dichotomy implies a similar contradiction and cannot be reasoned about coherently.

Kantian and Anti-Kantian Confusion

I introduced what I called the “Kantian error” in an earlier post, and have since used it in explaining several issues such as the understanding of unity and the nature of form. However, considering my original point, we can see that there are actually two relevant errors.

First, there is the Kantian error itself, which amounts to the claim that nothing real can be truly known.

Second, there is an anti-Kantian error, namely the error opposed to the element of truth in Kant’s position. I pointed out that Kant is correct that we cannot know things “as they are in themselves” if this is meant to identify the mode of knowing and the mode of being as such. The opposite error, therefore, would be to say that we can know things by having a mode of knowing which is completely identical to the mode of being which things have. Edward Feser, for example, effectively falls into this error in his remarks on sensible colors discussed in an earlier post on truth in the senses, and more recently at his blog he reaffirms the same position:

Part of the reason the mechanical conception of matter entails the possibility of zombies is that it takes matter to be devoid of anything like color, sound, taste, odor, heat, cold and the like, as common sense conceives of these qualities.  On the mechanical conception, if you redefine redness (for example) as a tendency to absorb certain wavelengths of light and reflect others, then you can say that redness is a real feature of the physical world.  But if by “redness” you mean what common sense understands by it – the way red looks in conscious experience – then, according to the mechanical conception, nothing like that really exists in matter.  And something similar holds of other sensory qualities.  The implication is that matter is devoid of any of the features that make it the case that there is “something it is like” to have a conscious experience, and thus is devoid of consciousness itself.

The implication here is that the way red looks is the way a red thing is. Since the emphasis is in the original, it is reasonable to take this to be identifying the mode of the senses with the mode of being. In reality, as we said in the earlier discussion, there is no “redefinition” because the senses do not define anything in the first place.

Both mistakes, namely both the Kantian and anti-Kantian errors, imply contradictions. The claim that there is something that we cannot know in any way contradicts itself, since it implies that we know of something of which we know nothing. Thus, it implies that an unknown thing is known. Similarly, the claim that the mode of knowing as such is the same as the mode of being, to put it in Kant’s words, “is as much as to imagine that experience is also real without experience.” In other words, suppose that “the way red looks” is the very way a red apple is apart from the senses: then the apple looks a certain way, even when no one is looking, and thus precisely when it does not look any way at all.

Thus both errors imply similar contradictions: an unknown thing as such is known, or a known thing as such is unknown. The errors are generated in much the way Kant himself seems to have fallen into the error. Either knowledge is possible or it is not, we say. If it is not, then you have the Kantian error, and if it is, it appears that our way of knowing must be the same as the way things are, and thus you have the anti-Kantian error.

As I pointed out in discussing consistency, an inconsistent claim, understood as such, does not propose to us any particular way to understand the world. The situation described is unintelligible, and in no way tells us what we should expect to find if it turns out to be the case. Given this fact, together with the similarity of the implied contradictions, we should not be surprised if people rarely double down completely on one error or the other, but rather waver vaguely between the two as they see the unpalatable implications of one side or the other.

Thus, the problem arises from the false dichotomy between “knowledge is not possible” and “knowledge is possible but must work in this particular way, namely by an identity of the mode of knowing and the mode of being.” I said in the linked post that this is “one of the most basic causes of human error,” but it might be possible to go further and suggest that it is the principal cause of philosophical error apart from error caused by trading truth for other things. At any rate, the reader is advised to keep this in mind as a distinct possibility. We may see additional relevant evidence as time goes on.

Skeptical Scenarios

I promised to return to some of the issues discussed here. The current post addresses the implications of the sort of skeptical scenario considered by Alexander Pruss in the associated discussion. Consider his original comparison of physical theories and skeptical scenarios:

The ordinary sentence “There are four chairs in my office” is true (in its ordinary context). Furthermore, its being true tells us very little about fundamental ontology. Fundamental physical reality could be made out of a single field, a handful of fields, particles in three-dimensional space, particles in ten-dimensional space, a single vector in a Hilbert space, etc., and yet the sentence could be true.

An interesting consequence: Even if in fact physical reality is made out of particles in three-dimensional space, we should not analyze the sentence to mean that there are four disjoint pluralities of particles each arranged chairwise in my office. For if that were what the sentence meant, it would tell us about which of the fundamental physical ontologies is correct. Rather, the sentence is true because of a certain arrangement of particles (or fields or whatever).

If there is such a broad range of fundamental ontologies that “There are four chairs in my office” is compatible with, it seems that the sentence should also be compatible with various sceptical scenarios, such as that I am a brain in a vat being fed data from a computer simulation. In that case, the chair sentence would be true due to facts about the computer simulation, in much the way that “There are four chairs in this Minecraft house” is true. It would be very difficult to be open to a wide variety of fundamental physics stories about the chair sentence without being open to the sentence being true in virtue of facts about a computer simulation.

If we consider this in light of our analysis of form, it is not difficult to see that Pruss is correct both about the ordinary chair sentence being consistent with a large variety of physical theories, and about the implication that it is consistent with most situations that would normally be considered “skeptical.” The reason is that to say that something is a chair is to say something about its relationships with the world, but it is not to say everything about its relationships. It speaks in particular about various relationships with the human world. And there is nothing to prevent these relationships from co-existing with any number of other kinds of relationships between its parts, its causes, and so on.

Pruss is right to insist that in order for the ordinary sentence to be true, the corresponding forms must be present. But since he is an anti-reductionist, his position implies hidden essences, and this is a mistake. Indeed, under the correct understanding of form, our everyday knowledge of things is sufficient to ensure that the forms are present: regardless of which physical theories turn out to be true, and even if some such skeptical scenario turns out to be true.

Why are these situations called “skeptical” in the first place? This is presumably because they seem to call into question whether or not we possess any knowledge of things. And in this respect, they fail in two ways, they partially fail in a third, and they succeed in one way.

First, they fail insofar as they attempt to call into question, e.g. whether there are chairs in my room right now, or whether I have two hands. These things are true and would be true even in the “skeptical” situations.

Second, they fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat. In the straightforward sense, I do know that I am not, because the claim that I am is opposed to the other things (e.g. about the chairs and my hands) that I know to be true.

Third, they partially fail even insofar as they claim, e.g. that I do not know whether I am a brain in a vat in a metaphysical sense. Roughly speaking, I do know that I am not, not by deducing the fact with any kind of necessity, but simply because the metaphysical claim is completely ungrounded. In other words, I do not know this infallibly, but it is extremely likely. We could compare this with predictions about the future. Thus for example Ron Conte attempts to predict the future:

First, an overview of the tribulation:
A. The first part of the tribulation occurs for this generation, beginning within the next few years, and ending in 2040 A.D.
B. Then there will be a brief period of peace and holiness on earth, lasting about 25 years.
C. The next few hundred years will see a gradual but unstoppable increase in sinfulness and suffering in the world. The Church will remain holy, and Her teaching will remain pure. But many of Her members will fall into sin, due to the influence of the sinful world.
D. The second part of the tribulation occurs in the early 25th century (about 2430 to 2437). The Antichrist reigns for less than 7 years during this time.
E. Jesus Christ returns to earth, ending the tribulation.

Now, some predictions for the near future. These are not listed in chronological order.

* The Warning, Consolation, and Miracle — predicted at Garabandal and Medjugorje — will occur prior to the start of the tribulation, sometime within the next several years (2018 to 2023).
* The Church will experience a severe schism. First, a conservative schism will occur, under Pope Francis; next, a liberal schism will occur, under his conservative successor.
* The conservative schism will be triggered by certain events: Amoris Laetitia (as we already know, so, not a prediction), and the approval of women deacons, and controversial teachings on salvation theology.
* After a short time, Pope Francis will resign from office.
* His very conservative successor will reign for a few years, and then die a martyr, during World War 3.
* The successor to Pope Francis will take the papal name Pius XIII.

Even ignoring the religious speculation, we can “know” that this account is false, simply because it is inordinately detailed. Ron Conte no doubt has reasons for his beliefs, much as the Jehovah’s Witnesses did. But just as we saw in that case, his reasons will also in all likelihood turn out to be completely disproportionate to the detail of the claims they seek to establish.
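The point about detail can be put in rough probabilistic terms. Here is a minimal sketch in Python, with the numbers chosen purely for illustration and not drawn from anything above: even if each individual claim in such an account were fairly likely on its own, the conjunction of many independent claims becomes unlikely very quickly, so the evidence needed to support the whole account grows much faster than the evidence needed for any single part.

def conjunction_probability(p_each, n_claims):
    # Probability that n independent claims, each with probability p_each,
    # are all true together: the individual probabilities multiply.
    return p_each ** n_claims

# Hypothetical numbers: thirty detailed predictions, each 80% likely on its own.
print(conjunction_probability(0.8, 30))  # roughly 0.001

This is only an idealization (real claims are rarely independent, and the numbers are invented), but it captures why reasons that might suffice for any one detail can still be completely disproportionate to the whole collection of details.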

In a similar way, a skeptical scenario can be seen as painting a detailed picture of a larger context of our world, one outside our current knowledge. There is nothing impossible about such a larger context; in fact, there surely is one. But the claim about brains and vats is very detailed: if one takes it seriously, it is more detailed than Ron Conte’s predictions, which could also be taken as a statement about a larger temporal context to our situation. The brain-in-vat scenario implies that our entire world depends on another world which has things similar to brains and similar to vats, presumably along with things analogous to human beings that made the vats, and so on. And since the whole point of the scenario is that it is utterly invented, accepted by no one, whereas Conte’s account is accepted at least by him, there is not even a supposed basis for thinking that things are actually this way. Thus we can say, not infallibly but with a great deal of certainty, that we are not brains in vats, just as we can say, not infallibly but with a great deal of certainty, that there will not be any “Antichrist” between 2430 and 2437.

There is nonetheless one way in which the consideration of skeptical scenarios does succeed in calling our knowledge into question. Consider them insofar as they propose a larger context to our world, as discussed above. As I said, there is nothing impossible about a larger context, and there surely is one. Here we speak of a larger metaphysical context, but we can compare this with the idea of a larger physical context.

Our knowledge of our physical context is essentially local, given the concrete ways in which we come to know the world. I know a lot about the room I am in, a significant amount about the places I usually visit or have visited in the past, and something, but much less, about places I have not visited. And speaking of an even larger physical context, I know things about the solar system, but much less about the wider physical universe. And if we consider what lies outside the visible universe, I might well guess that there are more stars and galaxies and so on, but nothing more. There is not much more detail even to this guess: and if there is an even larger physical context, it is possible that there are places that do not have stars and galaxies at all, but other things. In other words, universal knowledge applies everywhere, but it is also vague, while specific knowledge is more detailed but also more localized: it is precisely because it is local that it was possible to acquire more specific knowledge.

In a similar way, more specific metaphysical knowledge is necessarily of a more local metaphysical character: both physical and metaphysical knowledge is acquired by us through the relationships things have with us, and in both cases “with us” implies locality. We can know that the brain-in-vat scenario is mistaken, but that should not give us hope that we can find out what is true instead: even if we did find some specific larger metaphysical context to our situation, there would be still larger contexts of which we would remain unaware. Just as you will never know the things that are too distant from you physically, you will also never know the things that are too distant from you metaphysically.

I previously advocated patience as a way to avoid excessively detailed claims. There is nothing wrong with this, but here we see that it is not enough: we also need to accept our actual situation. Rebellion against our situation, in the form of painting a detailed picture of a larger context of which we can have no significant knowledge, will profit us nothing: it will just be painting a picture as false as the brain-in-vat scenario, and as false as Ron Conte’s predictions.