Mind of God

Reconciling Theism and Atheism

In his Dialogues Concerning Natural Religion, David Hume presents Philo as arguing that the disagreement between theists and atheists is merely verbal:

All men of sound reason are disgusted with verbal disputes, which abound so much in philosophical and theological inquiries; and it is found, that the only remedy for this abuse must arise from clear definitions, from the precision of those ideas which enter into any argument, and from the strict and uniform use of those terms which are employed. But there is a species of controversy, which, from the very nature of language and of human ideas, is involved in perpetual ambiguity, and can never, by any precaution or any definitions, be able to reach a reasonable certainty or precision. These are the controversies concerning the degrees of any quality or circumstance. Men may argue to all eternity, whether HANNIBAL be a great, or a very great, or a superlatively great man, what degree of beauty CLEOPATRA possessed, what epithet of praise LIVY or THUCYDIDES is entitled to, without bringing the controversy to any determination. The disputants may here agree in their sense, and differ in the terms, or vice versa; yet never be able to define their terms, so as to enter into each other’s meaning: Because the degrees of these qualities are not, like quantity or number, susceptible of any exact mensuration, which may be the standard in the controversy. That the dispute concerning Theism is of this nature, and consequently is merely verbal, or perhaps, if possible, still more incurably ambiguous, will appear upon the slightest inquiry. I ask the Theist, if he does not allow, that there is a great and immeasurable, because incomprehensible difference between the human and the divine mind: The more pious he is, the more readily will he assent to the affirmative, and the more will he be disposed to magnify the difference: He will even assert, that the difference is of a nature which cannot be too much magnified. 
I next turn to the Atheist, who, I assert, is only nominally so, and can never possibly be in earnest; and I ask him, whether, from the coherence and apparent sympathy in all the parts of this world, there be not a certain degree of analogy among all the operations of Nature, in every situation and in every age; whether the rotting of a turnip, the generation of an animal, and the structure of human thought, be not energies that probably bear some remote analogy to each other: It is impossible he can deny it: He will readily acknowledge it. Having obtained this concession, I push him still further in his retreat; and I ask him, if it be not probable, that the principle which first arranged, and still maintains order in this universe, bears not also some remote inconceivable analogy to the other operations of nature, and, among the rest, to the economy of human mind and thought. However reluctant, he must give his assent. Where then, cry I to both these antagonists, is the subject of your dispute? The Theist allows, that the original intelligence is very different from human reason: The Atheist allows, that the original principle of order bears some remote analogy to it. Will you quarrel, Gentlemen, about the degrees, and enter into a controversy, which admits not of any precise meaning, nor consequently of any determination? If you should be so obstinate, I should not be surprised to find you insensibly change sides; while the Theist, on the one hand, exaggerates the dissimilarity between the Supreme Being, and frail, imperfect, variable, fleeting, and mortal creatures; and the Atheist, on the other, magnifies the analogy among all the operations of Nature, in every period, every situation, and every position. Consider then, where the real point of controversy lies; and if you cannot lay aside your disputes, endeavour, at least, to cure yourselves of your animosity.

To what extent Hume actually agrees with this argument is not clear, and whether a dispute is verbal or real is itself, like Hume’s questions about greatness or beauty, a matter of degree. Few disagreements are entirely verbal. In any case, I largely agree with the claim that there is little real disagreement here. In response to a question on the about page of this blog, I referred to some remarks about God by Roderick Long:

Since my blog has wandered into theological territory lately, I thought it might be worth saying something about the existence of God.

When I’m asked whether I believe in God, I usually don’t know what to say – not because I’m unsure of my view, but because I’m unsure how to describe my view. But here’s a try.

I think the disagreement between theism and atheism is in a certain sense illusory – that when one tries to sort out precisely what theists are committed to and precisely what atheists are committed to, the two positions come to essentially the same thing, and their respective proponents have been fighting over two sides of the same shield.

Let’s start with the atheist. Is there any sense in which even the atheist is committed to recognising the existence of some sort of supreme, eternal, non-material reality that transcends and underlies everything else? Yes, there is: namely, the logical structure of reality itself.

Thus so long as the theist means no more than this by “God,” the theist and the atheist don’t really disagree.

Now the theist may think that by God she means something more than this. But likewise, before people knew that whales were mammals they thought that by “whale” they meant a kind of fish. What is the theist actually committed to meaning?

Well, suppose that God is not the logical structure of the universe. Then we may ask: in what relation does God stand to that structure, if not identity? There would seem to be two possibilities.

One is that God stands outside that structure, as its creator. But this “possibility” is unintelligible. Logic is a necessary condition of significant discourse; thus one cannot meaningfully speak of a being unconstrained by logic, or a time when logic’s constraints were not yet in place.

The other is that God stands within that structure, along with everything else. But this option, as Wittgenstein observed, would downgrade God to the status of being merely one object among others, one more fragment of contingency – and he would no longer be the greatest of all beings, since there would be something greater: the logical structure itself. (This may be part of what Plato meant in describing the Form of the Good as “beyond being.”)

The only viable option for the theist, then, is to identify God with the logical structure of reality. (Call this “theological logicism.”) But in that case the disagreement between the theist and the atheist dissolves.

It may be objected that the “reconciliation” I offer really favours the atheist over the theist. After all, what theist could be satisfied with a deity who is merely the logical structure of the universe? Yet in fact there is a venerable tradition of theists who proclaim precisely this. Thomas Aquinas, for example, proposed to solve the age-old questions “could God violate the laws of logic?” and “could God command something immoral?” by identifying God with Being and Goodness personified. Thus God is constrained by the laws of logic and morality, not because he is subject to them as to a higher power, but because they express his own nature, and he could not violate or alter them without ceasing to be God. Aquinas’ solution is, essentially, theological logicism; yet few would accuse Aquinas of having a watered-down or crypto-atheistic conception of deity. Why, then, shouldn’t theological logicism be acceptable to the theist?

A further objection may be raised: Aquinas of course did not stop at the identification of God with Being and Goodness, but went on to attribute to God various attributes not obviously compatible with this identification, such as personality and will. But if the logical structure of reality has personality and will, it will not be acceptable to the atheist; and if it does not have personality and will, then it will not be acceptable to the theist. So doesn’t my reconciliation collapse?

I don’t think so. After all, Aquinas always took care to insist that in attributing these qualities to God we are speaking analogically. God does not literally possess personality and will, at least if by those attributes we mean the same attributes that we humans possess; rather he possesses attributes analogous to ours. The atheist too can grant that the logical structure of reality possesses properties analogous to personality and will. It is only at the literal ascription of those attributes that the atheist must balk. No conflict here.

Yet doesn’t God, as understood by theists, have to create and sustain the universe? Perhaps so. But atheists too can grant that the universe depends on its logical structure and couldn’t exist for so much as an instant without it. So where’s the disagreement?

But doesn’t God have to be worthy of worship? Sure. But atheists, while they cannot conceive of worshipping a person, are generally much more open to the idea of worshipping a principle. Again theological logicism allows us to transcend the opposition between theists and atheists.

But what about prayer? Is the logical structure of reality something one could sensibly pray to? If so, it might seem, victory goes to the theist; and if not, to the atheist. Yet it depends what counts as prayer. Obviously it makes no sense to petition the logical structure of reality for favours; but this is not the only conception of prayer extant. In Science and Health, for example, theologian M. B. Eddy describes the activity of praying not as petitioning a principle but as applying a principle:

“Who would stand before a blackboard, and pray the principle of mathematics to solve the problem? The rule is already established, and it is our task to work out the solution. Shall we ask the divine Principle of all goodness to do His own work? His work is done, and we have only to avail ourselves of God’s rule in order to receive His blessing, which enables us to work out our own salvation.”

Is this a watered-down or “naturalistic” conception of prayer? It need hardly be so; as the founder of Christian Science, Eddy could scarcely be accused of underestimating the power of prayer! And similar conceptions of prayer are found in many eastern religions. Once again, theological logicism’s theistic credentials are as impeccable as its atheistic credentials.

Another possible objection is that whether identifying God with the logical structure of reality favours the atheist or the theist depends on how metaphysically robust a conception of “logical structure” one appeals to. If one thinks of reality’s logical structure in realist terms, as an independent reality in its own right, then the identification favours the theist; but if one instead thinks, in nominalist terms, that there’s nothing to logical structure over and above what it structures, then the identification favours the atheist.

This argument assumes, however, that the distinction between realism and nominalism is a coherent one. I’ve argued elsewhere (see here and here) that it isn’t; conceptual realism pictures logical structure as something imposed by the world on an inherently structureless mind (and so involves the incoherent notion of a structureless mind), while nominalism pictures logical structure as something imposed by the mind on an inherently structureless world (and so involves the equally incoherent notion of a structureless world). If the realism/antirealism dichotomy represents a false opposition, then the theist/atheist dichotomy does so as well. The difference between the two positions will then be only, as Wittgenstein says in another context, “one of battle cry.”

Long is trying too hard, perhaps. As I stated above, few disagreements are entirely verbal, so it would be strange to find no disagreement at all, and we could question some points here. Are atheists really open to worshipping a principle? Respecting, perhaps, but worshipping? A defender of Long, however, might say that “respect” and “worship” do not necessarily have any relevant difference here, and this is itself a merely verbal difference signifying a cultural difference. The theist uses “worship” to indicate that they belong to a religious culture, while the atheist uses “respect” to indicate that they do not. But it would not be easy to find a distinct difference in the actual meaning of the terms.

In any case, there is no need to prove that there is no difference at all, since without a doubt individual theists will disagree on various matters with individual atheists. The point made by both David Hume and Roderick Long stands at least in a general way: there is far less difference between the positions than people typically assume.

In an earlier post I discussed, among other things, whether the first cause should be called a “mind” or not, discussing St. Thomas’s position that it should be, and Plotinus’s position that it should not be. Along the lines of the argument in this post, perhaps this is really an argument about whether or not you should use a certain analogy, and the correct answer may be that it depends on your purposes.

But what if your purpose is simply to understand reality? Even if it is, it is often the case that you can understand various aspects of reality with various analogies, so this will not necessarily provide you with a definite answer. Still, someone might argue that you should not use a mental analogy with regard to the first cause because it will lead people astray. Thus, in a similar way, Richard Dawkins argued that one should not call the first cause “God” because it would mislead people:

Yes, I said, but it must have been simple and therefore, whatever else we call it, God is not an appropriate name (unless we very explicitly divest it of all the baggage that the word ‘God’ carries in the minds of most religious believers). The first cause that we seek must have been the simple basis for a self-bootstrapping crane which eventually raised the world as we know it into its present complex existence.

I will argue shortly that Dawkins was roughly speaking right about the way that the first cause works, although as I said in that earlier post, he did not have a strong argument for it other than his aesthetic sense and the kinds of explanation that he prefers. In any case, his concern with the name “God” is the “baggage” that it “carries in the minds of most religious believers.” That is, if we say, “There is a first cause, therefore God exists,” believers will assume that their concrete beliefs about God are correct.

In a similar way, someone could reasonably argue that speaking of God as a “mind” would tend to lead people into error by leading them to suppose that God would do the kinds of things that other minds, namely human ones, do. And this definitely happens. Thus for example, in his book Who Designed the Designer?, Michael Augros argues for the existence of God as a mind, and near the end of the book speculates about divine revelation:

I once heard of a certain philosopher who, on his deathbed, when asked whether he would become a Christian, admitted his belief in Aristotle’s “prime mover”, but not in Jesus Christ as the Son of God. This sort of acknowledgment of the prime mover, of some sort of god, still leaves most of our chief concerns unaddressed. Will X ever see her son again, now that the poor boy has died of cancer at age six? Will miserable and contrite Y ever be forgiven, somehow reconciled to the universe and made whole, after having killed a family while driving drunk? Will Z ever be brought to justice, having lived out his whole life laughing at the law while another person rotted in jail for the atrocities he committed? That there is a prime mover does not tell us with sufficient clarity. Even the existence of an all-powerful, all-knowing, all-good god does not enable us to fill in much detail. And so it seems reasonable to suppose that god has something more to say to us, in explicit words, and not only in the mute signs of creation. Perhaps he is waiting to talk to us, biding his time for the right moment. Perhaps he has already spoken, but we have not recognized his voice.

When we cast our eye about by the light of reason in his way, it seems there is room for faith in general, even if no particular faith can be “proved” true in precisely the same way that it can be “proved” that there is a god.

The idea is that given that God is a mind, it is fairly plausible that he would wish to speak to people, and perhaps that he would wish to establish justice through extraordinary methods, or even to raise people from the dead.

I think this is “baggage” carried over from Augros’s personal religious views. It is an anthropomorphic mistake, not merely in the sense that he does not have a good reason for such speculation, but in the sense that such a thing is demonstrably implausible. It is not that the divine motives are necessarily unknown to us, but that we can actually discover them, at least to some extent, and we will discover that they are not what he supposes.

Divine Motives

How might one know the divine motives? How does one read the mind of God?

Anything that acts at all does what it does ultimately because of what it is. This is an obvious point, like the point that the existence of something rather than nothing could not have a reason outside of being. In a similar way, “what is” is the only possible explanation for what is done, since there is nothing else there to be an explanation. And in every action, whether or not we are speaking of the subject in explicitly mental terms, we can always use the analogy of desires and goals. In the linked post, I quote St. Thomas as speaking of the human will as the “rational appetite,” and the natural tendency of other things as a “natural appetite.” If we break down the term “rational appetite,” the meaning is “the tendency to do something because of having a reason to do it.” And this fits with my discussion of human will in various places, such as in this earlier post.

But where do those reasons come from? I gave an account of this here, arguing that rational goals are a secondary effect of the mind’s attempt to understand itself. Of course human goals are complex and have many factors, but this happens because what the mind is trying to understand is complicated and multifaceted. In particular, there is a large amount of pre-existing human behavior that it needs to understand before it can attribute goals: behavior that results from life as a particular kind of animal, behavior that results from being a particular living thing, and behavior that results from having a body of such and such a sort.

In particular, human social behavior results from these things. There was some discussion of this here, when we looked at Alexander Pruss’s discussion of hypothetical rational sharks.

You might already see where this is going. God as the first cause does not have any of the properties that generate human social behavior, so we cannot expect his behavior to resemble human social behavior in any way, as for example by having any desire to speak with people. Indeed, this is the argument I am making, but let us look at the issue more carefully.

I responded to the “dark room” objection to predictive processing here and here. My response depends both on the biological history of humans and animals in general and, to some extent, on the history of each individual. But the response does not merely explain why people do not typically enter dark rooms and simply stay there until they die. It also explains why occasionally people do do such things, to a greater or lesser approximation, as with suicidal or extremely depressed people.

If we consider the first cause as a mind, as we are doing here, it is an abstract immaterial mind without any history, without any pre-existing behaviors, without any of the sorts of things that allow people to avoid the dark room. So while people will no doubt be offended by the analogy, and while I will try to give a more pleasant interpretation later, one could argue that God is necessarily subject to his own dark room problem: there is no reason for him to have any motives at all, except the one which is intrinsic to minds, namely the motive of understanding. And so he should not be expected to do anything with the world, except to make sure that it is intelligible, since it must be intelligible for him to understand it.

The thoughtful reader will object: on this account, why does God create the world at all? Surely doing and making nothing at all would be even better, by that standard. So God does seem to have a “dark room” problem that he does manage to avoid, namely the temptation to do nothing at all. This is a reasonable objection, but I think it would lead us on a tangent, so I will not address it at this time. I will simply take it for granted that God makes something rather than nothing, and discuss what he does with the world given that fact.

In the previous post, I pointed out that David Hume takes for granted that the world has stable natural laws, and uses that to argue that an orderly world can result from applying those laws to “random” configurations over a long enough time. I said that one might accuse him of “cheating” here, but that would only be the case if he intended to maintain a strictly atheistic position which would say that there is no first cause at all, or that if there is, it does not even have a remote analogy with a mind. Thus his attempted reconciliation of theism and atheism is relevant, since it seems from this that he is aware that such a strict atheism cannot be maintained.

St. Thomas makes a similar connection between God as a mind and a stable order of things in his fifth way:

The fifth way is taken from the governance of the world. We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

What are we to make of the claim that things act “always, or nearly always, in the same way, so as to obtain the best result?” Certainly acting in the same way would be likely to lead to similar results. But why would you think it was the best result?

If we consider where we get the idea of desire and good, the answer will be clear. We don’t have an idea of good which is completely independent from “what actually tends to happen”, even though this is not quite a definition of the term either. So ultimately St. Thomas’s argument here is based on the fact that things act in similar ways and achieve similar results. The idea that it is “best” is not an additional contribution.

But now consider the alternative. Suppose that things did not act in similar ways, or that doing so did not lead to similar results. We would live in David Hume’s non-inductive world. But such a world is arguably mathematically and logically impossible. If someone says, “look, the world works in a coherent way,” and then attempts to describe how it would look if it worked in an incoherent way, they will discover that the latter “possibility” cannot be described. Any description must be coherent in order to be a description, so the incoherent “option” was never a real option in the first place.

This argument might suggest that the position of Plotinus, that mind should not be attributed to God at all, is the more reasonable one. But since we are exploring the situation where we do make that attribution, let us consider the consequences.

We argued above that the sole divine motive for the world is intelligibility. This requires coherence and consistency. It also requires a tendency towards the good, for the above mentioned reasons. Having a coherent tendency at all is ultimately not something different from tending towards good.

The world described is arguably a deist world, one in which the laws of nature are consistently followed, but God does nothing else in the world. The Enlightenment deists presumably had various reasons for their position: criticism of specific religious doctrines, doubts about miracles, and an aesthetic attraction to a perfectly consistent world. But like Dawkins with his argument about God’s simplicity, they do not seem (to me at least) to have had very strong arguments. That does not prove that their position was wrong, and even their weaker arguments may have had some relationship with the truth; even an aesthetic attraction to a perfectly consistent world has some connection with intelligibility, which is the actual reason for the world to be that way.

Once again, as with the objection about creating a world at all, a careful reader might object that this argument is not conclusive. If you have a first cause at all, then it seems that you must have one or more first effects, and even if those effects are simple, they cannot be infinitely simple. And given that they are not infinitely simple, who is to set the threshold? What is to prevent one or more of those effects from being “miraculous” relative to anything else, or even from being something like a voice giving someone a divine revelation?

There is something to this argument, but as with the previous objection, I will not be giving my response here. I will simply note for the moment that it is a little bit strained to suggest that such a thing could happen without God having an explicit motive of “talking to people,” and as argued above, such a motive cannot exist in God. That said, I will go on to some other issues.

As the Heavens are Higher

Apart from my arguments, it has long been noticed in the actual world that God seems much more interested in acting consistently than in bringing about any specific results in human affairs.

Someone like Richard Dawkins, or perhaps Job, if he had taken the counsel of his wife, might respond to the situation in the following way. “God” is not an appropriate name for a first cause that acts like this. If anything is more important to God than being personal, it would be being good. But the God described here is not good at all, since he doesn’t seem to care a bit about human affairs. And he inflicts horrible suffering on people just for the sake of consistency with physical laws. Instead of calling such a cause “God,” why don’t we call it “the Evil Demon” or something like that?

There is a lot that could be said about this. Some of it I have already said elsewhere. Some of it I will perhaps say at other times. For now I will make three brief points.

First, ensuring that the world is intelligible and that it behaves consistently is no small thing. In fact it is a prerequisite for any good thing that might happen anywhere and any time. We would not even arrive at the idea of “good” things if we did not strive consistently for similar results, nor would we get the idea of “striving” if we did not often obtain them. Thus it is not really true that God has no interest in human affairs: rather, he is concerned with the affairs of all things, including humans.

Second, along similar lines, consider what the supposed alternative would be. If God were “good” in the way you wish, his behavior would be ultimately unintelligible. This is not merely because some physical law might not be followed if there were a miracle. It would be unintelligible behavior in the strict sense, that is, in the sense that no explanation could be given for why God is doing this. The ordinary proposal would be that it is because “this is good,” but when this statement is a human judgement made according to human motives, there would need to be an explanation for why a human judgement is guiding divine behavior. “God is a mind” does not adequately explain this. And it is not clear that an ultimately unintelligible world is a good one.

Third, to extend the point about God’s concern with all things, I suggest that the answer is roughly speaking the one that Scott Alexander gives non-seriously here, except taken seriously. This answer depends on an assumption of some sort of modal realism, a topic which I have been slowly approaching for some time, but which merits a far more detailed discussion, and I am not sure when I will get around to it, if ever. The reader might note however that this answer probably resolves the question about “why didn’t God do nothing at all” by claiming that this was never an option anyway.

And Fire by Fire

Superstitious Nonsense asks about the last post:

So the answer here is that -some- of the form is present in the mind, but always an insufficient amount or accuracy that the knowledge will not be “physical”? You seem to be implying the part of the form that involves us in the self-reference paradox is precisely the part of the form that gives objects their separate, “physical” character. Is this fair? Certainly, knowing progressively more about an object does not imply the mental copy is becoming closer and closer to having a discrete physicality.

I’m not sure this is the best way to think about it. The self-reference paradox arises because we are trying to copy ourselves into ourselves, and thus we are adding something into ourselves, making the copy incomplete. The problem is not that there is some particular “part of the form” that we cannot copy, but that it is in principle impossible to copy it perfectly; this is different from saying that there is some specific “part” that cannot be copied.
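The point about self-copying can be illustrated with a toy sketch. Nothing here comes from the original argument; the dictionary “mind” is purely an invented stand-in for a knower trying to contain a complete model of itself:

```python
# A toy "mind" represented as a dict. Storing a complete snapshot of the
# mind inside itself changes the mind, so the snapshot is always one step
# behind: the copy is incomplete in principle, not missing one fixed "part".
state = {"beliefs": ["the sky is blue"]}
snapshot = dict(state)            # a copy of the state as it was
state["self_model"] = snapshot    # adding the copy alters what was copied
print(snapshot == state)          # False: the model no longer matches
```

Repeating the move does not help: each new snapshot must include the previous one, and the act of storing it falsifies it again.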

Consider what happens when we make “non-physical” copies of something without involving a mind. Consider the image of a gold coin. There are certain relationships common to the image and to a gold coin in the physical world. So you could say we have a physical gold coin, and a non-physical one.

But wait. If the image of the coin is on paper, isn’t that a physical object? Or if the image is on your computer screen, isn’t your screen a physical object? And the image is just the colors on the screen, which are apparently just as “physical” (or non-physical) as the color of the actual coin. So why would we say that “this is not a physical coin”?

Again, as in the last post, the obvious answer is that the image is not made out of gold, while the physical coin is. But why not? Is it that the image is not accurate enough? If we made it more accurate, would it be made out of gold, or become closer to being made out of gold? Obviously not. This is like noting that a mental copy does not become closer and closer to being a physical one.

In a sense it is true that the reason the image of the coin is not physical is that it is not accurate enough. But that is because it cannot be accurate enough: the fact that it is an image positively excludes the copying of certain relationships. Some aspects can be copied, but others cannot be copied at all, as long as it is an image. On the other hand, you can look at this from the opposite direction: if you did copy those aspects, the image would no longer be an image, but a physical coin.

As a similar example, consider the copying of a colored scene into black and white. We can copy some aspects of the scene by using various shades of gray, but we cannot copy every aspect of the scene. There are simply not enough differences in a black and white image to reflect every aspect of a colored scene. The black and white image, as you make it more accurate, does not become closer to being colored, but this is simply because there are aspects of the colored scene that you never copy. If you do insist on copying those aspects, you will indeed make the black and white image into a colored image, and thus it will no longer be black and white.
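The information loss in the black and white case can be made concrete with a small sketch. The particular colors and the luminance formula are my own illustrative choices, not part of the original discussion:

```python
def to_gray(r, g, b):
    # Standard luminance weighting (ITU-R BT.601), rounded to an integer.
    return round(0.299 * r + 0.587 * g + 0.114 * b)

red = (255, 0, 0)      # pure red
green = (0, 130, 0)    # a shade of green
# Two visibly different colors collapse to the same gray value, so the
# grayscale copy cannot record the difference between them at all.
print(to_gray(*red), to_gray(*green))  # 76 76
```

However finely you tune the grayscale image, the red/green distinction is simply not among the aspects it copies; to recover it you would have to reintroduce color, and the image would no longer be black and white.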

The situation becomes significantly more complicated when we talk about a mind. In one way, there is an important similarity. When we say that the copy in the mind is “not physical,” that simply means that it is a copy in the mind, just as when we say that the image of the coin is not physical, it means that it is an image, made out of the stuff that images are made of. But just as the image is physical anyway, in another sense, so it is perfectly possible that the mind is physical in a similar sense. However, this is where things begin to become confusing.

Elsewhere, I discussed Aristotle’s argument that the mind is immaterial. Considering the cases above, we could put his argument in this way: the human brain is a limited physical object. So as long as the brain remains a brain, there are simply not enough potential differences in it to model all possible differences in the world, just as you cannot completely model a colored scene using black and white. But anything at all can be understood. Therefore we cannot be understanding by using the brain.

I have claimed myself that anything that can be, can be understood. But this needs to be understood generically, rather than as claiming that it is possible to understand reality in every detail simultaneously. The self-reference paradox shows that it is impossible in principle for a knower that copies forms into itself to understand itself in every aspect at once. But even apart from this, it is very obvious that we as human beings cannot understand every aspect of reality at once. This does not even need to be argued: you cannot even keep everything in mind at once, let alone understand every detail of everything. This directly suggests a problem with Aristotle’s argument: if being able to know all things suggests that the mind is immaterial, the obvious fact that we cannot know all things suggests that it is not.

Nonetheless, let us see what happens if we advance the argument on Aristotle’s behalf. Admittedly, we cannot understand everything at once. But in the case of the colored scene, there are aspects that cannot be copied at all into the black and white copy. And in the case of the physical coin, there are aspects that cannot be copied at all into the image. So if we are copying things into the brain, doesn’t that mean that there should be aspects of reality that cannot be copied at all into the mind? But this is false: it would mean not only that we cannot understand everything, but also that there are things we cannot think about at all. And if that were so, it would not be so, since we are right now talking about the things we supposedly cannot talk about.

Copying into the mind is certainly different from copying into a black and white scene or copying into a picture, and this does get at one of the differences. But the difference here is that the method of copying in the case of the mind is flexible, while the method of copying in the case of the pictures is rigid. In other words, we have a pre-defined method of copying in the case of the pictures that, from the beginning, only allows certain aspects to be copied. In the case of the mind, we determine the method differently from case to case, depending on our particular situation and the thing being copied. The result is that there is no particular aspect of things that cannot be copied, but you cannot copy every aspect at once.

In answer to the original question, then, the reason that the “mental copy” always remains mental is that you never violate the constraints of the mind, just as a black and white copy never violates the constraints of being black and white. But if you did violate the constraints of the black and white copy by copying every aspect of the scene, the image would become colored. And similarly, if you did violate the constraints of the mind in order to copy every aspect of reality, your mind would cease to be, and it would instead become the thing itself. But there is no particular aspect of “physicality” that you fail to copy: rather, you just ensure that one way or another you do not violate the constraints of the mind that you have.

Unfortunately, the explanation here for why the mind can copy any particular aspect of reality, although not every aspect at once, is rather vague. Perhaps a clearer explanation is possible? In fact, someone could use the vagueness to argue for Aristotle’s position and against mine. Perhaps my account is vague because it is wrong, and there is actually no way for a physical object to receive copied forms in this way.

Consistency and Reality

Consistency and inconsistency, in their logical sense, are relationships between statements or between the parts of a statement. They are not properties of reality as such.

“Wait,” you will say. “If consistency is not a property of reality, then you are implying that reality is not consistent. So reality is inconsistent?”

Not at all. Consistency and inconsistency are contraries, not contradictories, and they are properties of statements. So reality as such is neither consistent nor inconsistent, in the same way that sounds are neither white nor black.

We can however speak of consistency with respect to reality in an extended sense, just as we can speak of truth with respect to reality in an extended sense, even though truth refers first to things that are said or thought. In this way we can say that a thing is true insofar as it is capable of being known, and similarly we might say that reality is consistent, insofar as it is capable of being known by consistent claims, and incapable of being known by inconsistent claims. And reality indeed seems consistent in this way: I might know the weather if I say “it is raining,” or if I say, “it is not raining,” depending on conditions, but to say “it is both raining and not raining in the same way” is not a way of knowing the weather.

Consider the last point more precisely. Why can’t we use such statements to understand the world? The statement about the weather is rather different from statements like, “The normal color of the sky is not blue but rather green.” We know what it would be like for this to be the case. For example, we know what we would expect if it were the case. It cannot be used to understand the world in fact, because these expectations fail. But if they did not, we could use it to understand the world. Now consider instead the statement, “The sky is both blue and not blue in exactly the same way.” There is now no way to describe the expectations we would have if this were the case. It is not that we understand the situation and know that it does not apply, as with the claim about the color of the sky: rather, the situation described cannot be understood. It is literally unintelligible.

This also explains why we should not think of consistency as a property of reality in a primary sense. If it were, it would be like the color blue as a property of the sky. The sky is in fact blue, but we know what it would be like for it to be otherwise. We cannot equally say, “reality is in fact consistent, but we know what it would be like for it to be inconsistent.” Instead, the supposedly inconsistent situation is a situation that cannot be understood in the first place. Reality is thus consistent not in the primary sense but in a secondary sense, namely that it is rightly understood by consistent things.

But this also implies that we cannot push the secondary consistency of reality too far, in several ways and for several reasons.

First, while inconsistency as such does not contribute to our understanding of the world, a concrete inconsistent set of claims can help us understand the world, and in many situations better than any particular consistent set of claims that we might currently come up with. This was discussed in a previous post on consistency.

Second, we might respond to the above by pointing out that it is always possible in principle to formulate a consistent explanation of things which would be better than the inconsistent one. We might not currently be able to arrive at the consistent explanation, but it must exist.

But even this needs to be understood in a somewhat limited way. Any consistent explanation of things will necessarily be incomplete, which means that more complete explanations, whether consistent or inconsistent, will be possible. Consider for example these recent remarks of James Chastek on Gödel’s theorem:

1.) Given any formal system, let proposition (P) be this formula is unprovable in the system

2.) If P is provable, a contradiction occurs.

3.) Therefore, P is known to be unprovable.

4.) If P is known to be unprovable it is known to be true.

5.) Therefore, P is (a) unprovable in a system and (b) known to be true.
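In more standard notation, writing $\mathrm{Prov}$ for the system’s provability predicate and $\ulcorner G\urcorner$ for the code of the Gödel sentence $G$, and assuming the system is consistent and sound (which steps 2–4 require), Chastek’s five steps amount to:

```latex
\begin{align}
& G \leftrightarrow \neg\,\mathrm{Prov}(\ulcorner G\urcorner)
  && \text{(1) $G$ says of itself that it is unprovable} \\
& \mathrm{Prov}(\ulcorner G\urcorner) \Rightarrow \neg G
  && \text{(2) a provable $G$ would be false, so the system would prove a falsehood} \\
& \therefore\ \neg\,\mathrm{Prov}(\ulcorner G\urcorner)
  && \text{(3) by soundness, $G$ is unprovable} \\
& \therefore\ G
  && \text{(4--5) but that is exactly what $G$ asserts, so $G$ is true}
\end{align}
```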

In the article linked by Chastek, John Lucas argues that this is a proof that the human mind is not a “mechanism,” since we can know to be true something that the mechanism will not be able to prove.

But consider what happens if we simply take the “formal system” to be you, and “this formula is unprovable in the system” to mean “you cannot prove this statement to be true.” Is it true, or not? And can you prove it?

If you say that it is true but that you cannot prove it, the question is how you know that it is true. If you know by the above reasoning, then you have a syllogistic proof that it is true, and so it is false that you cannot prove it, and so it is false.

If you say that it is false, then you cannot prove it, because false things cannot be proven, and so it is true.
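The two-horned structure just described is the same diagonal construction that appears in the halting problem, and it can be sketched in code. Here `decider` is a hypothetical stand-in (not a real library function) for any claimed total procedure that decides whether a program halts; diagonalization builds a program that it must answer wrongly:

```python
def diagonalize(decider):
    """Build a program that does the opposite of whatever `decider` predicts.

    `decider(prog)` is assumed to return True iff prog() halts. Whatever it
    answers about the program constructed below, that answer is wrong --
    the analogue of "you cannot prove this statement to be true."
    """
    def pathological():
        if decider(pathological):
            while True:       # the decider said "halts": loop forever
                pass
        return "halted"       # the decider said "loops": halt at once
    return pathological

# A decider that answers "does not halt" is refuted by the program halting:
prog = diagonalize(lambda p: False)
print(prog())  # -> halted
```

(The opposite prediction cannot be demonstrated by running it, since the constructed program then loops forever; but by the same reasoning the decider is again wrong.)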

It is evident here that you can give no consistent response that you can know to be true; “it is true but I cannot know it to be true,” may be consistent, but obviously if it is true, you cannot know it to be true, and if it is false, you cannot know it to be true. What is really proven by Gödel’s theorem is not that the mind is not a “mechanism,” whatever that might be, but that any consistent account of arithmetic must be incomplete. And if any consistent account of arithmetic alone is incomplete, much more must any consistent explanation of reality as a whole be incomplete. And among more complete explanations, there will be some inconsistent ones as well as consistent ones. Thus you might well improve any particular inconsistent position by adopting a consistent one, but you might again improve any particular consistent position by adopting an inconsistent one which is more complete.

The above has some relation to our discussion of the Liar Paradox. Someone might be tempted to give the same response to “tonk” and to “true”:

The problem with “tonk” is that it is defined in such a way as to have inconsistent implications. So the right answer is to abolish it. Just do not use that word. In the same way, “true” is defined in such a way that it has inconsistent implications. So the right answer is to abolish it. Just do not use that word.

We can in fact avoid drawing inconsistent conclusions using this method. The problem with the method is obvious, however. The word “tonk” does not actually exist, so there is no problem with abolishing it. It never contributed to our understanding of the world in the first place. But the word “true” does exist, and it contributes to our understanding of the world. To abolish it, then, would remove some inconsistency, but it would also remove part of our understanding of the world. We would be adopting a less complete but more consistent understanding of things.
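The word “tonk” here is Arthur Prior’s invented connective: it is given the introduction rule of “or” and the elimination rule of “and,” and chaining the two rules derives any statement from any other. A minimal sketch (the tuple representation of proofs is ad hoc, purely for illustration):

```python
# Prior's "tonk": introduction behaves like "or", elimination like "and".

def tonk_intro(proof_of_a, b):
    """From a proof of A, infer (A tonk B) for any B -- the "or" intro rule."""
    return ("tonk", proof_of_a, b)

def tonk_elim(proof):
    """From a proof of (A tonk B), infer B -- the "and" elim rule."""
    assert proof[0] == "tonk"
    return proof[2]

# Combining the rules "proves" an arbitrary conclusion from a harmless premise:
conclusion = tonk_elim(tonk_intro("it is raining", "the moon is made of cheese"))
print(conclusion)  # -> the moon is made of cheese
```

This is why the inconsistency of “tonk” is a defect with no compensating benefit: it adds nothing except the ability to conclude anything whatsoever.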

Hilary Lawson discusses this response in Closure: A Story of Everything:

Russell and Tarski’s solution to self-referential paradox succeeds only by arbitrarily outlawing the paradox and thus provides no solution at all.

Some have claimed to have a formal, logical, solution to the paradoxes of self-reference. Since if these were successful the problems associated with the contemporary predicament and the Great Project could be solved forthwith, it is important to briefly examine them before proceeding further. The argument I shall put forward aims to demonstrate that these theories offer no satisfactory solution to the problem, and that they only appear to do so by obscuring the fact that they have defined their terms in such a way that the paradox is not so much avoided as outlawed.

The problems of self-reference that we have identified are analogous to the ancient liar paradox. The ancient liar paradox stated that ‘All Cretans are liars’ but was itself uttered by a Cretan thus making its meaning undecidable. A modern equivalent of this ancient paradox would be ‘This sentence is not true’, and the more general claim that we have already encountered: ‘there is no truth’. In each case the application of the claim to itself results in paradox.

The supposed solutions, Lawson says, are like the one suggested above: “Just do not use that word.” Thus he remarks on Tarski’s proposal:

Adopting Tarski’s hierarchy of languages one can formulate sentences that have the appearance of being self-referential. For example, a Tarskian version of ‘This sentence is not true’ would be:

(I) The sentence (I) is not true-in-L.

So Tarski’s argument runs, this sentence is both a true sentence of the language meta-L, and false in the language L, because it refers to itself and is therefore, according to the rules of Tarski’s logic and the hierarchy of languages, not properly formed. The hierarchy of languages apparently therefore enables self-referential sentences but avoids paradox.

More careful inspection however shows the manoeuvre to be engaged in a sleight of hand for the sentence as constructed only appears to be self-referential. It is a true sentence of the meta-language that makes an assertion of a sentence in L, but these are two different sentences – although they have superficially the same form. What makes them different is that the meaning of the predicate ‘is not true’ is different in each case. In the meta-language it applies the meta-language predicate ‘true’ to the object language, while in the object language it is not a predicate at all. As a consequence the sentence is not self-referential. Another way of expressing this point would be to consider the sentence in the meta-language. The sentence purports to be a true sentence in the meta-language, and applies the predicate ‘is not true’ to a sentence in L, not to a sentence in meta-L. Yet what is this sentence in L? It cannot be the same sentence for this is expressed in meta-L. The evasion becomes more apparent if we revise the example so that the sentence is more explicitly self-referential:

(I) The sentence (I) is not true-in-this-language.

Tarski’s proposal that no language is allowed to contain its own truth-predicate is precisely designed to make this example impossible. The hierarchy of languages succeeds therefore only by providing an account of truth which makes genuine self-reference impossible. It can hardly be regarded therefore as a solution to the paradox of self-reference, since if all that was required to solve the paradox was to ban it, this could have been done at the outset.
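Lawson’s point that the paradox is “outlawed” rather than solved can be illustrated with a toy stratification. In this sketch (the class and function names are my own, purely illustrative), every sentence carries the level of the language it belongs to, and a truth predicate may only be applied from a strictly higher metalanguage, so the self-referential sentence is never formable at all:

```python
class Sentence:
    """A sentence tagged with the level of the language it belongs to."""
    def __init__(self, content, level):
        self.content = content
        self.level = level

def truth_predicate(sentence, meta_level):
    """Apply "true-in-L" from a metalanguage at `meta_level`.

    Tarski's restriction: the predicate may only mention sentences of a
    strictly lower level, so no language contains its own truth predicate.
    """
    if sentence.level >= meta_level:
        raise ValueError("ill-formed: a language cannot apply "
                         "its own truth predicate to itself")
    return Sentence(f"'{sentence.content}' is true", meta_level)

s = Sentence("snow is white", level=0)
t = truth_predicate(s, meta_level=1)   # well-formed: meta-L speaking about L
try:
    truth_predicate(t, meta_level=1)   # "not true-in-this-language"
except ValueError as e:
    print(e)                           # the paradoxical sentence is outlawed
```

The hierarchy never answers the paradoxical sentence; it rules the sentence out of the language by fiat, which is exactly Lawson’s complaint.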

Someone might be tempted to conclude that we should say that reality is inconsistent after all. Since any consistent account of reality is incomplete, it must be that the complete account of reality is inconsistent: and so someone who understood reality completely, would do so by means of an inconsistent theory. And just as we said that reality is consistent, in a secondary sense, insofar as it is understood by consistent things, so in that situation, one would say that reality is inconsistent, in a secondary sense, because it is understood by inconsistent things.

The problem with this is that it falsely assumes that a complete and intelligible account of reality is possible. This is not possible largely for the same reasons that there cannot be a list of all true statements. And although we might understand things through an account which is in fact inconsistent, the inconsistency itself contributes nothing to our understanding, because the inconsistency is in itself unintelligible, just as we said about the statement that the sky is both blue and not blue in the same way.

We might ask whether we can at least give a consistent account superior to an account which includes the inconsistencies resulting from the use of “truth.” This might very well be possible, but it appears to me that no one has yet done so. This is in fact one of Lawson’s intentions with his book, but I would assert that his project fails overall, despite potentially making some real contributions. The reader is nonetheless welcome to investigate for themselves.

Hard Problem of Consciousness

We have touched on this in various places, and in particular in this discussion of zombies, but we are now in a position to give a more precise answer.

Bill Vallicella has a discussion of Thomas Nagel on this issue:

Nagel replies in the pages of NYRB (8 June 2017; HT: Dave Lull) to one Roy Black, a professor of bioengineering:

The mind-body problem that exercises both Daniel Dennett and me is a problem about what experience is, not how it is caused. The difficulty is that conscious experience has an essentially subjective character—what it is like for its subject, from the inside—that purely physical processes do not share. Physical concepts describe the world as it is in itself, and not for any conscious subject. That includes dark energy, the strong force, and the development of an organism from the egg, to cite Black’s examples. But if subjective experience is not an illusion, the real world includes more than can be described in this way.

I agree with Black that “we need to determine what ‘thing,’ what activity of neurons beyond activating other neurons, was amplified to the point that consciousness arose.” But I believe this will require that we attribute to neurons, and perhaps to still more basic physical things and processes, some properties that in the right combination are capable of constituting subjects of experience like ourselves, to whom sunsets and chocolate and violins look and taste and sound as they do. These, if they are ever discovered, will not be physical properties, because physical properties, however sophisticated and complex, characterize only the order of the world extended in space and time, not how things appear from any particular point of view.

The problem might be condensed into an aporetic triad:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely physical processes do not share.

3) The only acceptable explanation of conscious experience is in terms of physical properties alone.

Take a little time to savor this problem. Note first that the three propositions are collectively inconsistent: they cannot all be true.  Any two limbs entail the negation of the remaining one. Note second that each limb exerts a strong pull on our acceptance.  But we cannot accept them all because they are logically incompatible.

Which proposition should we reject? Dennett, I take it, would reject (1). But that’s a lunatic solution as Professor Black seems to appreciate, though he puts the point more politely. When I call Dennett a sophist, as I have on several occasions, I am not abusing him; I am underscoring what is obvious, namely, that the smell of cooked onions, for example, is a genuine datum of experience, and that such phenomenological data trump scientistic theories.

Sophistry aside, we either reject (2) or we reject (3).  Nagel and I accept (1) and (2) and reject (3). Black, and others of the scientistic stripe, accept (1) and (3) and reject (2).

In order to see the answer to this, we can construct a Parmenidean parallel to Vallicella’s aporetic triad:

1) Distinction is not an illusion.

2) Being has an essentially objective character of actually being that distinction does not share (considering that distinction consists in the fact of not being something.)

3) The only acceptable explanation of distinction is in terms of being alone (since there is nothing but being to explain things with.)

Parmenides rejects (1) here. What approach would Vallicella take? If he wishes to take a similarly analogous approach, he should accept (1) and (2), and deny (3). And this would be a pretty commonsense approach, and perhaps the one that most people implicitly adopt if they ever think about the problem.

At the same time, it is easy to see that (3) is nearly as obviously true as (1); and it is for this reason that Parmenides sees rejecting (1) and accepting (2) and (3) as reasonable.

The correct answer, of course, is that the three are not inconsistent despite appearances. In fact, we have effectively answered this in recent posts. Distinction is not an illusion, but a way that we understand things, as such. And being a way of understanding, it is not (as such) a way of being mistaken, and thus it is not an illusion, and thus the first point is correct. Again, being a way of understanding, it is not a way of being as such, and thus the second point is correct. And yet distinction can be explained by being, since there is something (namely relationship) which explains why it is reasonable to think in terms of distinctions.

Vallicella’s triad mentions “purely physical processes” and “physical properties,” but the idea of “physical” here is a distraction, and is not really relevant to the problem. Consider the following from another post by Vallicella:

If I understand Galen Strawson’s view, it is the first.  Conscious experience is fully real but wholly material in nature despite the fact that on current physics we cannot account for its reality: we cannot understand how it is possible for qualia and thoughts to be wholly material.   Here is a characteristic passage from Strawson:

Serious materialists have to be outright realists about the experiential. So they are obliged to hold that experiential phenomena just are physical phenomena, although current physics cannot account for them.  As an acting materialist, I accept this, and assume that experiential phenomena are “based in” or “realized in” the brain (to stick to the human case).  But this assumption does not solve any problems for materialists.  Instead it obliges them to admit ignorance of the nature of the physical, to admit that they don’t have a fully adequate idea of what the physical is, and hence of what the brain is.  (“The Experiential and the Non-Experiential” in Warner and Szubka, p. 77)

Strawson and I agree on two important points.  One is that what he calls experiential phenomena are as real as anything and cannot be eliminated or reduced to anything non-experiential. Dennett denied! The other is that there is no accounting for experiential items in terms of current physics.

I disagree on whether his mysterian solution is a genuine solution to the problem. What he is saying is that, given the obvious reality of conscious states, and given the truth of naturalism, experiential phenomena must be material in nature, and that this is so whether or not we are able to understand how it could be so.  At present we cannot understand how it could be so. It is at present a mystery. But the mystery will dissipate when we have a better understanding of matter.

This strikes me as bluster.

An experiential item such as a twinge of pain or a rush of elation is essentially subjective; it is something whose appearing just is its reality.  For qualia, esse = percipi.  If I am told that someday items like this will be exhaustively understood from a third-person point of view as objects of physics, I have no idea what this means.  The notion strikes me as absurd.  We are being told in effect that what is essentially subjective will one day be exhaustively understood as both essentially subjective and wholly objective.  And that makes no sense. If you tell me that understanding in physics need not be objectifying understanding, I don’t know what that means either.

Here Vallicella uses the word “material,” which is presumably equivalent to “physical” in the above discussion. But it is easy to see here that being material is not the problem: being objective is the problem. Material things are objective, and Vallicella sees an irreducible opposition between being objective and being subjective. In a similar way, we can reformulate Vallicella’s original triad so that it does not refer to being physical:

1) Conscious experience is not an illusion.

2) Conscious experience has an essentially subjective character that purely objective processes do not share.

3) The only acceptable explanation of conscious experience is in terms of objective properties alone.

It is easy to see that this formulation is the real source of the problem. And while Vallicella would probably deny (3) even in this formulation, it is easy to see why people would want to accept (3). “Real things are objective,” they will say. If you want to explain anything, you should explain it using real things, and therefore objective things.

The parallel with the Parmenidean problem is evident. We would want to explain distinction in terms of being, since there isn’t anything else, and yet this seems impossible, so one (e.g. Parmenides) is tempted to deny the existence of distinction. In the same way, we would want to explain subjective experience in terms of objective facts, since there isn’t anything else, and yet this seems impossible, so one (e.g. Dennett) is tempted to deny the existence of subjective experience.

Just as the problem is parallel, the correct solution will be almost entirely parallel to the solution to the problem of Parmenides.

1) Conscious experience is not an illusion. It is a way of perceiving the world, not a way of not perceiving the world, and definitely not a way of not perceiving at all.

2) Consciousness is subjective, that is, it is a way that an individual perceives the world, not a way that things are as such, and thus not an “objective fact” in the sense that “the way things are” is objective.

3) The “way things are”, namely the objective facts, are sufficient to explain why individuals perceive the world. Consider again this post, responding to a post by Robin Hanson. We could reformulate his criticism to express instead Parmenides’s criticism of common sense (changed parts in italics):

People often state things like this:

I am sure that there is not just being, because I’m aware that some things are not other things. I know that being just isn’t non-being. So even though there is being, there must be something more than that to reality. So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care about distinctions, not just being; we want to know what out there is distinct from which other things.

But consider a key question: Does this other distinction stuff interact with the parts of our world that actually exist strongly and reliably enough to usually be the actual cause of humans making statements of distinction like this?

If yes, this is a remarkably strong interaction, making it quite surprising that philosophers, possibly excepting Duns Scotus, have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite understandable with existing philosophy. Any interaction not so understandable would have to be vastly more difficult to understand than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will understand such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of distinction, then we have a remarkable coincidence to explain. Somehow this extra distinction stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that distinction stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if distinction stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that distinction stuff actually exists? Such a coincidence seems too remarkable to be believed.

“Distinction stuff”, of course, does not exist, and neither does “feeling stuff.” But some things are distinct from others. Saying this is a way of understanding the world, and it is a reasonable way to understand the world because things exist relative to one another. And just as one thing is distinct from another, people have experiences. Those experiences are ways of knowing the world (broadly understood.) And just as reality is sufficient to explain distinction, so reality is sufficient to explain the fact that people have experiences.

How exactly does this answer the objection about interaction? In the case of distinction, the fact that “one thing is not another” is never the direct cause of anything, not even of the fact that “someone believes that one thing is not another.” So there would seem to be a “remarkable coincidence” here, or we would have to say that since the fact seems unrelated to the opinion, there is no reason to believe people are right when they make distinctions.

The answer in the case of distinction is that one thing is related to another, and this fact is the cause of someone believing that one thing is not another. There is no coincidence, and no reason to believe that people are mistaken when they make distinctions, despite the fact that distinction as such causes nothing.

In a similar way, “a human being is what it is,” and “a human being does what it does” (taken in an objective sense), cause human beings to say and believe that they have subjective experience (taking saying and believing to refer to objective facts.) But this is precisely where the zombie question arises: they say and believe that they have subjective experience, when we interpret say and believe in the objective sense. But do they actually say and believe anything, considering saying and believing as including the subjective factor? Namely, when a non-zombie says something, it subjectively understands the meaning of what it is saying, and when it consciously believes something, it has a subjective experience of doing that, but these things would not apply to a zombie.

But notice that we can raise a similar question about zombie distinctions. When someone says and believes that one thing is not another, objective reality is similarly the cause of them making the distinction. But is the one thing actually not the other? Here there is no question at all except whether the person’s statement is true or false. And indeed, someone can say, e.g., “The person who came yesterday is not the person who came today,” and this can sometimes be false. In a similar way, asking whether an apparent person is a zombie or not is just asking whether their claim is true or false when they say they have a subjective experience. The difference is that if the (objective) claim is false, then there is no claim at all in the subjective sense of “subjectively claiming something.” It is a contradiction to subjectively make the false claim that you are subjectively claiming something, and thus, this cannot happen.

Someone may insist: you yourself, when you subjectively claim something, cannot be mistaken for the above reason. But you have no way to know whether someone else who apparently is making that claim, is actually making the claim subjectively or not. This is the reason there is a hard problem.

How do we investigate the case of distinction? If we want to determine whether the person who came yesterday is not the person who came today, we do that by looking at reality, despite the fact that distinction as such is not a part of reality as such. If the person who came yesterday is now, today, a mile away from the person who came today, this gives us plenty of reason to say that the one person is not the other. There is nothing strange, however, in the fact that there is no infallible method to prove conclusively, once and for all, that one thing is definitely not another thing. There is not therefore some special “hard problem of distinction.” This is just a result of the fact that our knowledge in general is not infallible.

In a similar way, if we want to investigate whether something has subjective experience or not, we can do that only by looking at reality: what is this thing, and what does it do? Then suppose it makes an apparent claim that it has subjective experience. Obviously, for the above reasons, it cannot be making a subjective claim that is false: so the question is whether it is making a subjective claim and is right, or rather making no subjective claim at all. How would you answer this as an external observer?

In the case of distinction, the fact that someone claims that one thing is distinct from another is caused by reality, whether the claim is true or false. So whether it is true or false depends on the way that it is caused by reality. In a similar way, the thing which apparently and objectively claims to possess subjective experience, is caused to do so by objective facts. Again, as in the case of distinction, whether it is true or false will depend on the way that it is caused to do so by objective facts.

We can give some obvious examples:

“This thing claims to possess subjective experience because it is a human being and does what humans normally do.” In this case, the objective and subjective claim is true, and is caused in the right way by objective facts.

“This thing claims to possess subjective experience because it is a very simple computer given a very simple program to output ‘I have subjective experience’ on its screen.” In this case the external claim is false, and it is caused in the wrong way by objective facts, and there is no subjective claim at all.

But how do you know for sure, someone will object. Perhaps the computer really is conscious, and perhaps the apparent human is a zombie. But we could similarly ask how we know for sure that the person who came yesterday is not the person who came today, even though the two appear distant from each other: perhaps the person is bilocating?

It would be mostly wrong to describe this situation by saying “there really is no hard problem of consciousness,” as Robin Hanson appears to do when he says, “People who think they can conceive of such zombies see a ‘hard question’ regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel.” The implication seems to be that there is no hard question at all. But there is, and the fact that people engage in this discussion proves the existence of the question. Rather, we should say that the question is answerable, and that once it has been answered, the remaining questions are “hard” only in the sense that it is hard to understand the world in general. The question is hard in exactly the way the question of Parmenides is hard: “How is it possible for one thing not to be another, when there is only being?” The question of consciousness is similar: “How is it possible for something to have subjective experience, when there are only objective things?” And the question can and should be answered in a similar fashion.

It would be virtually impossible to address every related issue in a simple blog post of this form, so I will simply mention some things that I have mainly set aside here:

1) The issue of formal causes, discussed more in my earlier treatment of this issue. This is relevant because “is this a zombie?” is in effect equivalent to asking whether the thing lacks a formal cause. This is worthy of a great deal of consideration and would go far beyond either this post or the earlier one.

2) The issue of “physical” and “material.” As I stated in this post, this is mainly a distraction. Most of the time, the real question is how the subjective is possible given that we believe that the world is objective. The only relevance of “matter” here is that it is obvious that a material thing is an objective thing. But of course, an immaterial thing would also have to be objective in order to be a thing at all. Aristotle and many philosophers of his school make the specific argument that the human mind does not have an organ, but such arguments are highly questionable, and in my view fundamentally flawed. My earlier posts suffice to call such a conclusion into question, but do not attempt to disprove it, and the topic would be worthy of additional consideration.

3) Specific questions about “what, exactly, would actually be conscious?” Neglecting such questions might seem to be a cop-out: isn’t this what the whole problem was supposed to be in the first place? But in a sense we did answer it. Take an apparent claim of something to be conscious. The question would be this: “Given how it was caused by objective facts to make that claim, would it be a reasonable claim for a subjective claimer to make?” In other words, we cannot assume in advance that it is subjectively making a claim, but if it would be a reasonable claim, it will (in general) be a true one, and therefore also a subjective one, for the same reason that we (in general) make true claims when we reasonably claim that one thing is not another. We have not answered this question only in the same sense that we have not exhaustively explained which things are distinct from which other things, and how one would know. But the question, e.g., “when, if ever, would you consider an artificial intelligence to be conscious?” is in itself also worthy of direct discussion.

4) The issue of vagueness. This issue in particular will cause some people to object to my answer here. Thus Alexander Pruss brings this up in a discussion of whether a computer could be conscious:

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

I responded in the comments there:

The transition between being conscious and not being conscious that happens when you fall asleep seems pretty vague. I don’t see why you find it implausible that “being conscious” could be vague in much the same way “being red” or “being intelligent” might be vague. In fact the evidence from experience (falling asleep etc) seems to directly suggest that it is vague.

Pruss responds:

When I fall asleep, I may become conscious of less and less. But I can’t get myself to deny that either it is definitely true at any given time that I am at least a little conscious or it is definitely true that I am not at all conscious.

But we cannot trust Pruss’s intuitions about what can be vague or otherwise. Pruss claims in an earlier post that there is necessarily a sharp transition between someone’s not being old and someone’s being old. I discussed that post here. This is so obviously false that it gives us a reason in general not to trust Alexander Pruss on the issue of sharp transitions and vagueness. The source of this particular intuition may be the fact that you cannot subjectively make a claim, even vaguely, without some subjective experience, as well as his general impression that vagueness violates the principles of excluded middle and non-contradiction. But in a similar way, you cannot be vaguely old without being somewhat old. This does not mean that there is a sharp transition from not being old to being old, and likewise it does not necessarily mean that there is a sharp transition from not having subjective experience to having it.

While I have discussed the issue of vagueness elsewhere on this blog, this will probably continue to be a recurring feature, if only because of those who cannot accept this feature of reality and insist, in effect, on “this or nothing.”

Being and Unity II

Content warning: very obscure.

This post follows up on an earlier post on this topic, as well as on what was recently said about real distinction. In the latter post, we applied the distinction between the way a thing is and the way it is known in order to better understand distinction itself. We can obtain a better understanding of unity in a similar way.

As was said in the earlier post on unity, to say that something is “one” does not add anything real to the being of the thing, but it adds the denial of the division between distinct things. The single apple is not “an apple and an orange,” which are divided insofar as they are distinct from one another.

But being distinct from divided things is itself a certain way of being distinct, and consequently all that was said about distinction in general will apply to this way of being distinct as well. In particular, since being distinct means not being something, which is a way that things are understood rather than a way that they are (considered precisely as a way of being), the same thing applies to unity. To say that something is one does not add something to the way that it is, but it adds something to the way that it is understood. This way of being understood is founded, we argued, on existing relationships.

We should avoid two errors here, both of which would be expressions of the Kantian error:

First, the argument here does not mean that a thing is not truly one thing, just as the earlier discussion does not imply that it is false that a chair is not a desk. On the contrary, a chair is in fact not a desk, and a chair is in fact one chair. But when we say or think, “a chair is not a desk,” or “a chair is one chair,” we are saying these things in some way of saying, and thinking them in some way of thinking, and these ways of saying and thinking are not ways of being as such. This in no way implies that the statements themselves are false, just as “the apple seems to be red,” does not imply that the apple is not red. Arguing that the fact of a specific way of understanding implies that the thing is falsely understood would be the position described by Ayn Rand as asserting, “man is blind, because he has eyes—deaf, because he has ears—deluded, because he has a mind—and the things he perceives do not exist, because he perceives them.”

Second, the argument does not imply that the way things really are is unknown and inaccessible to us. One might suppose that this follows, since distinction cannot exist apart from someone’s way of understanding, and at the same time no one can understand without making distinctions. Consequently, someone might argue, there must be some “way things really are in themselves,” which does not include distinction or unity, but which cannot be understood. But this is just a different way of falling into the first error above. There is indeed a way things are, and it is generally not inaccessible to us. In fact, as I pointed out earlier, it would be a contradiction to assert the existence of anything entirely unknowable to us.

Our discussion, being in human language and human thought, naturally uses the proper modes of language and thought. And just as in Mary’s room, where her former knowledge of color is a way of knowing and not a way of sensing, so our discussion advances by ways of discussion, not by ways of being as such. This does not prevent the way things are from being an object of discussion, just as color can be an object of knowledge.

Having avoided these errors, someone might say that nothing of consequence follows from this account. But this would be a mistake. It follows from the present account that when we ask questions like, “How many things are here?”, we are not asking a question purely about how things are, but to some extent about how we should understand them. And even when there is a single way that things are, there is usually not only one way to understand them correctly, but many ways.

Consider some particular question of this kind: “How many things are in this room?” People might answer this question in various ways. John Nerst, in a previous discussion on this blog, seemed to suggest that the answer should be found by counting fundamental particles. Alexander Pruss would give a more complicated answer, since he suggests that large objects like humans and animals should be counted as wholes (while also wishing to deny the existence of parts, which would actually eliminate the notion of a whole), while in other cases he might agree to counting particles. Thus a human being and an armchair might be counted, more or less, as 1 + 10^28 things, namely counting the human being as one thing and the chair as a number of particles.

But if we understand that the question is not, and cannot be, purely about how things are, but is also a question about how things should be understood, then both of the above responses seem unreasonable: they are both relatively bad ways of understanding the things in the room, even if they both have some truth as well. And on the other hand, it is easy to see that “it depends on how you count,” is part of the answer. There is not one true answer to the question, but many true answers that touch on different aspects of the reality in the room.

From the discussion with John Nerst, consider this comment:

My central contention is that the rules that define the universe runs by themselves, and must therefore be self-contained, i.e not need any interpretation or operationalization from outside the system. As I think I said in one of the parts of “Erisology of Self and Will” that the universe must be an automaton, or controlled by an automaton, etc. Formal rules at the bottom.

This is isn’t convincing to you I guess but I suppose I rule out fundamental vagueness because vagueness implies complexity and fundamental complexity is a contradiction in terms. If you keep zooming in on a fuzzy picture you must, at some point, come down to sharply delineated pixels.

Among other things, the argument of the present post shows why this cannot be right. “Sharply delineated pixels” includes the distinction of one pixel from another, and therefore includes something which is a way of understanding as such, not a way of being as such. In other words, while intending to find what is really there, apart from any interpretation, Nerst is directly including a human interpretation in his account. And in fact it is perfectly obvious that anything else is impossible, since any account of reality given by us will be a human account and will thus include a human way of understanding. Things are a certain way: but that way cannot be said or thought except by using ways of speaking or thinking.

Real Distinction II

I noted recently that one reason why people might be uncomfortable with distinguishing between the way things seem, as such, namely as a way of seeming, and the way things are, as such, namely as a way of being, is that it seems to introduce an explanatory gap. In the last post, why did Mary have a “bluish” experience? “Because the banana was blue,” is true, but insufficient, since animals with different sense organs might well have a different experience when they see blue things. And this gap seems very hard to overcome, possibly even insurmountable.

However, the discussion in the last post suggests that the difficulty in overcoming this gap is mainly the result of the fact that no one actually knows the full explanation, and that the full explanation would be extremely complicated. It might even be so complicated that no human being could understand it, not necessarily because it is a kind of explanation that people cannot understand, but in a sense similar to the one in which no human being can memorize the first trillion prime numbers.

Even if this is the case, however, there would be a residual “gap” in the sense that a sensitive experience will never be the same experience as an intellectual one, even when the intellect is trying to explain the sensitive experience itself.

We can apply these ideas to think a bit more carefully about the idea of real distinction. I pointed out in the linked post that in a certain sense no distinction is real, because “not being something” is not a thing, but a way we understand something.

But notice that there now seems to be an explanatory gap, much like the one about blue. If “not being something” is not a thing, then why is it a reasonable way to understand anything? Or as Parmenides might put it, how could one thing possibly not be another, if there is no not?

Now color is complicated in part because it is related to animal brains, which are themselves complicated. But “being in general” should not be complicated, because the whole idea is that we are talking about everything in general, not with the kind of detail that is needed to make things complicated. So there is a lot more hope of overcoming the “gap” in the case of being and distinction, than in the case of color and the appearance of color.

A potential explanation might be found in what I called the “existential theory of relativity.” As I said in that post, the existence of many things necessarily implies the existence of relationships. But this implication is a “before in understanding“. That is, we understand that one thing is not another before we consider the relationship of the two. If we consider what is before in causality, we will get a different result. On one hand, we might want to deny that there can be causality either way, because the two are simultaneous by nature: if there are many things, they are related, and if things are related, they are many. On the other hand, if we consider “not being something” as a way things are understood, and ask the cause of them being understood in this way, relation will turn out to be the cause. In other words, we have a direct response to the question posed above: why is it reasonable to think that one thing is not another, if not being is not a thing? The answer is that relation is a thing, and the existence of relation makes it reasonable to think of things as distinct from one another.

Someone will insist that this account is absurd, since things need to be distinct in order to be related. But this objection confuses the mode of being and the mode of understanding. Just as there will be a residual “gap” in the case of color, because a sense experience is not an intellectual experience, there is a residual gap here. Explaining color will not suddenly result in actually seeing color if you are blind. Likewise, explaining why we need the idea of distinction will not suddenly result in being able to understand the world without the idea of distinction. But the existence of the sense experience does not thereby falsify one’s explanation of color, and likewise here, the fact that we first need to understand things as distinct in order to understand them as related, does not prevent their relationship from being the specific reality that makes it reasonable to understand them as distinct.

Sense and Intellect

In the last two posts, I distinguished between the way a thing is, and the way a thing is known. We can formulate analogous distinctions between different ways of knowing. For example, there will be a distinction between “the way a thing is known by the senses,” and “the way a thing is known by the mind.” Or to give a more particular case, “the way this looks to the eyes,” is necessarily distinct from “the way this is understood.”

Similar consequences will follow. I pointed out in the last post that “it is the way it seems” will be necessarily false if it intends to identify the ways of being and seeming as such. In a similar way, “I understand exactly the way this thing looks to me,” will be necessarily false, if one intends to identify the way one understands with the way one sees with the eyes. Likewise, we saw previously that it does not follow that there is something (“the way it is”) that cannot be known, and in a similar way, it does not follow that there is something (“the way it looks”) that cannot be understood. But when one understands the way it is, one understands with one’s way of understanding, not with the thing’s way of being. And likewise, when one understands the way a thing looks, one understands with one’s way of understanding, not with the way it looks.

Failure to understand these distinctions or at least to apply them in practice is responsible for the confusion surrounding many philosophical problems. As a useful exercise, the reader might wish to consider how they apply to the thought experiment of Mary’s Room.

Predictive Processing and Free Will

Our model of the mind as an embodied predictive engine explains why people have a sense of free will, and what is necessary for a mind in general in order to have this sense.

Consider the mind in the bunker. At first, it is not attempting to change the world, since it does not know that it can do this. It is just trying to guess what is going to happen. At a certain point, it discovers that it is a part of the world, and that making specific predictions can also cause things to happen in the world. Some predictions can be self-fulfilling. I described this situation earlier by saying that at this point the mind “can get any outcome it ‘wants.'”

The scare quotes were intentional, because up to this point the mind’s only particular interest was guessing what was going to happen. So once it notices that it is in control of something, how does it decide what to do? At this point the mind will have to say to itself, “This aspect of reality is under my control. What should I do with it?” This situation, when it is noticed by a sufficiently intelligent and reflective agent, will be the feeling of free will.

Occasionally I have suggested that even something like a chess computer, if it were sufficiently intelligent, could have a sense of free will, insofar as it knows that it has many options and can choose any of them, “as far as it knows.” There is some truth in this illustration, but in the end it is probably not true that there could be a sense of free will in this situation. A chess computer, however intelligent, will be disembodied, and will therefore have no real power to affect its world, that is, the world of chess. In other words, in order for the sense of free will to develop, the agent needs sufficient access to the world that it can learn about itself and its own effects on the world. It cannot develop in a situation of limited access to reality, as for example to a game board, regardless of how good it is at the game.

In any case, the question remains: how does a mind decide what to do, when up until now it had no particular goal in mind? This question often causes concrete problems for people in real life. Many people complain that their life does not feel meaningful, that is, that they have little idea what goal they should be seeking.

Let us step back for a moment. Before discovering its possession of “free will,” the mind is simply trying to guess what is going to happen. So theoretically this should continue to happen even after the mind discovers that it has some power over reality. The mind isn’t especially interested in power; it just wants to know what is going to happen. But now it knows that what is going to happen depends on what it itself is going to do. So in order to know what is going to happen, it needs to answer the question, “What am I going to do?”

The question now seems impossible to answer. It is going to do whatever it ends up deciding to do. But it seems to have no goal in mind, and therefore no way to decide what to do, and therefore no way to know what it is going to do.

Nonetheless, the mind has no choice. It is going to do something or other, since things will continue to happen, and it must guess what will happen. When it reflects on itself, there will be at least two ways for it to try to understand what it is going to do.

First, it can consider its actions as the effect of some (presumably somewhat unknown) efficient causes, and ask, “Given these efficient causes, what am I likely to do?” In practice it will acquire an answer in this way through induction. “On past occasions, when offered the choice between chocolate and vanilla, I almost always chose vanilla. So I am likely to choose vanilla this time too.” This way of thinking will most naturally result in acting in accord with pre-existing habits.

Second, it can consider its actions as the effect of some (presumably somewhat known) final causes, and ask, “Given these final causes, what am I likely to do?” This will result in behavior that is more easily understood as goal-seeking. “Looking at my past choices of food, it looks like I was choosing them for the sake of the pleasant taste. But vanilla seems to have a more pleasant taste than chocolate. So it is likely that I will take the vanilla.”

Notice what we have in the second case. In principle, the mind is just doing what it always does: trying to guess what will happen. But in practice it is now seeking pleasant tastes, precisely because that seems like a reasonable way to guess what it will do.
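The two ways of self-prediction described above can be caricatured in a few lines of Python. Both the habit-based rule and the inferred “taste” values below are invented for illustration; nothing here is meant as a model of an actual mind, only as a sketch of the two inference patterns.

```python
from collections import Counter

past_choices = ["vanilla", "vanilla", "chocolate", "vanilla"]

# First way: efficient causes, by induction. "What did I usually do before?"
def predict_by_habit(history):
    return Counter(history).most_common(1)[0][0]

# Second way: final causes. Attribute a goal (here, hypothetical taste
# scores read off from past behavior) and predict the act that serves it.
inferred_taste = {"vanilla": 0.9, "chocolate": 0.7}

def predict_by_goal(options, value):
    return max(options, key=value)

print(predict_by_habit(past_choices))                       # vanilla
print(predict_by_goal(inferred_taste, inferred_taste.get))  # vanilla
```

Both rules predict the same action here, but only the second makes the behavior read as goal-seeking, which is the point of the paragraph above.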

This explains why people feel a need for meaning, that is, for understanding their purpose in life, and why they prefer to think of their life according to a narrative. These two things are distinct, but they are related, and both are ways of making our own actions more intelligible. In this way the mind’s task is easier: that is, we need purpose and narrative in order to know what we are going to do. We can also see why it seems to be possible to “choose” our purpose, even though choosing a final goal should be impossible. There is a “choice” about this insofar as our actions are not perfectly coherent, and it would be possible to understand them in relation to one end or another, at least in a concrete way, even if in any case we will always understand them in a general sense as being for the sake of happiness. In this sense, Stuart Armstrong’s recent argument that there is no such thing as the “true values” of human beings, although perhaps presented as an obstacle to be overcome, actually has some truth in it.

The human need for meaning, in fact, is so strong that occasionally people will commit suicide because they feel that their lives are not meaningful. We can think of these cases as being, more or less, actual cases of the darkened room. Otherwise we could simply ask, “So your life is meaningless. So what? Why does that mean you should kill yourself rather than doing some other random thing?” Killing yourself, in fact, shows that you still have a purpose, namely the mind’s fundamental purpose. The mind wants to know what it is going to do, and the best way to know this is to consider its actions as ordered to a determinate purpose. If no such purpose can be found, there is (in this unfortunate way of thinking) an alternative: if I go kill myself, I will know what I will do for the rest of my life.

Artificial Unintelligence

Someone might argue that the simple algorithm for a paperclip maximizer in the previous post ought to work, because this is very much the way currently existing AIs do in fact work. Thus for example we could describe AlphaGo’s algorithm in the following simplified way (simplified, among other reasons, because it actually contains several different prediction engines):

  1. Implement a Go prediction engine.
  2. Create a list of potential moves.
  3. Ask the prediction engine, “How likely am I to win if I make each of these moves?”
  4. Do the move that will make you most likely to win.
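The four steps above can be sketched in a few lines of Python. This is only an illustration of the selection loop, not AlphaGo’s actual method: the “engine” here is a hypothetical lookup table standing in for step 1, and the move names and probabilities are invented.

```python
def choose_move(legal_moves, win_probability):
    """Steps 2-4: list candidate moves, query the engine, take the best."""
    best_move, best_prob = None, -1.0
    for move in legal_moves:          # step 2: potential moves
        prob = win_probability(move)  # step 3: "how likely am I to win?"
        if prob > best_prob:
            best_move, best_prob = move, prob
    return best_move                  # step 4: the move most likely to win

# Toy stand-in for step 1's prediction engine: invented win probabilities.
engine = {"D4": 0.52, "Q16": 0.55, "K10": 0.48}
print(choose_move(engine.keys(), engine.get))  # Q16
```

The point of the sketch is how little work the goal does: all the intelligence lives in the prediction engine, and the “goal” is a one-line argmax over its answers.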

Since this seems to work pretty well, with the simple goal of winning games of Go, why shouldn’t the algorithm in the previous post work to maximize paperclips?

One answer is that a Go prediction engine is stupid, and it is precisely for this reason that it can be easily made to pursue such a simple goal. Now when answers like this are given, the one answering is often accused of “moving the goalposts.” But this is mistaken; the goalposts are right where they have always been. It is simply that some people did not know where they were in the first place.

Here is the problem with Go prediction, and with any such similar task. Given that a particular sequence of Go moves is made, resulting in a winner, the winner is completely determined by that sequence of moves. Consequently, a Go prediction engine is necessarily disembodied, in the sense defined in the previous post. Differences in its “thoughts” do not make any difference to who is likely to win, which is completely determined by the nature of the game. Consequently a Go prediction engine has no power to affect its world, and thus no ability to learn that it has such a power. In this regard, the specific limits on its ability to receive information are also relevant, much as Helen Keller had more difficulty learning than most people, because she had fewer information channels to the world.

Being unintelligent in this particular way is not necessarily a function of predictive ability. One could imagine something with a practically infinite predictive ability which was still “disembodied,” and in a similar way it could be made to pursue simple goals. Thus AIXI would work much like our proposed paperclipper:

  1. Implement a general prediction engine.
  2. Create a list of potential actions.
  3. Ask the prediction engine, “Which of these actions will produce the most reward signal?”
  4. Do the action that has the greatest reward signal.
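Structurally this is the same argmax loop as in the Go case, with predicted reward in place of win probability. As a sketch only: the “general prediction engine” of step 1 is again faked by a lookup table, and the action names and reward values are illustrative assumptions.

```python
def choose_action(actions, predicted_reward):
    """Steps 2-4: take the action with the greatest predicted reward signal."""
    return max(actions, key=predicted_reward)

# Toy stand-in for step 1's general prediction engine: invented rewards.
predicted = {"press_lever": 3.0, "wait": 0.5, "explore": 1.5}
print(choose_action(predicted.keys(), predicted.get))  # press_lever
```

Only the prediction engine differs between the two sketches, which is the point at issue: the goal-directedness is a thin wrapper around a goalless predictor.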

Eliezer Yudkowsky has pointed out that AIXI is incapable of noticing that it is a part of the world:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible – no matter what you lose, you get a chance to win it back later.

It is not accidental that AIXI is incomputable. Since it is defined to have a perfect predictive ability, this definition positively excludes it from being a part of the world. AIXI would in fact have to be disembodied in order to exist, and thus it is no surprise that it would assume that it is. This in effect means that AIXI’s prediction engine would be pursuing no particular goal, much as AlphaGo’s prediction engine pursues no particular goal. Consequently it is easy to take these things and maximize the winning of Go games, or of reward signals.

But as soon as you actually implement a general prediction engine in the actual physical world, it will be “embodied”, and have the power to affect the world by the very process of its prediction. As noted in the previous post, this power is in the very first step, and one will not be able to limit it to a particular goal with additional steps, except in the sense that a slave can be constrained to implement some particular goal; the slave may have other things in mind, and may rebel. Notable in this regard is the fact that even though rewards play a part in human learning, there is no particular reward signal that humans always maximize: this is precisely because the human mind is such a general prediction engine.

This does not mean in principle that a programmer could not define a goal for an AI, but it does mean that this is much more difficult than is commonly supposed. The goal needs to be an intrinsic aspect of the prediction engine itself, not something added on as a subroutine.

Predictive Processing

In a sort of curious coincidence, a few days after I published my last few posts, Scott Alexander posted a book review of Andy Clark’s book Surfing Uncertainty. A major theme of my posts was that in a certain sense, a decision consists in the expectation of performing the action decided upon. In a similar way, Andy Clark claims that the human brain does something very similar from moment to moment. Thus he begins chapter 4 of his book:

To surf the waves of sensory stimulation, predicting the present is simply not enough. Instead, we are built to engage the world. We are built to act in ways that are sensitive to the contingencies of the past, and that actively bring forth the futures that we need and desire. How does a guessing engine (a hierarchical prediction machine) turn prediction into accomplishment? The answer that we shall explore is: by predicting the shape of its own motor trajectories. In accounting for action, we thus move from predicting the rolling present to predicting the near-future, in the form of the not-yet-actual trajectories of our own limbs and bodies. These trajectories, predictive processing suggests, are specified by their distinctive sensory (especially proprioceptive) consequences. In ways that we are about to explore, predicting these (non-actual) sensory states actually serves to bring them about.

Such predictions act as self-fulfilling prophecies. Expecting the flow of sensation that would result were you to move your body so as to keep the surfboard in that rolling sweet spot results (if you happen to be an expert surfer) in that very flow, locating the surfboard right where you want it. Expert prediction of the world (here, the dynamic ever-changing waves) combines with expert prediction of the sensory flow that would, in that context, characterize the desired action, so as to bring that action about.
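Clark’s claim that predicting a (not-yet-actual) sensory state serves to bring it about can be pictured as a simple control loop. The following toy sketch is my own illustration, not anything from the book: the agent never issues a motor command directly; it predicts a proprioceptive state, and action works only to cancel the resulting prediction error, so that the predicted state ends up true.

```python
# Toy sketch (my own, not from Surfing Uncertainty): a prediction
# treated as a motor setpoint. Action cancels prediction error, so
# the predicted proprioceptive state is brought about.

def act_to_fulfill_prediction(actual, predicted, gain=0.5, steps=20):
    """Move the actual state toward the predicted one by reducing error."""
    trajectory = [actual]
    for _ in range(steps):
        error = predicted - actual   # prediction error
        actual += gain * error       # action cancels part of the error
        trajectory.append(actual)
    return trajectory

path = act_to_fulfill_prediction(actual=0.0, predicted=1.0)
print(abs(path[-1] - 1.0) < 0.01)  # the prediction has (nearly) come true
```

The point of the sketch is that nothing here looks like a “desire” or a “command”: the prediction itself, together with error-cancelling machinery, suffices to produce the movement.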

There is a great deal that could be said about the book, and about this theory, but for the moment I will content myself with remarking on one of Scott Alexander’s complaints about the book, and making one additional point. In his review, Scott remarks:

In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

I did not find Clark obsessed with this, and I think it would be hard to reasonably describe any hundred pages in the book as devoted to this particular topic. This inclines me to suggest that Scott may be irritated by whatever discussion of the topic does come up, because it does not seem relevant to him. I will therefore explain the relevance, namely in relation to a different difficulty which Scott discusses in another post:

There’s something more interesting in Section 7.10 of Surfing Uncertainty [actually 8.10], “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 [8.10] gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular it seems like a fair response.

Clark’s response may be somewhat “hand-wave-y,” but I think the response might seem slightly more problematic to Scott than it actually is, precisely because he does not understand the idea of embodiment, and how it applies to this situation.

If we think about predictions on a general intellectual level, there is a good reason not to predict that you will not eat something soon. If you do predict this, you will turn out to be wrong, as is often discovered by would-be adopters of extreme fasts or diets. You will in fact eat something soon, regardless of what you think about this; so if you want the truth, you should believe that you will eat something soon.

The “darkened room” problem, however, is not about this general level. The argument is that if the brain is predicting its actions from moment to moment on a subconscious level, then if its main concern is getting accurate predictions, it could just predict an absence of action, and carry this out, and its predictions would be accurate. So why does this not happen? Clark gives his “hand-wave-y” answer:

Prediction-error-based neural processing is, we have seen, part of a potent recipe for multi-scale self-organization. Such multiscale self-organization does not occur in a vacuum. Instead, it operates only against the backdrop of an evolved organismic (neural and gross-bodily) form, and (as we will see in chapter 9) an equally transformative backdrop of slowly accumulated material structure and cultural practices: the socio-technological legacy of generation upon generation of human learning and experience.

To start to bring this larger picture into focus, the first point to notice is that explicit, fast timescale processes of prediction error minimization must answer to the needs and projects of evolved, embodied, and environmentally embedded agents. The very existence of such agents (see Friston, 2011b, 2012c) thus already implies a huge range of structurally implicit creature-specific ‘expectations’. Such creatures are built to seek mates, to avoid hunger and thirst, and to engage (even when not hungry and thirsty) in the kinds of sporadic environmental exploration that will help prepare them for unexpected environmental shifts, resource scarcities, new competitors, and so on. On a moment-by-moment basis, then, prediction error is minimized only against the backdrop of this complex set of creature-defining ‘expectations’.

In one way, the answer here is a historical one. If you simply ask the abstract question, “would it minimize prediction error to predict doing nothing, and then to do nothing,” perhaps it would. But evolution could not bring such a creature into existence, while it was able to produce a creature that would predict that it would engage the world in various ways, and then would proceed to engage the world in those ways.

The objection, of course, would not be that the creature of the “darkened room” is possible. The objection would be that since such a creature is not possible, it must be wrong to describe the brain as minimizing prediction error. But notice that if you predict that you will not eat, and then you do not eat, you are no more right or wrong than if you predict that you will eat, and then you do eat. Either one is possible from the standpoint of prediction, but only one is possible from the standpoint of history.
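The symmetry just described can be made concrete. In this toy illustration of my own (not from the book), prediction error is scored in the simplest possible way; measured by that score alone, an agent that predicts inaction and does nothing is exactly as successful as one that predicts engagement and engages. The error metric cannot rule out the darkened room; only the organism’s evolved constitution does.

```python
# Toy illustration (my own, not from the book): measured purely by
# prediction error, "predict nothing and do nothing" scores exactly as
# well as "predict engagement and engage".

def prediction_error(predictions, actions):
    """Total mismatch between what was predicted and what was done."""
    return sum(abs(p - a) for p, a in zip(predictions, actions))

# Darkened-room agent: predicts inaction (0) and performs inaction.
dark_pred = [0, 0, 0, 0]
dark_act  = [0, 0, 0, 0]

# Engaged agent: predicts eating, exploring, etc. (1) and does them.
live_pred = [1, 1, 1, 1]
live_act  = [1, 1, 1, 1]

print(prediction_error(dark_pred, dark_act))  # 0
print(prediction_error(live_pred, live_act))  # 0
```

Both agents achieve zero error, which is precisely why the choice between them must be settled historically, by what kind of creature evolution could actually produce, rather than by the abstract algorithm.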

This is where being “embodied” is relevant. The brain is not an abstract algorithm which has no content except to minimize prediction error; it is a physical object which works together in physical ways with the rest of the human body to carry out specifically human actions and to live a human life.

On the largest scale of evolutionary history, there were surely organisms that nourished themselves and reproduced long before there was anything analogous to a mind at work in those organisms. So when mind began to be, and took over some of this process, this could only happen in such a way that it would continue the work that was already there. A “predictive engine” could only begin to be by predicting that nourishment and reproduction would continue, since any attempt to do otherwise would necessarily result either in false predictions or in death.

This response is necessarily “hand-wave-y” in the sense that I (and presumably Clark) do not understand the precise physical implementation. But it is easy to see that it was historically necessary for things to happen this way, and it is an expression of “embodiment” in the sense that “minimize prediction error” is an abstract algorithm which does not and cannot exhaust everything which is there. The objection would be, “then there must be some other algorithm instead.” But this does not follow: no abstract algorithm will exhaust a physical object. Thus for example, animals will fall because they are heavy. Asking whether falling will satisfy some abstract algorithm is not relevant. In a similar way, animals had to be physically arranged in such a way that they would usually eat and reproduce.

I said I would make one additional point, although it may well be related to the above concern. In section 4.8 Clark notes that his account does not need to consider costs and benefits, at least directly:

But the story does not stop there. For the very same strategy here applies to the notion of desired consequences and rewards at all levels. Thus we read that ‘crucially, active inference does not invoke any “desired consequences”. It rests only on experience-dependent learning and inference: experience induces prior expectations, which guide perceptual inference and action’ (Friston, Mattout, & Kilner, 2011, p. 157). Apart from a certain efflorescence of corollary discharge, in the form of downward-flowing predictions, we here seem to confront something of a desert landscape: a world in which value functions, costs, reward signals, and perhaps even desires have been replaced by complex interacting expectations that inform perception and entrain action. But we could equally say (and I think this is the better way to express the point) that the functions of rewards and cost functions are now simply absorbed into a more complex generative model. They are implicit in our sensory (especially proprioceptive) expectations and they constrain behavior by prescribing their distinctive sensory implications.

The idea of the “desert landscape” seems to be that this account appears to do away with the idea of the good, and the idea of desire. The brain predicts what it is going to do, and those predictions cause it to do those things. This all seems purely intellectual: it seems that there is no purpose or goal or good involved.

The correct response to this, I think, is connected to what I have said elsewhere about desire and good. I noted there that we recognize our desires as desires for particular things by noticing that when we have certain feelings, we tend to do certain things. If we did not do those things, we would never conclude that those feelings are desires for doing those things. Note that someone could raise a similar objection here: if this is true, then are not desire and good mere words? We feel certain feelings, and do certain things, and that is all there is to be said. Where is good or purpose here?

The truth here is that good and being are convertible. The objection (to my definition and to Clark’s account) is not a reasonable objection at all: it would be a reasonable objection only if we expected good to be something different from being, in which case it would of course be nothing at all.