This is in response to an issue raised by Scott Alexander on his Tumblr.
I actually responded to the dark room problem of predictive processing earlier. However, here I will construct an imaginary model which will hopefully explain the same thing more clearly and briefly.
Suppose there is a dust particle which falls towards the ground 90% of the time, and is blown higher into the air 10% of the time.
Now suppose we bring the dust particle to life, and give it the power of predictive processing. If it predicts it will move in a certain direction, this will tend to cause it to move in that direction. However, this causal power is not infallible. So we can suppose that if it predicts it will move where it was going to move anyway, in the dead situation, it will move in that direction. But if it predicts it will move in the opposite direction from where it would have moved in the dead situation, then let us suppose that it will move in the predicted direction 75% of the time, while in the remaining 25% of the time, it will move in the direction the dead particle would have moved, and its prediction will be mistaken.
Now if the particle predicts it will fall towards the ground, then it will fall towards the ground 97.5% of the time, and in the remaining 2.5% of the time it will be blown higher in the air.
Meanwhile, if the particle predicts that it will be blown higher, then it will be blown higher in 77.5% of cases, and in 22.5% of cases it will fall downwards.
Since a 97.5% chance of being right involves less uncertainty than a 77.5% chance, the dust particle will minimize uncertainty by consistently predicting that it will fall downwards.
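The arithmetic behind these two figures can be checked with a short sketch. The constants below are taken directly from the model above; the function name `p_correct` is just an illustrative label, not anything from the original post.

```python
# Probabilities from the model: the dead particle falls with
# probability 0.9 and rises with probability 0.1. A prediction that
# matches the dead trajectory always comes true; a prediction that
# opposes it comes true with probability 0.75.

P_FALL_DEAD = 0.9   # dead particle falls
P_RISE_DEAD = 0.1   # dead particle is blown higher
P_OVERRIDE = 0.75   # chance a prediction overrides the dead trajectory

def p_correct(predicts_fall: bool) -> float:
    """Probability the living particle moves in the predicted direction."""
    if predicts_fall:
        # Falls anyway (0.9), plus overrides a rise (0.1 * 0.75).
        return P_FALL_DEAD + P_RISE_DEAD * P_OVERRIDE
    else:
        # Rises anyway (0.1), plus overrides a fall (0.9 * 0.75).
        return P_RISE_DEAD + P_FALL_DEAD * P_OVERRIDE

print(p_correct(True))   # 0.975 — predicting it will fall
print(p_correct(False))  # 0.775 — predicting it will rise
```

Either prediction is more likely right than wrong, but predicting the direction the dead particle would have taken anyway is the more reliable of the two, which is the whole point.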
The application to sex and hunger and so on should be evident.
2 thoughts on “How Sex Minimizes Uncertainty”
>The application to sex and hunger and so on should be evident.
It isn’t in any way whatsoever.
The dust particle already does certain things before it begins predicting, and when it begins to predict, it predicts that it will continue to do those things. Consequently it does them even more than before.
In the same way, consider the belief that you will eat something sometime within the next week. Why do you believe that? For one thing, historically you have pretty much been eating every day, and you have no reason to think something else will happen. Suppose instead that you decide you aren’t going to eat for a week. It’s not inconceivable that you will be right, but it’s pretty unlikely. At some point during the week you will probably eat anyway. This is like the dust particle deciding to go upwards: it might succeed, but it will be even more likely to succeed if it decides to do what it was going to do anyway. In the same way, you will be more likely to be right if you think you are going to eat, than if you think that you are not going to eat.
The same thing applies to eating on a lower level. Eating presumably developed from some earlier kind of energy transfer which was already happening. Once an animal acquired some sort of prediction engine, its prediction was more likely to be right if it predicted it would continue to transfer energy, than if it predicted that it wasn’t going to do so.