(Reply to Krisztian Gabris)
KG: “Take the pain example… what would happen if for some reason… a decision is made which goes against the evolutionarily ingrained rules of the system. For example, a hand is left in the fire… What would be the punishment of such behavior in a Turing robot (other than tissue damage)? Nothing, the robot would go on it’s own business with signals and internal warnings, but it would not feel the pain. Whereas a human would… feel pain, and would take away the hand… not only because of [genetic] programming, but because of… feeling pain.”
Yours is the natural intuitive explanation for why we feel — the one that feels right. “Why,” after all, is a causal question: Why do we pull our hand out of the fire? Yes, fire causes tissue damage, but that’s not what makes us withdraw our hand (unless we are anaesthetized): It’s because it hurts!
So surely that’s what pain’s for: To signal tissue damage by causing pain to be felt.
Why? So you’ll withdraw your hand. Because if your ancestors had been indifferent to tissue damage, they would not have had surviving descendants.
So you withdraw your hand because it hurts. And it hurts in order to cause you to feel like withdrawing your hand — and therefore you withdraw your hand.
Injury → pain → withdraw hand.
And the reason the feeling of pain evolved is because those whose ancestors felt pain were more likely to feel like withdrawing their hands than those who did not.
But let us note that what was needed, for survival, was to withdraw the injured hand — an act, not a sentiment. The pain was a means, not an end. It’s an extra step; and, as I will try to illustrate with other examples, a superfluous extra step, practically speaking. So the hard problem is to explain how and why this extra, apparently superfluous step evolved at all.
Suppose that what you had chosen for your evolutionary example of the adaptive trait for “motivational” scrutiny had been — rather than the withdrawing of the injured hand — the growing of wings, or the beating of the heart or the dilating of the pupil of the eye.
You’ll perhaps find it strange to ask about feeling the “motivation” to grow wings (though it’s a reasonable question), because growing is not something we ordinarily think of ourselves as “doing.” But note that the very same question you asked about the evolution of pain — and the “punishment” for non-withdrawal of the injured hand if no one feels the “motivation” to withdraw it — applies to the non-growth of wings. And the answer is the same:
If we are talking about evolution — which means traits that increase the likelihood of survival and reproduction — then for both the disposition to grow wings and the disposition to withdraw the hand from injury the “reward” is increased likelihood of survival and reproduction; and for both the lack of the disposition to grow wings and the lack of the disposition to withdraw the hand from injury the “punishment” is decreased likelihood of survival and reproduction.
The very same evolutionary reward/punishment scenario also applies to the disposition of our hearts to beat, which is even more obviously something that our bodies do — or, if you want an example of something we do in response to a circumstantial stimulus rather than constantly, there’s pupillary dilation to light intensity.
Or, if you want something we do voluntarily rather than involuntarily — although that’s begging the question, because it is really the involuntary/voluntary distinction that poses the “hard” problem and calls for explanation — consider the implicit improvement in skills that occurs without any sense of having done anything deliberately (sometimes even without the feeling that we have improved) in implicit learning, or the changes in our dispositions caused by subtle Pavlovian conditioning or Skinnerian reinforcement when we don’t even feel that our dispositions are changing, or the voluntary take-over of breathing — usually involuntary, like the heart-beat.
And a disposition is a disposition to do, whether it’s to grow, to beat, to dilate, to withdraw, to salivate, to smile or to breathe. So the question remains: Why the extra intermediate step of feeling, when the reward and punishment come from the disposition to do?
The very same reasoning applies to learning itself: We learn to do things — such as what to eat and what to avoid — by trial and error and reward/punishment. The consequences of doing the right thing feel good and the consequences of doing the wrong thing feel bad, so we learn to do the right thing. “Motivation” again. But again, it is the disposition to do the right thing that matters; the feeling of reward and punishment is an extra. Why? Both in evolution and in learning there are consequences (enhanced survival and reproduction in the case of evolution, and enhanced functioning and performance in the case of learning: eating nourishing things gives us energy, eating toxic things makes us sick) and the consequences are sufficient to guide our dispositions to do. But why is any of that felt rather than just done?
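The trial-and-error story can be sketched computationally. Below is a minimal, hypothetical learner (the two "foods," their payoffs, and the learning rate are all invented for illustration): its disposition to choose the nourishing option is reshaped directly by the consequences of its choices, and nothing anywhere in the loop is felt.

```python
import random

random.seed(0)  # deterministic for the demo

# Two options: index 0 is "nourishing" (payoff +1), index 1 is "toxic" (-1).
# The "disposition" is just a pair of choice weights; consequences reshape
# the weights directly. Nothing in this loop is felt.
weights = [0.5, 0.5]
payoffs = [1.0, -1.0]
LEARNING_RATE = 0.1

def choose():
    # Pick an option in proportion to the current disposition.
    return 0 if random.random() < weights[0] / sum(weights) else 1

for _ in range(200):
    action = choose()
    weights[action] = max(0.01, weights[action] + LEARNING_RATE * payoffs[action])

print(weights[0] > weights[1])  # the disposition now favors the nourishing option
```

The point of the sketch is only that reward/punishment suffices to shape the disposition to do; at no step does the program need an intermediate state of feeling good or bad.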
These questions are hard not only because of the underlying problem of causality, but because our intuitions keep telling us that it’s obvious that we need to feel. Yet the causal role of feeling is anything but obvious, if looked at objectively, which means functionally.
You assumed that a Turing robot would not feel. That is not at all certain. But let’s consider today’s rudimentary robots, which are as unlikely to feel as a toaster or a stone. Yet even they can already be designed to withdraw damaged limbs, or to learn to withdraw damaged limbs. They need sensors, of course, but it’s not at all clear why they would need feelings (even if we had the slightest clue of how to design feelings!), if the objective is to do — or to learn to do — what needs to be done in order to survive and function. They need to detect tissue damage, and then they need to be disposed to do — or disposed to learn to do — whatever needs to be done.
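A toy version of such a robot takes only a few lines (the damage threshold and the sensor are invented for illustration): it detects "tissue damage" and withdraws, a disposition to do, with no feeling anywhere in the mechanism.

```python
class Robot:
    """A toy robot that withdraws a damaged limb: pure doing, no feeling."""

    DAMAGE_THRESHOLD = 100  # illustrative units of heat

    def __init__(self):
        self.limb_extended = True

    def damage_sensor(self, heat):
        # "Tissue damage" here is just a number crossing a threshold.
        return heat > self.DAMAGE_THRESHOLD

    def step(self, heat):
        # The disposition to do: if damage is detected, withdraw the limb.
        if self.damage_sensor(heat):
            self.limb_extended = False
        return self.limb_extended


robot = Robot()
print(robot.step(20))   # below threshold: limb stays extended (True)
print(robot.step(150))  # damage detected: limb withdrawn (False)
```

Everything the survival story requires (detection, disposition, action) is present; what is conspicuously not required by the mechanism is that any of it be felt.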
If (sensible) anti-Creationism impels us to reject arguments from robotic design, consider that evolution can be simulated computationally in artificial life simulations; and the kinds of traits we build into our robots can therein be shown to evolve by random variation and selection; the same can be done for computer models of learning (which just involve a change in simulation time scale), including computer models of the evolution of the disposition to learn (e.g., Baldwinian evolution).
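A minimal artificial-life sketch of that claim (the population size, mutation rate and fitness function are illustrative assumptions, not anyone's published model): a heritable disposition to withdraw from injury, subjected to nothing but random variation and selection, spreads through the population without any feeling entering the model.

```python
import random

random.seed(1)  # deterministic for the demo

POP_SIZE, GENERATIONS = 100, 80

def fitness(withdraw_prob):
    # Survival chance rises with the disposition to withdraw from injury.
    # The numbers are illustrative; no feeling appears anywhere in the model.
    return 0.05 + 0.95 * withdraw_prob

# Each agent is just one heritable trait: its probability of withdrawing.
population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Reproduce in proportion to fitness (selection), then perturb the
    # trait slightly (random variation), clipping it to [0, 1].
    parents = random.choices(population,
                             weights=[fitness(p) for p in population],
                             k=POP_SIZE)
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.02))) for p in parents]

mean_disposition = sum(population) / POP_SIZE
print(round(mean_disposition, 2))  # well above the ~0.5 starting mean
```

The "reward" and "punishment" here are exactly the ones named above, increased and decreased likelihood of reproduction, and they operate on the disposition to do without any intermediate step of feeling.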
And lest we propose the superior power of cognition over Pavlovian and Skinnerian learning, remember that the kind of information processing underlying cognition can be implemented (along with its power and benefits) computationally, in unfeeling machines.
So there is definitely a problem here, of explaining the ostensibly superfluous causal role of feeling in doing. And not only do our intuitions fail us, but so does every objective attempt at the kind of causal explanation that serves us so well in just about every other functional dynamic under the sun.
To be continued in the 2012 Summer School on the Evolution and Function of Consciousness…