(Reply to John Campbell)
JC: “I wonder whether ‘feeling’ is really the right notion to fasten onto here. The obvious problem with ‘blind-semantics’ as illustrated by [Searle’s] Chinese Room is that language-use is being described without giving any place to its relation to perceptual experience. The connection between meaning and the hard problems comes when we try to characterize the relation between meaning and our sensory awareness of our surroundings.”
Meaning = Sensorimotor Grounding + Semantic Interpretability + Feeling. Yes, computation (formal symbol manipulation) alone is not enough for meaning, even if it has a systematic semantic interpretation. This is the “symbol grounding problem” (one of the “easy” problems).
The solution to the symbol grounding problem is to ground the internal symbols (words) of a Turing robot in its autonomous sensorimotor capacity to detect, categorize, manipulate and describe the symbols’ external referents.
But although grounding is necessary for meaning, it is not sufficient. The other necessary component is feeling:
It feels like something to mean something. If I say “The cat is on the mat,” I am not only generating a syntactically well-formed string of symbols — part of a symbol system that also allows me to systematically generate other symbol strings, such as “The cat is not on the mat” or “The mat is on the cat” or “The rat is on the mat,” etc. — all of which are systematically interpretable (by an external interpreter) as meaning what they mean in English.
In addition to that semantic interpretability to an external interpreter, I am also able, autonomously, to detect and interact with cats and mats, and cats being on mats, etc., with my senses and body, and able to interact with them in a way that is systematically coherent with the way in which my symbol strings are interpretable to an external interpreter.
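The purely formal half of this contrast can be made concrete with a toy sketch (my illustration, not part of the original argument): a few lines of Python that generate systematically related strings like those above by blind symbol manipulation alone. An external interpreter can read the outputs as meaningful English, but nothing in the program detects, manipulates, or imagines a cat or a mat — it is "blind semantics" in miniature.

```python
# Toy illustration of "blind semantics": purely formal string manipulation.
# The outputs are interpretable (by an external interpreter) as English
# sentences, but the program has no sensorimotor contact with their referents.

def negate(sentence: str) -> str:
    """Insert 'not' after the copula 'is' -- a purely syntactic rule."""
    return sentence.replace(" is ", " is not ", 1)

def swap_subjects(sentence: str) -> str:
    """Exchange the noun phrases around 'is on' -- again purely syntactic."""
    subject, _, object_ = sentence.rstrip(".").partition(" is on ")
    return f"{object_.capitalize()} is on {subject.lower()}."

base = "The cat is on the mat."
print(negate(base))         # The cat is not on the mat.
print(swap_subjects(base))  # The mat is on the cat.
```

The point of the sketch is that the rules operate only on token shapes; the systematicity is real, but the interpretation lives entirely in the reader.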
I have no idea whether there can be “Zombies” — Turing robots whose doings and doing-capacity are indistinguishable from our own but that do not feel — although I doubt it. (I happen to believe that anything that could do what a normal human being can do — indistinguishably from a human, to a human, for a lifetime — would feel.)
But my belief is irrelevant, because there’s no way of knowing whether or not a Turing robot (or biorobot) is a Zombie: no way of determining whether there can be Turing-scale grounding without feeling. Worse, either way there is no explanation of feeling: neither an explanation of how and why a grounded Turing robot feels, if it is not a Zombie, nor an explanation of how and why we feel and the Turing robot doesn’t, if it’s a Zombie.
But what is clear is what the difference would be: the presence or absence of feeling. And that is also the difference between meaning something and merely going through the motions.
JC: “I agree that there is such a thing as ‘speaking with feeling’, or ‘feeling the full weight of what one is saying’, for example. But the fundamental point of contact with the hard problem has to do with sensory awareness. Once one has grasp of meaning, suitably hooked up to sensory experience, in an agent with some kind of emotional life, then as a consequence of that there will be such a thing as ‘it feeling like something to mean something’, but that’s an epiphenomenon.”
It may well be that most of what it feels like to mean “the cat is on the mat” is what it feels like to recognize and to imagine cats, mats, and cats being on mats.
But the bottom line is still that to say (or think) and mean “the cat is on the mat” there has to be something it feels like to say (or think) and mean “the cat is on the mat” — and that for someone to be saying (or thinking) and meaning “the cat is on the mat” they have to be feeling something like that. Otherwise it’s still just “blind semantics,” even if it’s “suitably hooked up” (grounded) in sensorimotor capacity.
(Sensory “experience,” by the way, would be an equivocal weasel-word, insofar as feeling is concerned: is it felt experience or just “done” experience, as in a toaster or a toy robot?)
So I’m definitely not speaking of “speaking with feeling” in the sense of emphasis, when I say there’s something it feels like to mean something (or understand something).
I mean that to mean “the cat is on the mat” (whether in speaking or just thinking) is not just to be able to generate word strings in a way that is semantically interpretable to an external interpreter, nor even to be able to interact with the cats and mats in a way that coheres with that semantic interpretation. There is also something it feels like to mean “the cat is on the mat.” And without feeling something like that, all there is is doing.
Now explaining how and why we feel rather than just do is the hard problem, whether it pertains to sensing objects or meaning/understanding sentences. I’d call that a profound explanatory gap, rather than an “epiphenomenon.”
(But perhaps all that Professor Campbell meant by “epiphenomenon” here was that what it feels like to be saying and meaning a sentence is [roughly] what it feels like to be imaging or otherwise “calling to mind” its referents. I’d call that feeling “derivative” rather than “epiphenomenal,” but that’s just a terminological quibble, as long as we agree that meaning must be not only grounded but felt.)