(Reply to John Campbell-2)
JC: “You can’t address the symbol-grounding problem without looking at relations to sensory awareness. Someone who uses, e.g., words for shapes and colors, but has never had experience of shapes or colors, doesn’t know what they’re talking about; it’s just empty talk (even if they have perceptual systems remote from consciousness that allow them to use the words differentially in response to the presence of particular shapes or colors around them). Symbol-grounding shouldn’t be discussed independently of phenomena of consciousness.”
The symbol grounding problem first reared its head in the context of John Searle’s Chinese Room Argument. Searle showed that computation (formal symbol manipulation) alone is not enough to generate meaning, even at Turing-Test scale. He was saying things coherently in Chinese, but he did not understand, hence mean, anything he was saying. And the incontrovertible way he discerned that he was not understanding was not by noting that his words were not grounded in their referents, but by noting that he had no idea what he was saying — or even that he was saying anything. And he was able to make that judgment because he knew what it felt like to understand (or not understand) what he was saying.
The natural solution was to scale up the Turing Test from verbal performance capacity alone to full robotic performance capacity. That would ground symbol use in the capacity for interacting with the things the symbols are about, Turing-indistinguishably from a real human being, for a lifetime. But it’s not clear whether that would give the words meaning, rather than just grounding.
Now you may doubt that there could be a successful Turing robot at all (but then I think you would have to explain why you think not). Or, like me, you may doubt that there could be a successful Turing robot unless it really did feel (but then I think you would have to explain — as I cannot — why you think it would need to feel).
If I may transcribe the paragraph quoted above, with some simplifications, I think I can bring out the fact that an explanation is still called for. But it must be noted that I am — and have been all along — using “feeling” synonymously with, and in place of, “consciousness”:
“JC: You can’t address the symbol-grounding problem without looking at relations to feeling. A Turing robot that uses words for shapes and colors, but has never felt what it feels like to see shapes or colors, doesn’t know what it’s talking about; it’s just empty talk (even if it has unfelt sensorimotor and internal systems that allow it to speak and act indistinguishably from us). Symbol-grounding shouldn’t be discussed independently of feeling.”
I think you are simply assuming that feeling (consciousness) is a prerequisite for being able to do what we can do, whereas explaining how and why that is true is precisely the burden of the hard problem.
You go on to write the following (but I will consistently use “feeling” for “consciousness” to make it clearer):
JC: “Trying to leave out problems of [feeling] in connection with symbol-grounding, and then [to] bring [it] back in with the talk of ‘feeling’, makes for bafflement. If you stick a pin in me and I say ‘That hurt’, is the pain itself the feeling of meaning? The talk about ‘feeling of meaning’ here isn’t particularly colloquial, but it hasn’t been given a plain theoretical role either.”
I leave feeling out of symbol grounding because I don’t think they are necessarily the same thing. (I doubt that there could be a grounded Turing robot that does not feel, but I cannot explain how or why.)
It feels like one thing to be hurt, and it feels like another thing to say and mean “That hurt.” The latter may draw on the former to some extent, but (1) being hurt and (2) saying and meaning “That hurt” are different, and feel different. The only point is that (2) feels like something too: that’s what makes it meant rather than just grounded.
Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335–346.