Anonymous: “Did ChatGPT itself ever state a significant, ‘insightful’ idea during your ‘Language Writ Large’ dialogue?”
Did GPT provide a significant new insight? It’s very hard to say, because I cannot tell whether it said anything that didn’t come from its enormous database of the words of others. Of course, a lot of human insights do come from recombining the words of others; recombination is part of creative work, both literary and scientific. (Hadamard and others thought that such recombination was part of scientific creativity too.) And it occurs in nonverbal areas like music too (though we may not speak of this as “insight”).
I think most of what GPT does is recombination and compressive synthesis of the words of others; and if what it says is new to me, that doesn’t mean it’s new, or original, “from” GPT. But it doesn’t mean it isn’t, either.
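To make “recombination of the words of others” concrete in its crudest possible form, here is a toy sketch (a bigram chain; nothing like a transformer’s actual mechanism, and the corpus and function names are invented purely for illustration): it can only emit word sequences stitched together from word transitions it has already seen, yet what it emits can still be new to the reader.

```python
# Toy illustration only: a bigram chain that "recombines the words of
# others." Every adjacent word pair it emits already occurs in its
# corpus; any novelty is novelty of arrangement, not of material.
# (A transformer's compressive synthesis is of course far more than this.)
import random
from collections import defaultdict

corpus = (
    "language is grounded in sensorimotor experience and "
    "thought is expressed in language"
).split()

# Record which words followed which in the "database of others' words".
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def recombine(start: str, length: int = 8) -> str:
    """Chain observed word-to-word transitions into a new sequence."""
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # dead end: this word was never followed by another
            break
        words.append(random.choice(options))
    return " ".join(words)

print(recombine("language"))  # e.g. "language is expressed in language"
```

The sentence it prints may never have occurred in the corpus, but every step of it did; that is recombination in miniature.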
I expect that what you might have in mind with your question is something related to embodiment, situatedness, sensorimotor grounding.
The AI/transformer community thinks that if anything requiring those is missing so far, it will be provided by “multimodal” grounding. But I tried to suggest in Writ Large why I didn’t think that would be enough: language is not just another one of the sensorimotor modalities, whether spoken or written. It is not an input or output modality but a way of producing, transmitting, and receiving propositional thought. That thought is grounded in sensorimotor experience, but it is not itself sensorimotor experience; it is a verbal expression of it: a (grounded) code for expressing and communicating thought.
Chomsky thought, or once thought, that verbal thought was the only form of thought. That was of course wrong. Nonverbal animals can think, deliberate, plan, communicate, understand, reflect, ruminate. We humans can express their thoughts in words, but that is partly misleading, because although the sensorimotor basis of thought is present in animals’ thinking too, their thinking is not propositional: they cannot do what we can do in words (though I don’t doubt that nonhuman animal thinking is combinatorial too).
But GPT cannot do what animals are doing at all when they are thinking. And our own thinking capacity is based on the sensorimotor capacity and experience that we share with other thinking species, including even the most primitive ones. Animals can have insights; GPT can’t. Not necessarily because GPT is not a living organism (though that could turn out to be the reason too). I think that if a sensorimotor Turing robot had the capacity to do and say anything a human could, indistinguishably from any other human, to any other human, for a lifetime, then it would be grounded too, and sentient as well: able to feel.
But I think you can’t get to such a Turing-scale robot top-down, from an LLM, just by adding on sensorimotor “modalities.” I think the only way to get there is bottom-up, robotically, just as we animals do it. (This is what’s behind, or underneath, the fundamental asymmetry between direct sensorimotor grounding and indirect verbal grounding that I mentioned in Writ Large.)
But I think I’ve drifted off on my own ruminations. I would certainly agree that “insight,” whether verbal or nonverbal, must be felt, and that feeling is a sensorimotor capacity, both in nonhuman animals and in human ones (and perhaps in Turing-scale sensorimotor robots, if they are possible). And that GPT can only talk the talk (the recombined talk of others) and not walk the walk, which is sensorimotor, felt, and based on feeling, not just optical, acoustic, and ambulatory.
But I have no idea why sensorimotor activity has to be felt: That’s the “hard problem.”