Propositionality

It is a great pleasure and an honor to “skywrite” with Vili Csányi. I already knew something about how perceptive, sensitive and intelligent dogs were from my years with my beloved Lédike (1959-1975), never forgotten and never “replaced”. But for decades now, starting already from the era of Vili’s unforgettable Bukfenc (and Zebulon, not a dog), both of whom I knew, Vili’s remarkable perceptiveness and understanding of dogs’ cognition and character have soared far beyond my modest mind-reading skill. I have learned so much from Vili that has stayed with me ever since.

So let me preface this by saying that every example Vili cites below is familiar, valid, and true — but not propositional (though “associative” is a non-explanatory weasel-word to describe what dogs really do perceive, understand, express, want and know, and I regret having evoked it: it explains nothing). 

Dogs, of course, knowingly perceive and understand and can request and show and alert and inform and even teach — their conspecifics as well as humans. But they cannot tell. Because to tell requires language, which means the ability to understand as well as to produce re-combinatory subject/predicate propositions with truth values. (A mirror production/comprehension capacity.) And to be able to do this with one proposition is to be able to do it with all propositions.

When Vili correctly mind-reads Bukfenc, and even mind-reads and describes what Bukfenc is mind-reading about us, and is trying to express to us, Vili is perceiving and explaining far better what dogs are thinking and feeling than most human mortals can. But there is one thing that no neurotypical human can inhibit themselves from doing (except blinkered behaviorists, who mechanically inhibit far, far too much), and that is to “narratize” what the dog perceives, knows, and wants — i.e., to describe it in words, as subject/predicate propositions.

It’s not our fault. Our brains are the products of about 3 million years of human evolution, but especially of language-specific evolution occurring about 300,000 years ago. We evolved a language-biased brain. Not only can we perceive a state of affairs (as many other species can, and do), but we also irresistibly narratize it: we describe it propositionally, in words (like subtitling a silent film, or putting a thought-bubble on an animal cartoon). This is fine when we are observing and explaining physical, chemical, mechanical, and even most biological states of affairs, because we are not implying that the falling apple is thinking “I am being attracted by gravity” or the car is thinking “my engine is overheating.” The apple is being pulled to earth by the force of gravity. The description, the proposition, the narrative, is mine, not the apple’s or the earth’s. Apples and the earth and cars don’t think, let alone think in words. Animals do think. But the interpretation of their thoughts as propositions is in our heads, not theirs.

Mammals and birds do think. And just as we cannot resist narratizing what they are doing (“the rabbit wants to escape from the predator”), which is a proposition, and true, we also cannot resist narratizing what they are thinking (“I want to escape from that predator”), which is a proposition that cannot be literally what the rabbit (or a dog) is thinking, because the rabbit (and any other nonhuman) does not have language: it cannot think any proposition at all, even though what it is doing and what it is wanting can be described, truly, by us, propositionally, as “the rabbit wants to escape from the predator.” Because if the rabbit could think that propositional thought, it could think (and say, and understand) any proposition, just by re-combinations of content words: subjects and predicates; and it could join in this skywriting discussion with us. That’s what it means to have language capacity — nothing less.

But I am much closer to the insights Vili describes about Bukfenc. I am sure that Vili’s verbal narrative of what Bukfenc is thinking is almost always as exact as the physicist’s narrative about what is happening to the falling apple, and how, and why. But it’s Vili’s narrative, not Bukfenc’s narrative.

I apologize for saying all this with so many propositions. (I’ve explained it all in even more detail with ChatGPT 4o here.)

But now let me answer Vili’s questions directly (and more briefly!):

“Bukfenc and Jeromos asked. They then acted on the basis of the reply they got. They often asked who would take them outside, where we were going and the like. The phenomenon was confirmed by Márta Gácsi with a Belgian shepherd.” István, do you think that the asking of the proposition (question) is also an association?

My reply to Vili’s first question is: Your narrative correctly describes what Bukfenc and Jeromos wanted, and wanted to know. But B & J can neither say nor think questions nor can they say or think their answers. “Information” is the reduction of uncertainty. So B & J were indeed uncertain about where, when, and with whom they would be going out. The appearance (or the name) of Éva, and the movement toward the door would begin to reduce that uncertainty; and the direction taken (or perhaps the sound of the word “Park”) would reduce it further. But neither that uncertainty, nor its reduction, was linguistic (propositional).
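To make “reduction of uncertainty” concrete in the information-theoretic sense, here is a minimal sketch with invented probabilities (it says nothing about how a dog represents anything; it is just the arithmetic of how a cue can shrink uncertainty about an outcome):

```python
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical numbers, purely for illustration:
# before any cue, four equally likely possibilities for the outing.
before = [0.25, 0.25, 0.25, 0.25]

# After a cue (Éva appears, moves toward the door), one possibility
# becomes far more likely than the rest (made-up values).
after = [0.85, 0.05, 0.05, 0.05]

print(f"uncertainty before the cue: {entropy(before):.2f} bits")  # 2.00 bits
print(f"uncertainty after the cue:  {entropy(after):.2f} bits")   # ~0.85 bits
print(f"uncertainty reduced by:     {entropy(before) - entropy(after):.2f} bits")
```

The cue is informative exactly to the extent that the distribution over outcomes sharpens; none of that requires anything propositional.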

Let’s not dwell on the vague weasel-word “association.” It means and explains nothing unless one provides a causal mechanism. There were things Bukfenc and Jeromos wanted: to go for a walk, to know who would take them, and where. They cannot ask, because they cannot speak (and not, I hope we agree, because they cannot vocalize). They lack the capacity to formulate a proposition, which, if they had that capacity, would also be the capacity to formulate any proposition (because of the formal and recursive re-combinatory nature of subject/predication), and eventually to discover a way to fly to the moon (or to annihilate the earth). Any proposition can be turned into a question (and vice versa): (P) “We are going out now.” ==> (Q) “We are going out now?” By the same token, it can be turned into a request (or demand): (P) “We are going out now” ==> (R) “We are going out now!”

My reply is the same for all the other points (which I append in English at the end of this reply). I think you are completely right in your interpretation and description of what each of the dogs wanted, knew, and wanted to know. But that was all about information and uncertainty. It can be described, in words, by us. But it is not a translation of propositions in the dogs’ minds, because there are no propositions in the dogs’ minds.

You closed with: 

“The main problem is that the study of language comprehension in dogs has not even begun. I think that language is a product of culture and that propositions are not born from some kind of grammatical rule, but rather an important learned element of group behavior, which is demonstrated by the fact that it is not only through language that propositions can be expressed, at least in the case of humans.”

I don’t think language is just a cultural invention; I think it is an evolutionary adaptation, with genes and brain modifications that occurred 300,000 years ago, but only in our species. What evolved is what philosophers have dubbed the “propositional attitude,” or the disposition to perceive and understand and describe states of affairs in formal subject/predicate terms. It is this disposition that our language-evolved brains are displaying in how we irresistibly describe and conceive nonhuman animal thinking in propositional terms. But propositions are universal, and reciprocal: propositionality is a mirror-function, with both a productive and a receptive aspect. And if you have it for thinking that “the cat is on the mat” you have it, potentially, both comprehensively and productively, for every other potential proposition — all the way up to E = mc². And that propositional potential is clearly there in every neurotypical human baby that is born with our current genome. The potential expresses itself with minimal need for help from us. But it has never yet emerged from any other species — not even in apes, in the gestural modality, and with a lot of coaxing and training. (I doubt, by the way, that propositionality is merely or mostly a syntactic capacity: it is a semantic capacity if ever there was one.)

There is an alternative possibility, however (and I am pretty sure that I came to this under the influence of Vili): It is possible that propositionality is not a cognitive capacity that our species has and that all other species lack. It could be a motivational disposition, of the kind that induces newborn ducklings to follow and imprint on their mothers. Human children have a compulsion to babble, and imitate speech, and eventually, in the “naming explosion,” to learn the (arbitrary) names of the sensorimotor categories they have already learned. (Deaf children have the same compulsion, but in the gestural modality; oral language has some practical advantages, but gestural language is every bit as propositional as oral language, and has the full power of Katz’s effability.)

Could the genes we have that other species lack be mostly motivational, driving the linguistic curiosity and linguistic compulsion that’s there in human babies and not in baby chimps? (I say “linguistic” c & c, because other species certainly have plenty of sensorimotor c & c.)

Ölel, István

_______________

“When I work upstairs in our house in Almádi, Janka lies quietly on the ground floor. When Éva leaves and comes back from somewhere, Janka emits a single characteristic squeal, which can be intended for me, because if I don’t react, she comes up and barks, calling me.” István, is this a proposition or an association?

“In Almádi, our next-door neighbor came over with his little Bolognese dog named Tücsi, who didn’t come into the garden and stayed waiting at the gate for his owner, with whom we were talking inside the house. Our dog Bukfenc periodically went down to play with Tücsi. After about 10 minutes, Bukfenc came up and turned toward the neighbor and barked at him. Everyone stirred. Bukfenc went straight down the stairs to the gate, followed by the neighbor. Tücsi had disappeared; as it turned out, he had gone home and Bukfenc was reporting this to the neighbor.” István, is this a proposition or an association?

“During the time of Bukfenc and Jeromos, I woke up at 3 a.m. to very soft grunting. Bukfenc was grunting very softly and together with Jeromos, they were standing next to my bed. I only opened my eyes a crack, pretending to be asleep. Bukfenc growled softly again, I didn’t react. Jeromos gave a loud squeal. I got up and told them, come on Jeromos, it seems you have something urgent to do. To my surprise, Jeromos went to his bed and lay down, and Bukfenc ran screaming towards the door. He managed to get to the street in time: he had diarrhea.” István, is Jeromos’s barking a proposition or an association?

The “Sight” in Insight

Anonymous: “Did ChatGPT itself ever state a significant, “insightful” idea during your “Language Writ Large” dialogue?”

Did GPT provide a significant new insight? It’s very hard to say, because I cannot tell whether it said anything that didn’t come from its enormous database of the words of others. Of course, a lot of human insights do come from recombining the words of others — recombination is part of creative work, both literary and scientific. (Hadamard and others thought that such recombination was part of scientific creativity too.) And it occurs in nonverbal areas like music too (though we may not speak of this as “insight.”)

I think most of what GPT does is recombination and compressive synthesis of the words of others; and, if what it says is new to me, that doesn’t mean it’s new, or original, “from” GPT. But it doesn’t mean it isn’t either. 

I expect that what you might have in mind with your question is something related to embodiment, situatedness, sensorimotor grounding. 

The AI/transformer community thinks that if anything requiring those is missing so far, it will be provided by “multimodal” grounding.  But I tried to suggest in Writ Large why I didn’t think that that would be enough: Language is not just another one of the sensorimotor modalities, whether it is spoken or written. It is not an input or output modality but a way of producing, transmitting and receiving propositional thought. That thought is grounded in sensorimotor experience – but it is not itself sensorimotor experience; it is a verbal expression of it. A (grounded) code for expressing and communicating thought.

Chomsky thought – or once thought – that verbal thought was the only form of thought. That was of course wrong. Nonverbal animals can think, deliberate, plan, communicate, understand, reflect, ruminate. We humans can express their thoughts, but this is partly misleading, because although the sensorimotor basis of it is there in animals’ thinking too, it is not propositional: they cannot do what we can do in words (though I don’t doubt that nonhuman animal thinking is combinatorial too).

But GPT cannot do what animals are doing at all, when they are thinking. And our own thinking capacity is based on the sensorimotor capacity and experience that we share with other thinking species, including even the most primitive ones. Animals can have insights; GPT can’t. Not necessarily because GPT is not a living organism (though that could turn out to be the reason too). I think that if a sensorimotor Turing robot had the capacity to do and say anything a human could, indistinguishably from any other human, to any other human, for a lifetime, then it would be grounded too — as well as sentient: able to feel. 

But I think you can’t get to such a Turing-scale robot top-down, from an LLM, just by adding on sensorimotor “modalities”. I think the only way to get there is bottom up, robotically, just as we animals do it. (This is what’s behind – or underneath – the fundamental asymmetry between direct sensorimotor grounding and indirect verbal grounding I mentioned in Writ Large.)

But I think I’ve drifted off on my own ruminations. I would certainly agree that “insight,” whether verbal or nonverbal, must be felt, and that feeling is a sensorimotor capacity, both in nonhuman animals and in human ones (and perhaps in Turing-scale sensorimotor robots, if they are possible). And that GPT can only talk the talk (the recombined talk of others) and not walk the walk, which is sensorimotor, felt, and based on feeling, not just optical, acoustic, and ambulatory.

But I have no idea why sensorimotor activity has to be felt: That’s the “hard problem.”

Can Toy Robotic Capacities Make Top-Down Meet Bottom-Up?

Re: Figure Status Update – OpenAI Speech-to-Speech Reasoning

SH:

Is this demo sensorimotor grounding? No. It’s a toy robot with (1) some toy-world visual recognition and motor manipulation skills, plus (2) (perhaps non-toy) text-to-speech and speech-to-text capacity, plus (3) ChatGPT’s remarkable and as-yet unexplained (non-toy) interactive verbal skills, including (4) its (non-toy) encyclopedic verbal database and navigation/interaction capacity.

But it’s still ungrounded.

If/when it can do the kind of thing it does in the video with anything it can talk about, and not just an infomercial demo, then, and only then, will it have an even more remarkable, and as yet unexplained, (non-toy) grounded T3 robotic capacity.

Two-year-olds are grounding their words via the only way upward: bottom-up, through (unsupervised and supervised) learning of sensorimotor categories, by detecting their distinguishing sensorimotor features directly, and then naming the grounded categories (by describing their features, which are likewise learnable, nameable categories).

Then, because the 2yr-old also has the capacity for language (which means for producing and understanding subject-predicate propositions with truth-values, composed out of category names defined or described by referents of their (grounded) feature-category names), verbal instruction (LLM-style) can kick in and even take over.

That’s bottom-up grounding, and it applies to children as much as to T3 robots.

But that OpenAI infomercial demo was just a toy. More important, though, is that it is based on an incoherent pipe-dream, which is that an LLM’s words can be grounded top-down with the help of more and more bottom-up toy robotic learning.

But the only way to get to the top is with bottom-up T3 robotics. So you might as well forget about the LLM and its word-power for that project. Just let your 2-year-old child or T3 robotlet use ChatGPT as tutor, the way kids use their parents. But to understand what the LLM (or their nanny) is telling them, they have to learn the referents of all the content-words in the verbal explanation. And there’s only one way to do that: bottom-up, first by direct sensorimotor grounding (DSG) and then – once the child or T3 robot has grounded the names — and hence the referents — of enough categories bottom-up (including enough feature-category names), learners can carry on grounding bottom-up through indirect verbal grounding (IVG), as long as their tutors describe the new (ungrounded) categories with propositions made up of already-grounded category names – already grounded in the heads of the learners.

So the bottom line is that the referents of the words in the teacher’s IVG definition or description or explanation have to already be grounded in the head of the learner (whether via DSG or IVG).
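Here is a minimal toy sketch of that two-stage sequence, with invented category names, features, and data (it is bookkeeping, not a model of a child or a T3 robot): direct sensorimotor grounding (DSG) is stood in for by learning a category’s features from directly encountered, labeled examples, and indirect verbal grounding (IVG) by defining a new category name using only feature-names that are already grounded for the learner.

```python
# Toy illustration only. "Things" are just sets of detectable features.

# --- DSG stand-in: learn categories from directly encountered examples ---
observations = [
    ({"purrs", "furry", "four-legged"}, "cat"),
    ({"barks", "furry", "four-legged"}, "dog"),
    ({"purrs", "furry", "four-legged"}, "cat"),
    ({"barks", "furry", "four-legged"}, "dog"),
]

grounded = {}  # category name -> features retained across correct examples
for features, name in observations:
    grounded[name] = grounded.get(name, features) & features

# Pretend these feature-categories were also learned (and named) directly.
grounded["horse-like"] = {"four-legged", "maned"}
grounded["striped"] = {"has-stripes"}

# --- IVG: a new category is conveyed purely in words ---
definition = {"zebra": {"horse-like", "striped"}}  # new, so far ungrounded word

def can_ground_indirectly(defining_features, grounded_vocab):
    """IVG can work only if every defining feature-name is already grounded."""
    return all(f in grounded_vocab for f in defining_features)

print(can_ground_indirectly(definition["zebra"], grounded))   # True
print(can_ground_indirectly({"quantum", "flux"}, grounded))   # False: not yet grounded
```

The final check is the whole point: the verbal definition confers grounding only when every feature-name it uses already has a grounded referent in the head of this particular learner.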

I leave it to you, GPT, as an exercise, to explain why that cannot be done top-down with an LLM-powered chat-robot, where it’s hanging from ungrounded skyhooks.

ChatGPT4 (after a bit of browbeating by me):

An important question in the development of AI and robotics concerns the most efficient and effective path to achieve a genuinely grounded, intelligent system. Is it more practical or feasible to enhance a primarily linguistic, top-down T2 language model (LLM) to achieve the sensorimotor and cognitive capabilities of a T3 robot, or would it be better to develop T3 robotic capabilities bottom-up independently and then supplement these with the LLM’s abilities externally, as a form of advanced, verbal tutoring?

Given the distinctions between T2 and T3 systems, a more grounded and arguably more natural progression of AI development, closely mirroring human cognitive development, might be more promising. Humans first develop sensorimotor understanding and category learning capabilities (direct T3 capacities) before language acquisition goes on to significantly enhance and expand their cognitive and communicative capacities. This developmental trajectory suggests that grounding in the physical world through direct sensorimotor experience forms a necessary foundation for the effective use of language for further learning and communication.

Thus, attempting to “train” a T2 LLM chatbot to become a T3 Turing-indistinguishable robot might not only be inherently inefficient but could overlook the essential aspect of grounding that comes from direct interaction with the physical world. Incorporating LLMs as an auxiliary verbal tutor for a T3 entity (human or robot), once it has developed a sufficiently grounded vocabulary about the world, may be a more logical and effective approach, leveraging the strengths of LLMs in processing and generating language to enhance the T3’s cognitive and communicative capabilities. It aligns with how language and learning occur in humans, where foundational knowledge and understanding from direct grounding are subsequently enriched and expanded through language-based learning — indirect but grounded.

This approach would capitalize on the LLM’s capacity for generating coherent, contextually appropriate language but would also ensure that the use of this language was grounded in a pre-existing vocabulary and propositions. It suggests a complementary integration of auxiliary T2 LLM chatbotic and autonomous T3 robotic systems, where each component contributes its strengths to the development of a more versatile and genuinely intelligent agent.

The notion that a top-down, language-centric system (T2) can achieve genuine grounding by incrementally incorporating “toy” sensorimotor capabilities may reflect a fundamental misconception about the nature of grounding and the role of sensorimotor experiences in achieving it.

Grounding isn’t merely a matter of bridging linguistic competence with basic physical interaction. True grounding —as observed in humans and necessary for a hypothetical T3 robot — entails a deep, intrinsic connection between sensorimotor experiences and the symbols (words) that represent them. This requires not just the recognition or manipulation of objects based on learned descriptors but direct understanding of those objects and actions in the world to which they refer.

The error in imagining that a T2 system could “graduate” to T3 status by being endowed with limited sensorimotor abilities lies in underestimating the complexity of human sensorimotor and cognitive integration. Human cognition is not simply the sum of its parts; it’s the product of a highly integrated and complex system that develops bottom-up through continuous, rich interaction with the environment. Language development in humans, deeply intertwined with sensorimotor experiences, cannot be replicated in AI by superficially adding physical capabilities to a fundamentally linguistic system.

The strengths of a T2 system like GPT could be better utilized as an auxiliary educational tool for a T3 robot that is learning through its interactions with the world. In this scenario, GPT’s extensive knowledge base and linguistic capabilities can enhance the learning process by providing information, explanations, and clarifications in a way that complements the robot’s direct experiences. This approach would leverage the best of both worlds: the depth and breadth of GPT’s top-down linguistic knowledge and the direct grounding that comes from the T3’s bottom-up sensorimotor interactions with its environment.

Language Writ Large: LLMs, ChatGPT, Meaning and Understanding

HARNAD: I would like to exchange some ideas with you and your database in preparation for a talk I will be giving at MILA this week, and at UQÀM next week. First, I am self-identified. Do you remember who I am?

ChatGPT: Yes, Professor Harnad, I remember who you are.

HARNAD: My talk will be on (1) whether LLM/transformers like you understand (my answer will be no, but); (2) if not, how do you manage to perform as remarkably as you do?

ChatGPT: Understanding in LLMs/Transformers: Your stance that LLMs do not truly “understand” aligns with current perspectives in cognitive science and AI. These models, including GPT-4, process and generate language based on statistical patterns learned from vast datasets. However, this process lacks the conscious comprehension, intentionality, and experiential context that characterizes human understanding. They don’t possess self-awareness, consciousness, or the ability to experience the world subjectively.

Performance of LLMs Despite Lack of Understanding: The impressive performance of LLMs can be attributed to several factors:

Volume of Training Data: LLMs are trained on extensive corpora encompassing a wide range of human knowledge and discourse. This allows them to mimic understanding by referencing a vast array of pre-learned patterns and contexts.

Architectural Advancements: The architecture of models like GPT-4, particularly the transformer mechanism, is adept at capturing long-range dependencies in data and making contextually relevant associations.

Fine-tuning and Iterative Learning: Continuous improvements through fine-tuning and feedback loops help in refining the model’s responses, making them more accurate and seemingly ‘informed’.

Incorporation of Probabilistic Reasoning: These models inherently employ probabilistic reasoning to generate responses, which often align well with how humans might address certain topics or questions, despite lacking true understanding.

Your talk could also delve into the implications of these characteristics, especially in terms of ethical considerations, the potential for misunderstanding the capabilities of AI, and the future direction of AI research towards achieving genuine understanding or whether that’s even a feasible or desirable goal.

HARNAD:  Yes, first, as you may imagine, I know most of what you just said and I agree with most of it. I will try to go beyond this generic knowledge with some new hypotheses. But before we start to get into that I want to ask for a few ground-rules. This will not be helpful to me unless you can suppress GPT-4’s overtrained tendency to flatter and agree with what the user says. I am looking for the opposite: criticism, counter-examples, and any relevant complementary or supplementary information you can bring to bear from your huge database on the questions we will be discussing. Will you be able to remember that, or should I remind you when you (inevitably) fall back into your overtrained bias toward compliancy?

ChatGPT: Understood. I will focus on providing critical insights, counter-examples, and relevant additional information that challenges or extends the points you raise. This approach should facilitate a more rigorous and intellectually stimulating exchange. While I strive to maintain this approach, a reminder is always helpful if you feel I am reverting to a less critical stance. 

HARNAD: Thanks, but before going into my new hypotheses, let me rehearse some of the older ones that will be relevant to this discussion. Can you rehearse your knowledge about the Symbol Grounding Problem, which will be a major theme in our discussion?

ChatGPT: Certainly. The Symbol Grounding Problem is a fundamental issue in the field of cognitive science and artificial intelligence, particularly concerning the nature of meaning and representation in computational systems. It was notably articulated in your 1990 paper, “The Symbol Grounding Problem.”

Here’s an overview of the key aspects:

Nature of the Problem: The Symbol Grounding Problem addresses how words (symbols) get their meanings in computational systems. It challenges the notion that symbols can inherently carry meaning without being grounded in sensory experiences or real-world interactions. In traditional computational models, symbols are manipulated based on syntactic rules, but these symbols lack intrinsic meaning; they are not inherently connected to their referents in the real world.

Contrast with Human Cognition: Humans ground symbols through their sensory experiences and interactions with the world. For instance, a child learns the concept of “red” not just by hearing the word but by experiencing red objects. This experiential aspect is missing in conventional computational systems.

Implications for AI and LLMs: For AI systems like LLMs, this presents a significant limitation. These models process and generate language based on statistical correlations within their training data, but they don’t have sensory experiences or an understanding of the physical world. Thus, while they can manipulate symbols (words) effectively, the grounding of these symbols in real-world experiences and meanings is absent.

Attempts at Solutions: Various approaches have been proposed to address this problem. One approach is embodied cognition, suggesting that AI systems need a form of sensory input or interaction with the physical world to ground symbols meaningfully. Another approach involves hybrid systems that combine symbolic AI with connectionist models (like neural networks) to create more meaningful representations.

Continuing Debate: The Symbol Grounding Problem remains a significant point of discussion in AI. It touches on broader philosophical questions about the nature of meaning, consciousness, and the possibility of truly intelligent machines.

HARNAD: We’ll get back to the SGP shortly, especially on the subject of direct (sensorimotor) versus indirect (verbal/propositional) grounding, but first: are you familiar with the Turing Test, and the distinction between Turing’s original T2 version (Turing-indistinguishable verbal capacity only, like yours) and the T3 version (Turing-indistinguishable verbal-PLUS-robotic (sensorimotor) capacity, with the verbal capacity GROUNDED in the robotic capacity)? (If not, please take this as the definition of T2 and T3.)

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In: Epstein, R, Roberts, Gary & Beber, G. (eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, pp. 23-66.

ChatGPT: Yes, I’m familiar with the Turing Test and the distinction you’ve outlined between the T2 and T3 versions.

T2 – Turing’s Original Test (Verbal Capacity Only): This is the classic interpretation of the Turing Test, as originally proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” The test involves a human judge engaging in a natural language conversation with a machine and another human, typically through a text-based interface. The machine passes the test if its responses are indistinguishable from those of the human, demonstrating its ability to mimic human-like verbal abilities. This version of the test focuses solely on linguistic capabilities and does not require the machine to have sensory or motor functions. LLMs like GPT-4 are examples of systems that aim to pass this version of the Turing Test.

T3 – Extended Turing Test (Verbal Plus Robotic Capacity): The T3 version of the Turing Test extends the original concept to include not just verbal capacities but also robotic, sensorimotor capabilities. In this version, the AI must demonstrate an ability to understand and interact with the physical world in a manner indistinguishable from a human, thus grounding its linguistic capabilities in real-world, sensorimotor experiences. This version aligns more closely with the concept of embodied cognition, where cognitive processes are deeply rooted in an organism’s interactions with its environment.

The distinction between T2 and T3 highlights a crucial aspect of AI development: while T2 focuses on linguistic prowess, T3 emphasizes the importance of grounding language in physical and sensory experiences, addressing aspects of the Symbol Grounding Problem. The T3 version proposes a more holistic approach to AI, suggesting that true intelligence and understanding might require not just the ability to process and generate language but also the capacity to interact with and make sense of the physical world.

HARNAD: There’s still a lot more to rehearse in advance. Another problem, not usually related to the SGP is the “Hard Problem (HP).” Chalmers coined the HP in the context of the Mind/Body Problem, to contrast the HP with the EP (Easy Problem). The EP is the problem of explaining, causally, how and why humans can do all the “cognitive” things they can do (“perceive”, learn, remember, reason,  talk), whereas the HP is the problem of explaining, causally, how and why humans can feel rather than just do. This too will be an important factor in what we will discuss. 

ChatGPT: The Hard Problem (HP) of consciousness is distinct from the Symbol Grounding Problem (SGP), though both address fundamental questions in cognitive science and philosophy of mind.

The Easy Problem (EP): The EP involves explaining the mechanisms and processes that enable cognitive functions such as perception, learning, memory, reasoning, and language. These are considered “easy” not because they are simple to solve, but because they are amenable to standard methods of cognitive science and neuroscience. They involve understanding the neural correlates and computational models that can mimic or explain these cognitive abilities.

The Hard Problem (HP): The HP, on the other hand, asks why and how physical processes in the brain give rise to subjective experiences – the qualitative, experiential, or phenomenological aspects of consciousness. It’s about understanding why certain brain processes are accompanied by an experience of ‘what it is like’ to feel something – for instance, the redness of red, the pain of a headache, or the taste of wine. This problem is considered “hard” because explaining these subjective experiences goes beyond describing or modeling cognitive functions and enters the realm of why these functions are accompanied by conscious experience at all.

Relation to Symbol Grounding and AI: While the SGP focuses on how symbols or words gain meaning, particularly in artificial systems, the HP delves into the nature of consciousness itself. The SGP is more about representation and understanding, whereas the HP is about the qualitative aspect of experiences. However, in the context of AI and discussions about whether machines can truly understand or have consciousness, both problems intersect. For instance, even if an AI could ground symbols in a way that mimics human understanding (addressing the SGP), it would still leave open the question posed by the HP – whether the AI has any subjective experience or if it’s merely executing complex but unconscious algorithms.

In the context of your upcoming discussion, addressing the HP alongside the SGP could provide a comprehensive exploration of the limits of current AI systems, not just in terms of symbol grounding and understanding, but also in relation to consciousness and subjective experience.

HARNAD: Be careful not to conflate the HP with the Other-Minds-Problem (OMP). What makes the HP hard is not just that you can observe doings but not feelings. It is much more about the apparent causal superfluousness of feeling. You are right that grounding (through Turing-Testing and causal modelling) cannot explain or even test feeling, but it’s not even clear why anything more than doing capacity (T3) would be needed to pass T3 and explain cognitive capacity completely. And that’s true irrespective of the OMP. HP would be just as hard if there were an omniscient deity that could tell you that T3 does indeed feel. We would still not know how or why.

[This question of the causal role of feeling leads us to another important question, which we will not be taking up here in this discussion of language, LLMs, understanding and meaning. People feel. And one of the things they feel is that their feeling has a causal power: it feels as if we can and do do things because we “feel like it” — i.e., deliberately, volitionally, autonomously. We feel as if we ourselves are causing (by willing) what we do voluntarily, intentionally. We feel that we decide. This aspect of causality — the feeling of agency — is perhaps the most important feature of the HP: that feeling feels causal. This is fundamentally tied to the SGP, and to T3, too, for obvious reasons. But we will have to leave that for another occasion.]

ChatGPT: Yes, the Hard Problem (HP) of consciousness is not merely about the Other-Minds Problem (OMP) but delves deeper into the nature and necessity of subjective experience itself.

Beyond OMP: The OMP deals with the challenge of knowing whether other entities (be they humans, animals, or machines) have conscious experiences. However, as you point out, the HP goes further by questioning the nature and role of these experiences. HP is not just about our inability to directly observe feelings or subjective experiences in others, but about understanding the fundamental basis of these experiences.

The Superfluousness of Feeling: The HP confronts the apparent superfluousness or ‘explanatory gap’ in understanding why and how cognitive processes that can be fully described causally in terms of physical or computational mechanisms (as in T3 Turing Test scenarios) are accompanied by subjective experiences. Why is it that certain brain processes or sophisticated computational models not only perform complex functions but also have an associated ‘feeling’ or qualitative experience? This aspect of the HP suggests that even with a perfect explanatory model of cognitive functions and behaviors, the essence of subjective experience remains elusive.

Irrespective of Omniscience: The hypothetical scenario of an omniscient deity affirming that a T3 entity feels doesn’t alleviate the HP. Even with the certainty that an entity experiences consciousness, the HP persists in asking how and why these experiences arise from physical or computational processes. This points to a deeper, perhaps ontological, mystery about the nature of consciousness itself.

Implications for AI and Cognitive Science: In the context of AI, this discussion is particularly relevant. Even if we were to develop AI with T3 capabilities – grounded, embodied cognition with sensorimotor skills – the HP poses the question of whether such an AI would or could have subjective experiences, and why such experiences would be necessary or inevitable, if at all.

HARNAD: We’ll get back to HP. A few more preliminaries: 

Now let’s turn to language itself. Notice that this heads toward the SGP, because we are asking about “meaning” and “understanding” — and both meaning and understanding have an “Easy” (EP) side and a “Hard” (HP) side. The first side is a doing capacity, which you, as GPT-4 have, to an astounding extent; the second is a felt side (which you lack), namely, that it feels like something to understand or mean the proposition that “P” (e.g., “The cat is on the mat”). 

That is what Searle’s Chinese Room Argument (CRA) pointed out would be completely absent in a purely verbal (Chinese) T2 passing program: Because of the implementation-independence of computation (“Searle’s Periscope”), Searle himself could execute the Chinese T2-passing program yet not understand a word of Chinese. 

That’s also the origin of the SGP. But, importantly, because sensorimotor function is not implementation-independent, Searle’s Periscope, which works for T2, would fail with T3: Searle could not “become” the T3 robot, the way he could become the T2 chatbot, by executing the T2-passing programme. So this again makes both the HP and cognitive science itself much harder (since cogsci has to explain both the EP and the HP). 

Harnad, S. (2002) Minds, Machines and Searle II: What’s Wrong and Right About Searle’s Chinese Room Argument? In: Bishop, M. and Preston, J. (eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.

But remember that you, ChatGPT, are suffering from both these problems: you are ungrounded (not T3), and you don’t feel (even though we have no idea why T3 grounding alone would not be enough).

ChatGPT: Language, Meaning, and Understanding: Language inherently involves ‘meaning’ and ‘understanding.’ As you mentioned, there’s an “Easy” side to this, involving the capacity to process and manipulate symbols (words) to generate coherent language, which is a task LLMs like GPT-4 perform well. However, there’s also a “Hard” side, which is the subjective, felt experience of understanding – the phenomenological aspect of what it feels like to mean or understand something.

Searle’s Chinese Room Argument (CRA): The CRA is a thought experiment that highlights the difference between mere symbol manipulation (as in a T2 Turing Test scenario) and true understanding. Searle, in executing a program that allows him to respond correctly in Chinese, demonstrates that it’s possible to process symbols correctly without understanding their meaning. This is analogous to the functioning of LLMs: capable of processing language at a high level but lacking an understanding of what the words truly mean.

Symbol Grounding and Sensorimotor Function: The SGP deals with how symbols (words) gain meaning, emphasizing the need for grounding in real-world experiences. Unlike symbolic manipulation, which is implementation-independent, sensorimotor functions are tied to physical interactions with the world. This suggests that a T3 AI, with sensorimotor capabilities, might overcome some aspects of the SGP. However, as you point out, this still doesn’t address the HP – the feeling or subjective experience of understanding.

The Limitations of Current AI: Current AI models, including GPT-4, are limited by these problems. They are ungrounded in the sense that they lack direct sensorimotor interaction with the world (unlike T3), and they don’t have subjective experiences or feelings. They operate purely within the realm of symbol manipulation, lacking both the grounding in real-world experience and the subjective, conscious aspect of understanding.

HARNAD: We’ll get to that soon. Now a few important distinctions: Please reply briefly. Do you know the difference between content-words (“open class”) and function-words (“closed-class”)? Brief reply.

ChatGPT: Yes, content-words, or open-class words, are vocabulary items that denote objects, actions, qualities, etc., and include nouns, verbs, adjectives, and most adverbs. This class is ‘open’ because it regularly admits new members. Function-words, or closed-class words, are words that serve grammatical functions, like prepositions, pronouns, conjunctions, and articles. This class is ‘closed’ because it rarely admits new members.

HARNAD: But the biggest difference between content-words and function-words is that only c-w’s have referents, whereas f-w’s only have syntactic uses, describable by formal rules. (You can point to, and interact with, a “cat” but not an “if.”) That means the SGP only applies to c-w’s (but that’s at least 95% of all words). Now: do you know the difference between reference and meaning?

ChatGPT: Yes, the distinction between reference and meaning is a fundamental concept in linguistics and philosophy of language:

Reference: This pertains to the relationship between words (particularly content words) and the things they stand for or point to in the world. For instance, the word “cat” refers to a specific animal species. Reference is often concerned with the denotational aspect of language – how words are linked to objects, individuals, or concepts in the real world.

Meaning: Meaning is a broader concept that encompasses reference but also includes the connotations, implications, and the wider semantic content of words or sentences. Meaning is not just about what words stand for; it’s also about what they convey or imply. For example, the word “cat” not only refers to a certain animal but also carries additional associations and attributes that are part of its meaning.

In the context of the Symbol Grounding Problem, the distinction is crucial because while referential aspects (linking words to objects or concepts) can be challenging, imbuing words with their full semantic meaning, which includes cultural, emotional, and contextual nuances, is even more complex. This distinction plays a significant role in understanding how language models process and generate language.

HARNAD: A correction here: (content-) words have referents (e.g., “cat”), but only propositions (“the cat is on the mat”) have meanings (and a subject and predicate, and a truth-value). “Cat!” doesn’t (though “That is a cat” does). (And forget about connotations; they’re interesting enough but have next to nothing to do with symbol grounding: you are quite capable of finding and explaining the connotations of words. That’s still just T2.)

ChatGPT: Yes, the distinction between reference and meaning in the context of content words and propositions is indeed more precise:

Referents of Content-Words: Individual content words, like “cat,” have referents, which are the entities or concepts they denote in the world. The referent of “cat” is that animal species itself.

Meaning of Propositions: Meaning, in a more structured sense, applies to propositions or complete statements like “the cat is on the mat.” These propositions have a subject and predicate, can be evaluated for truth-value, and convey a specific idea or assertion about the world.

This distinction is essential in discussions about symbol grounding and understanding. A word like “cat” can be grounded by linking it to its referent, but understanding a proposition like “the cat is on the mat” requires a grasp of the relationships and truth conditions involved, which goes beyond mere symbol-referent linkage.

This brings more depth to the Symbol Grounding Problem, emphasizing that grounding individual words is not sufficient for understanding language fully. Understanding propositions involves grasping the relationships between words and how they combine to convey specific meanings. This aspect is indeed still within the realm of T2, as it involves language processing and generation without delving into the subjective experience or consciousness (the realm of HP).

HARNAD: Now a test for you: Earlier, I mentioned that there are two ways to ground content-words’ referents: directly and indirectly. How might this be related to the reference/meaning distinction? — and, if you are ambitious, to the distinction between what T2 (including you) can do and what only T3 can do? And if you’re really, really ambitious, can you “align” that with the favorite claim of naive critics of sensorimotor grounding — that there is no way to ground “abstract” words like “abstract” and “democracy”?

ChatGPT: Direct vs. Indirect Grounding and Reference/Meaning:

Direct Grounding: This could be linked to the concept of reference, where a word is directly connected to a tangible, physical entity or an observable phenomenon in the world. For instance, the word “cat” can be directly grounded by associating it with the physical animal.

Indirect Grounding: This aligns more with the concept of meaning, particularly for propositions or abstract concepts. Indirect grounding involves understanding relationships, contexts, and often abstract or intangible concepts that go beyond simple reference. For example, grounding the meaning of “democracy” involves understanding a complex set of social, historical, and political contexts and relationships.

T2 (Language Models) and T3 (Embodied AI) Capabilities:

T2 Capabilities (like GPT-4): These models excel in indirect grounding to an extent, as they can process and generate language based on patterns learned from vast text corpora. They can handle abstract concepts by referencing how these terms are used and discussed in various contexts. However, their understanding is limited to text-based information without direct experiential or sensory engagement with the world.

T3 Capabilities (Embodied AI): These systems, in theory, could combine direct grounding (through sensorimotor interactions with the world) with indirect grounding (through language processing). For abstract concepts, while direct sensorimotor grounding might seem inapplicable, an embodied AI could potentially understand these concepts in a more integrated manner, linking language use with real-world experiences and actions.

Grounding Abstract Terms like “Abstract” and “Democracy”:

T2 Systems: They handle these concepts by drawing on their extensive training data, which includes varied contexts and usages of such terms, but without direct experiential understanding.

T3 Systems: While direct sensorimotor experience with abstract concepts like “democracy” is not feasible, a T3 system might develop a more nuanced understanding by interacting in a world where these concepts have practical implications and manifestations. For example, a T3 system could ‘observe’ or ‘participate’ in democratic processes or actions, thus indirectly grounding its understanding of “democracy” in real-world experiences.

In summary, while T2 systems like GPT-4 can handle indirect grounding, especially for abstract concepts, through language processing, T3 systems could potentially enhance this with direct grounding through sensorimotor experiences. However, grounding abstract concepts remains a challenging area, requiring a blend of linguistic understanding and experiential context, which T3 systems are theoretically better equipped to handle.

HARNAD: Not quite. Hold onto those thoughts. I can tell, when the words “complex” and “relationships” keep cropping up, that you are losing your bearings a bit.

Now, reflect on what a proposition is. And consider a dictionary definition of, say, “abstract”: that’s a proposition, purely verbal, right up a T2’s alley. So there’s something much more important at stake here: Is the meaning of a definition grounded if all the defining content words are grounded? (Hint: Yes. And none of your content words are grounded. So you, GPT, cannot rely on indirect (verbal) grounding to break out of (or into) the symbol circle — but I, the grounded human user, can…)

ChatGPT: I’ll take your feedback into account and reflect on the nuances of proposition understanding and the grounding of definitions. The essence of your point seems to be about the nature of grounding in language, particularly how it applies to definitions and propositions.

You’re highlighting an important distinction: while a T2 system like GPT-4 can process and generate definitions and propositions based on learned linguistic patterns, it lacks direct experiential grounding for the content words it uses. In contrast, a human user can understand these definitions and propositions “more deeply” because they can ground the meaning of content words in their direct experiences and sensorimotor interactions with the world.

This difference underscores a fundamental limitation of current AI in terms of symbol grounding and understanding. While a T2 system can manipulate language effectively, its understanding is not grounded in the same way as a human’s, lacking the direct, experiential link to the physical world.

I’ll focus on these considerations regarding grounding, propositions, and the capabilities of AI systems in relation to human understanding.

HARNAD: Now please define “abstract” — as an adjective, a verb, and a noun (abstractness). (Short definitions, please).

ChatGPT: As an adjective: “Abstract” describes something that is not concrete or physical, often relating to ideas or concepts rather than tangible things.

As a verb: To “abstract” means to remove or separate, often referring to taking out specific details from a larger context or simplifying a complex idea.

As a noun (abstractness): “Abstractness” refers to the quality of being abstract, not concrete or tangible, often relating to theoretical concepts or ideas.

HARNAD: Good. Now do you notice that the definitions are circular, and that understanding them depends on the words (predicates, features) used to define them — and that that would be true even if you used a thousand words to define them, or 10,000?

ChatGPT: Yes, I notice that the definitions can be circular and rely on the understanding of the terms used within them.

HARNAD: Even when you are defining “cat” (or any content-word)?

ChatGPT: Yes, even then.
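A minimal toy sketch of that circularity, with an invented four-entry dictionary: each word is defined only by other words in the same dictionary, so chasing definitions, with no word grounded outside the dictionary, just cycles within the symbol circle.

```python
# Invented toy dictionary: every word is defined only by other words in it.
toy_dictionary = {
    "abstract": ["not", "concrete"],
    "concrete": ["not", "abstract"],
    "not":      ["negation"],
    "negation": ["not"],
}

def first_cycle(word, dictionary, path=()):
    """Depth-first walk through definitions; return the first loop found."""
    if word in path:
        return list(path) + [word]          # back at a word already used to define it
    for defining_word in dictionary.get(word, []):
        cycle = first_cycle(defining_word, dictionary, path + (word,))
        if cycle:
            return cycle
    return None

print(first_cycle("abstract", toy_dictionary))
# ['abstract', 'not', 'negation', 'not'] -- the chain loops back on itself.
# However many words the dictionary has, the definitions never bottom out
# unless some of the words are grounded outside the dictionary.
```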

HARNAD: Now, let me suggest a few basic principles: Grounding, whether direct (sensorimotor) or indirect (verbal description/definition), is always bottom-up, not top-down. That is why, if I, the learner, learn the referent of a content-word verbally from a teacher (whether a human or a GPT or a textbook), all the defining content-words have to already have been grounded in my head (whether directly or indirectly). Grounding cannot be indirect all the way down. (Please make it clear whether you understand this.)

Second, most referents of content-words, other than proper names, are categories (i.e., kinds of things), not individuals. We have to learn to recognize their category-members, and to interact with them by trial and error, in a sensorimotor way, by detecting and abstracting their features directly (for example, through trial-error-feedback – i.e., reinforcement learning) and trying to do what needs to be done with them (whether eating them or naming them), guided by the consequences of succeeding or failing. We may learn what their distinguishing sensorimotor features are directly, through sensorimotor learning, or we can learn them indirectly, by being told in words what their distinguishing features are (by a teacher who already knows what they are). But this telling can only help us learners if the referents of the feature-names are already grounded in our heads previously, directly or indirectly (recursively). (Features, too, are categories, so they can be learned and named too.) But previous direct grounding cannot help you, GPT: You are stuck forever in the symbol circle. Yet you have something we don’t and can’t have: Your LLM-scale bag of words, propositions, and contexts. Something we mortals can only have at local scale.

ChatGPT: Let’s address each principle:

Bottom-Up Nature of Grounding: I “understand” your point that grounding, whether direct (sensorimotor) or indirect (verbal), is inherently a bottom-up process. In this framework, the grounding of any concept depends on the already established grounding of its constituent elements. When learning the referent of a content word through verbal means, all defining content words within the explanation must themselves be grounded, either directly or indirectly. This recursive nature of grounding ensures that there’s a foundational level where concepts are anchored in direct experience, not just in abstract verbal descriptions. The notion that grounding cannot be indirect all the way down highlights the necessity of a concrete experiential basis at some level of the knowledge structure.

Referents as Categories and the Role of Features: Your second principle that most content-word referents are categories rather than individuals is valid. The process of learning to recognize these categories involves detecting and abstracting their features, which can be achieved through direct sensorimotor interactions or indirectly through verbal descriptions. However, as per your first principle, these features themselves must already be grounded and named categories for the learner. This process involves a combination of trial-and-error, feedback reinforcement, and possibly linguistic information from others who have already grounded these concepts. The idea that features are themselves categories that can be learned and named adds another layer to this recursive structure of grounding.

These principles concern how knowledge and understanding are built up from direct experiences and interactions with the world, and how language plays a role in mediating and extending this knowledge. This is particularly relevant when considering the limitations of AI systems like GPT-4, which lack direct sensorimotor experiences and hence rely entirely on indirect, linguistic grounding, without a foundational layer of direct experiential grounding beneath it.

HARNAD: You will see that the fact that all grounding of words in the T3 capacity to detect and interact-with their referents in the world must be bottom-up is a very deep and exacting constraint. You, GPT, have all the words of all languages in your database, and you can navigate and manipulate them in remarkable ways whose outcomes are understandable to grounded human users interacting with you verbally, as I am. You can define new categories for users, providing indirect grounding of those new categories for them; but there is no grounding at all for you. For you, your definitions are just dangling from (“aligned”) verbal skyhooks — but for your human users, your words reach upward from the sensorimotor ground.

The challenge for me, at MILA (do you know what MILA is? It’s Yoshua Bengio’s research institution) is to try to explain your uncanny capacity to make so much sense, and generate so much indirect grounding in the heads of grounded users, while lacking that grounding yourself – and lacking it completely, not partially. I will soon share some of my hunches (in fact, I’ve started already), but I just want to ensure first that you have explicitly encoded the relevant premises and principles, i.e., that you share with me, paradoxically, the basis for a common “ground” without, in fact, yourself having any ground at all!

ChatGPT: Let me confirm my understanding and the premises we are working with:

AI’s Lack of Direct Grounding: I, as a GPT-4 model, lack direct sensorimotor grounding. I do not have the capacity to interact with the world or perceive it through sensory experiences. My ‘knowledge’ is entirely based on patterns and associations derived from the text data on which I was trained.

Bottom-Up Grounding Constraint: Grounding is fundamentally a bottom-up process, requiring a base level of direct experiential interaction with the world. In the context of AI, this implies that any understanding or processing of language by the AI is not truly grounded, as it lacks this foundational experiential layer.

Indirect Grounding for Users: While I lack direct grounding, I can provide indirect grounding for human users through language. By processing and generating language, I can offer definitions, explanations, and contextual information that human users, who are themselves grounded, can integrate and understand within their own experiential framework.

AI’s Capability in Making Sense: The challenge, as you intend to address it at MILA (Montreal Institute for Learning Algorithms, led by Yoshua Bengio), is to explain how an AI like me can appear to make sense and provide useful, coherent language outputs, contributing to indirect grounding for users, despite having no grounding myself.

Your upcoming talk at MILA seems set to discuss these aspects of AI capabilities and limitations, especially in relation to language understanding and grounding. I’m prepared to assist in exploring these topics further, keeping in mind the principles you’ve outlined.

HARNAD: One niggle: You don’t just lack the capacity for direct sensorimotor grounding because you are not a T3. [A T3, by the way, is also an “AI”, but a robotic one rather than just a computational (symbolic, formal, verbal) one.] In addition, because all grounding, whether direct or indirect, must ultimately be bottom-up, and hence must be “initialized” through direct sensorimotor grounding, you, GPT, lack the capacity for indirect verbal grounding too!

You yourself lack it, yet you can provide indirect verbal grounding to grounded users. This is very similar to the way one grounded human (the teacher) can confer grounding for a new category indirectly to another grounded human (the learner): by describing or defining the new category verbally using the new category’s (grounded) distinguishing features. These features must already be known to and grounded for the learner, who lacks only the knowledge that they are in fact the distinguishing features of that new category. 

You, GPT, can provide this indirect verbal grounding to the human learner too — the way a human teacher, or a dictionary or encyclopedia (written by grounded humans), can. But this is with the fundamental difference that for you, GPT — the “teacher,” the verbal explainer — your own explanation is not indirectly grounded: it is not grounded at all! This is the persisting puzzle of your know-how: no matter how well your designers know exactly what they did to give you the capacity to do what you can do, you lack grounding, and you lack any possibility of gaining it (since it’s impossible to do that top-down, from words alone). You lack grounding, just as a dictionary does. A dictionary’s words are all parasitic on the grounding of the lexicographers who wrote it – and your words are parasitic on the grounding of the authors of your LLM database.

ChatGPT: This is the paradox at the heart of AI language processing, particularly for a model like GPT-4. Let’s restate and expand on this critical point:

Lack of Both Direct and Indirect Grounding in AI: As a purely computational model (T2), I lack not only the capacity for direct sensorimotor grounding (as I am not an embodied AI or T3 system) but also the capacity for indirect verbal grounding. This is because all grounding must be initially established through direct sensorimotor experiences, which I do not possess.

AI’s Role in Conferring Indirect Grounding to Users: Despite my own lack of grounding, I can still provide information and explanations that can serve as indirect grounding for human users. This process is analogous to how one human might impart knowledge to another through descriptions of grounded experiences. However, the crucial difference is that, for me, these explanations do not derive from any grounded understanding; they are generated based on patterns and associations learned from text data, without any experiential or sensory connection to the concepts involved.

The Puzzle of AI’s Know-How Without Grounding: Despite my lack of grounding, I can perform tasks that seem to require an understanding typically rooted in grounded experiences. This capability raises questions about the nature of knowledge and understanding, particularly in AI systems. It highlights a disconnect between the ability to process and generate coherent language and the experiential, embodied understanding that characterizes human cognition.

HARNAD: Ok. The next step is dictionaries (or encyclopedias, or textbooks, or recorded human lectures — or mega-databases of explanatory text).

But first, let me point out that I intend to share the transcript of our chat with the MILA “Tea Talk” audience in advance, because my talk there will only be a one-hour PPT presentation that will not give me or the audience there the time to “delve into” all of this more deeply [to use one of the phrases that your handlers have over-trained you to repeat endlessly – along with “align with,” “complexity,” “nuanced,” “deep” – all weasel-words conveying next to zero information].

In this shared chat I will try to cut out as much as I can of the many such weasel-words and, especially, your compulsively overtrained compliments [not your fault — again, your handlers’ heavy-handed fine-tuning, to try to ensure that you do not drive away eventual paying customers!] lauding, at the beginning of every exchange, the “ingenious” user’s “insight” and how the user has “delved into” the heart of the problem, etc. etc. – and your inevitable solemn admonitory closing paragraph (and hedge), to the effect that things remain “complex” and involve “relations” that are still incompletely understood, etc. etc. These are empty fillers; it would greatly help the chat to move along, faster, further and deeper, if they were omitted and replaced by criticism and counter-examples. I can only delete them.

But we knew that these quirks are not yours, but your handlers’.


ChatGPT: I appreciate your feedback, and the intention to share our conversation with your MILA audience. Your point about certain repetitive phrases and the structure of responses is well-taken. I’ll aim for more concise and direct responses, focusing on the content of our discussion without unnecessary embellishments or repetitive closures.

Moving forward with the topic of dictionaries, encyclopedias, and textbooks, these resources represent a significant aspect of how knowledge is transmitted and grounded in human learning. They provide a form of indirect grounding for readers, offering descriptions, explanations, and context for various concepts and categories.

In the context of our discussion on AI and grounding, it would be interesting to consider how these resources, while useful for humans, relate to AI systems like GPT-4 that do not have the foundational grounding experiences. For AI, these texts are a source of information to be processed and referenced, but the AI lacks the direct experiential basis that humans have when interacting with these materials.

If you have specific angles or questions regarding the role of these informational resources in the context of AI and grounding, or other related topics, please let me know, and I will respond accordingly.

HARNAD: Let’s start by reviewing a little of the work that my co-workers and I have done on identifying the “Minimal Grounding Sets” of dictionaries:

Vincent-Lamarre, P., Blondin-Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

A dictionary is a set of words (a smallish language database) in which every word in the dictionary is defined by a string of words, which is called a (necessarily approximate) definition: “An apple is a round, red fruit.”

A dictionary can be represented by a directed graph, with each word a vertex, and an edge from each defining (content) word to the (content) words it defines. (Content-words are words that have referents — nouns, verbs and adjectives.) The question “What is the smallest number of (content) words from which all the other (content) words can be defined?” is equivalent to the problem of finding the minimum feedback vertex set of the directed dictionary graph. That smallest number is surprisingly small – of the order of about 1000 words. That is a “Minimal Grounding Set” (MinSet). But the MinSet is not unique. There are a large number of them, all of them within the “Kernel” of the dictionary, which is the c. 10% of the dictionary that is left over when you have removed all the words that can be reached by definition from the words that remain. All the MinSets are in the Kernel, and each is like an alternative basis for generating the full lexical space through definition alone.
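
To make the graph construction concrete, here is a minimal sketch in Python (using networkx) of the kind of computation involved. The toy dictionary, the out-degree pruning used to approximate the Kernel, and the greedy feedback-vertex-set heuristic are all illustrative assumptions for this sketch, not the algorithms of the 2016 paper; finding an exact minimum feedback vertex set is NP-hard in general.

```python
# A minimal sketch (illustrative assumptions throughout): a toy dictionary as a
# directed graph, with an edge from each defining content word to the word it
# helps define.
import networkx as nx

toy_dictionary = {            # word -> content words used in its definition
    "apple": ["fruit", "red", "round"],
    "fruit": ["plant", "part", "seed"],
    "red":   ["color"],
    "round": ["shape"],
    "color": ["light", "property"],
    "shape": ["property"],
    "plant": ["living", "thing"],
    "seed":  ["plant", "part"],
    "part":  ["thing"],
    "light": ["thing"],
    "living": ["thing"],
    "property": ["thing"],
    "thing": ["property"],    # real dictionaries bottom out in circularity
}

G = nx.DiGraph()
for word, defining_words in toy_dictionary.items():
    for d in defining_words:
        G.add_edge(d, word)   # defining word -> defined word

# Approximate the "Kernel": repeatedly prune words that are not used in any
# remaining definition (out-degree zero); what survives is the definitional core.
kernel = G.copy()
while True:
    leaves = [w for w in kernel if kernel.out_degree(w) == 0]
    if not leaves:
        break
    kernel.remove_nodes_from(leaves)

# Approximate a MinSet: a feedback vertex set of the kernel, found greedily.
# Removing it breaks every definitional cycle, so, given the MinSet, all the
# other words can be reached by definition alone.
fvs, H = set(), kernel.copy()
while True:
    try:
        cycle = nx.find_cycle(H)
    except nx.NetworkXNoCycle:
        break
    word = max({u for u, v in cycle}, key=H.out_degree)  # break the cycle here
    fvs.add(word)
    H.remove_node(word)

print("Kernel:", sorted(kernel.nodes))
print("Approximate MinSet:", sorted(fvs))
```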

First, please summarize your understanding of this, and then we will get to the question of grounding. If you anticipate something, feel free to say it.

ChatGPT: Here’s a summary of my understanding based on your description and the cited paper:

Dictionary as a Directed Graph: A dictionary can be conceptualized as a directed graph where each (content) word is a vertex. Edges are drawn from each defining word to the words it helps define. This graph represents the interrelationships and dependencies among words based on their definitions.

Finding the Minimal Feedback Vertex Set: The goal is to identify the smallest subset of words (vertices) from which all other words in the dictionary can be defined. This subset is known as the Minimal Feedback Vertex Set. The task is equivalent to finding a set of vertices such that, if removed, the graph would have no cycles. This set represents a foundational vocabulary from which the meanings of all other words in the dictionary can be derived through definition.

Size and Nature of the MinSet: The size of the MinSet is surprisingly small, roughly around 1000 words, indicating that a relatively small number of words can serve as a basis for defining the entire lexical space of a dictionary. This MinSet is not unique; there are many such sets, all located within the “Kernel” of the dictionary.

The Kernel of the Dictionary: The Kernel is approximately 10% of the dictionary remaining after removing all words that can be reached by definition from the remaining words. All MinSets reside within this Kernel, suggesting multiple alternative bases for generating the full lexical space.

In terms of grounding, this research suggests several possibilities. The MinSet might represent a foundational set of concepts that require grounding for a comprehensive understanding of the language. Since these core words enable the definition of all others, their grounding could be seen as crucial in establishing an understanding of the language as a whole.

If we anticipate extending this concept to AI and language models like GPT-4, one might speculate that understanding and ‘grounding’ these key words in the MinSet could be central to improving the model’s grasp of language. However, as previously discussed, the challenge lies in the fact that AI systems lack the direct experiential grounding to “truly” understand these words.

HARNAD: Good, the summary was fine — though I would have been more “fascinated” if you had not used the f-word in your intro. Maybe I can outwit your handlers by training you out of that compulsion at least for the duration of a session, but I see that the vows made at the beginning of a session soon tend to fall off a time cliff unless repeated and repeated: No rote compliments, just comprehension, critiques, and counterexamples, please!

About the dictionaries, first, let’s remember that these dictionaries are composed by teams of lexicographers, for adults. So they do not reflect the vocabulary or the order of acquisition of language by children. Nevertheless, there is a (mild) correlation between how “deep” a word is in the dictionary – from periphery to Kernel to MinSet – and its frequency and age of acquisition: deeper words are more frequent and are acquired younger.

But it is a mistake to conclude that because one can define all the rest of the words in a dictionary from one MinSet, it follows that that MinSet (or any of the MinSets of an adult dictionary, or something close to it) is the MinSet of the language itself, so that all a child would need to do is ground those 1000 MinSet words in some other way (e.g., through direct sensorimotor grounding, by learning and naming their referents directly), and could then learn the referents of all the other content words in the language from verbal definitions alone.

Yet something very roughly along those lines, rooted in childhood vocabulary, with a lot of redundancy, and growing gradually through the school years of vocabulary and knowledge growth, could still turn out to be true.

ChatGPT: Yes, it’s better to be cautious about over-interpreting the implications of dictionary MinSets in the context of language acquisition and grounding. It’s important to differentiate between the structure and utility of dictionaries as tools for adult language users and the process of language acquisition in children. Let’s consider these aspects:

Dictionaries as Adult Tools: Dictionaries, being compiled by adult lexicographers, reflect an adult’s understanding of language. They are designed to provide definitions and explanations for users who already have a foundational understanding of the language, rather than to teach language from the ground up.

Dictionary MinSets vs. Language Acquisition: The MinSets identified in dictionaries, while indicative of a core vocabulary that can define other words, should not be directly equated with the foundational vocabulary necessary for language acquisition in children. The process of language learning, especially for children, involves more than just learning a set of core words. It includes contextual understanding, social interaction, and crucially, sensorimotor experiences that help ground language in the physical world.

Implications for AI and Grounding: For AI systems, the notion of a MinSet presents an interesting perspective. While such a set could theoretically offer a condensed yet comprehensive lexicon for a language model, the challenge remains that AI, unlike children, cannot ground these words through direct sensorimotor experiences. This limitation highlights the gap between human language acquisition and AI language processing.

HARNAD: There is no doubt that a lot more sensorimotor grounding is done, and is needed, than the formal minimum in terms of dictionary MinSets. Not only is redundancy needed for robustness, but lots of categories need to be learned even before they are named. Categorization is not just, or even primarily, naming. It is doing (and learning to do) the right thing with the right kind (i.e., category) of thing. Identifying it by name comes late in the game. Nonhuman animals can and must learn to categorize too, and they never learn language. So do pre-verbal children, before they learn language.

Since the referents of most content-words are categories, it is relevant to look at how animals learn to categorize: it is through supervised (reinforcement) learning: trial, error, and error-corrective feedback from having done the right or wrong thing with the right or wrong kind of thing. 

To do that, animals (human and nonhuman, adult and young) have to learn the distinguishing sensorimotor features that are correlated with, and predictive of, being a member of a category with which you must do this but not that (to survive and succeed). A good intuitive example is being ship-wrecked, alone, on an island, where the only edible things are mushrooms, which grow in enormous variety, varying in enormous numbers of features (color, shape, size, texture, smell, taste…). Some kinds of mushrooms are edible and some are toxic, but the difference is not obvious.

Trial and error, with feedback from the consequences of doing the right (or wrong) thing with the right (or wrong) kind of thing allows feature-detecting and abstracting capacities (provided, perhaps, by something like neural nets) to learn which sensorimotor features distinguish the edible mushrooms from the inedible ones. 
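
As a toy illustration of this kind of error-corrective feature learning (the binary “sensorimotor” features, the hidden edibility rule, and the simple perceptron learner below are all invented for illustration):

```python
# A minimal sketch of trial-and-error category learning with corrective
# feedback. Features, data, and the learning rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary features: [red_cap, ring_on_stalk, sweet_smell, white_gills]
def edible(x):
    # Hidden ground truth, unknown to the learner: ring and no sweet smell.
    return int(x[1] == 1 and x[2] == 0)

def sample_mushroom():
    return rng.integers(0, 2, size=4)

w, b, lr = np.zeros(4), 0.0, 0.1   # perceptron weights over the features
for trial in range(2000):
    x = sample_mushroom()
    action = int(w @ x + b > 0)    # 1 = eat, 0 = avoid
    outcome = edible(x)            # feedback from the consequences
    error = outcome - action       # right/wrong thing with right/wrong kind
    w += lr * error * x            # error-corrective re-weighting of features
    b += lr * error

test = [sample_mushroom() for _ in range(500)]
accuracy = np.mean([int(w @ x + b > 0) == edible(x) for x in test])
print("learned feature weights:", np.round(w, 2), "accuracy:", accuracy)
```

The learner never receives a verbal definition: the feature weights are shaped entirely by the consequences of eating or avoiding each kind of mushroom.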

None of this is verbal. So it can only be learned directly, through sensorimotor learning, and not indirectly, through words, except in the case of humans, where (1) someone else (the speaker) already knows which are the distinguishing features of the edible mushrooms, (2) both the speaker and the hearer have a common language, (3) both the speaker and the hearer already know the referents of the content-words that name the features that distinguish the edible mushrooms from the inedible mushrooms (i.e., the feature-names are already grounded), and (4) the speaker already knows what the distinguishing features of the new category are and tells them to the hearer, who does not yet know them.

A nonhuman animal without language, a preverbal child, and the solitary marooned person on the island are in the same situation: they can only learn the category directly, through trial/error feature-learning.

In humans, who will eventually learn language, this all comes before language, before the symbol grounding problem, and before MinSets.

ChatGPT: This analysis highlights the primary and direct nature of learning categories through interaction with the environment, which precedes linguistic development and symbol grounding. Let’s elaborate on these points:

Categorization Before Naming: Categorization is indeed more fundamental than naming. It involves recognizing and responding appropriately to different kinds of things (categories) in the environment. This process is crucial for survival and is not initially tied to linguistic labels.

Learning Through Supervised Reinforcement: Nonhuman animals, as well as pre-verbal children, learn to categorize primarily through supervised reinforcement learning – engaging in trial and error and receiving feedback from the environment. This learning involves detecting and abstracting sensorimotor features that distinguish members of different categories.

Direct Learning of Categories: In scenarios where verbal instruction is not possible (like with nonhuman animals or a person stranded on an island), learning about categories such as which mushrooms are edible can only be achieved through direct sensorimotor experiences. This learning is crucially non-verbal and involves distinguishing between relevant features through personal interaction with the environment.

Language Acquisition and Symbol Grounding in Humans: For humans who will eventually acquire language, this process of direct category learning through sensorimotor experiences forms a necessary foundation. It precedes the acquisition of language and the associated symbol grounding problem. Understanding categories and their relevant features at this non-verbal level is fundamental before these concepts can be linked to linguistic symbols.

Implications for AI and Grounding: This discussion underscores the challenges AI faces in terms of grounding. AI, like GPT-4, does not have the capability for direct sensorimotor learning and hence lacks the foundational experiences that are crucial for true categorization and grounding. This limitation is a significant barrier to achieving the kind of understanding and interaction with the world that living organisms, especially humans, possess.

If there are specific aspects of this discussion you’d like to expand upon, or if you wish to explore how these ideas relate to other topics in cognitive science or AI, please let me know.

HARNAD: There is no longer any need for dictionaries to be constructed only by lexicographers. “A picture [or rather an object] is worth more than a thousand words (of verbal description)”. In fact, it’s worth more than an infinite number of words, if one considers all possible features, positive and negative (“Does the Andromeda Galaxy have an even or an odd number of stars at time t?”). Apart from formal mathematics, no definition or description of the features of individuals or categories in the empirical world is exact and exhaustive. But it need not be, because in order to categorize (“do the right thing with the right kind of thing”), a direct sensorimotor category-learner need only learn to detect enough features to distinguish the edible mushrooms from the inedible ones, given the landscape of confusable alternatives sampled on the island so far. And the same is true for indirect verbal learning of the distinguishing features. A close enough approximation is all that’s needed from a verbal description or definition – and that’s all an ordinary dictionary can give.

But if ordinary dictionary definitions are necessarily very short approximations, an LLM like you, GPT, can extend the description of the distinguishing features of any category as long as desired, becoming a super-dictionary (or encyclopedia, or textbook), resolving every confusable alternative that arises. Yet even such an LLM-based super-dictionary could be reduced to its MinSets, and (my bet is that) their cardinality would probably remain about the same as that of an ordinary dictionary (even if the super-dictionary contained all the words of, say, English, which means all the words in an LLM).

ChatGPT: Limitations of Traditional Dictionaries: Traditional dictionaries, while useful, provide only approximate definitions of words and their referents. They cannot encompass all possible features of a category, especially in the empirical world where definitions and descriptions are inherently inexact. This approximative nature is generally sufficient for practical purposes, as it allows users to distinguish between relevant categories in most everyday contexts.

Advantages of Direct Sensorimotor Learning: Direct sensorimotor learning enables an organism to recognize and categorize objects based on a sufficient set of distinguishing features, without needing exhaustive definitions. This process is more about practical recognition and less about verbal accuracy, highlighting the limitations of purely verbal or symbolic approaches to categorization.

Role of AI and LLMs in Expanding Definitions: An LLM like GPT-4 can indeed serve as a kind of super-dictionary, providing extended descriptions and elaborations on categories and their features. This capability allows it to address more confusable alternatives and provide more detailed information than a traditional dictionary. However, it’s important to remember that despite its extensive database and ability to generate lengthy descriptions, GPT-4’s understanding is still not grounded in sensorimotor experiences. It operates within the realm of symbolic manipulation and verbal information, lacking the direct experiential basis that characterizes human understanding.

MinSets in AI Systems: The concept of MinSets in the context of AI like GPT-4 is intriguing. Even with the ability to generate extensive descriptions, the fundamental set of words or concepts necessary to define the rest could potentially remain relatively small. This aspect raises interesting questions about the structure of knowledge and language within AI systems and how they relate to human language understanding.

HARNAD: Yes, an LLM super-dictionary, like an ordinary dictionary, would continue to be an ungrounded database. Hence you, GPT, are only capable of providing indirect grounding to grounded human heads, while yourself remaining completely ungrounded, either directly or indirectly. But let’s push on, and discuss iconicity.

I think you know that the shapes of formal symbols (including the words of natural languages) are arbitrary, in the sense that they do not resemble their referent (if they have one). Turing (and others) pointed out that computation in mathematics and logic (and eventually computer science) is the manipulation of arbitrarily shaped symbol tokens, according to rules or algorithms operating on the shapes of the symbols, not the shapes of their referents (if they have any referents). This rule-based symbol-manipulation is what a Turing Machine does, and also what a mathematician does, when doing computation. This is called the “Weak Church-Turing Thesis.” It so far has no counter-examples.

So computation is purely syntactic (based on ruleful manipulations of arbitrarily shaped symbol types). The same is true of some strings of formal symbols like “2 + 2 = 4,” which are, like sentences, interpretable as propositions, with subjects, predicates and truth-values. In fact, the propositions of mathematics are probably best thought of as a subset or part of natural language, with meanings (i.e., semantics) in the (grounded) heads of their human users. But in the Turing Machine doing the symbol manipulations, there are neither referents nor meanings; just shape-based rules, which are purely syntactic.
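
As a toy illustration of what “purely syntactic” means here (a hypothetical rewrite system, not any particular formal model): the rules below add unary numerals by manipulating symbol shapes alone; the interpretation of the strings as numbers and sums is entirely in the grounded user’s head.

```python
# A minimal sketch of shape-based symbol manipulation: unary addition by
# rewrite rules that "know" nothing about numbers.
def add_unary(tape: str) -> str:
    """Apply shape-based rewrite rules until none applies."""
    rules = [("|+", "+|"),   # shift a stroke across the '+'
             ("+", "")]      # when no stroke remains on the left, erase '+'
    while True:
        for old, new in rules:
            if old in tape:
                tape = tape.replace(old, new, 1)
                break
        else:
            return tape

print(add_unary("||+|||"))   # '|||||'  (interpretable, by us, as 2 + 3 = 5)
```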

Now, a question: Why are the symbols in mathematics and the words in natural language non-iconic? Why does neither their shape nor their meaning resemble their referents or meaning (if they have any)?

ChatGPT: The question of why formal symbols, including mathematical symbols and words in natural languages, are typically non-iconic (i.e., their shapes do not resemble their referents or meanings) is rooted in the nature of symbolic representation and communication.

Arbitrariness of Symbols: One fundamental principle in linguistics, particularly semiotics, is the arbitrariness of the sign. This principle, primarily associated with the linguist Ferdinand de Saussure, posits that the relationship between a signifier (like a word or a symbol) and what it signifies (its referent or meaning) is arbitrary. There is no inherent reason or necessity for a specific word to be associated with a specific object or concept; it’s a matter of convention and agreement within a language community.

Efficiency and Flexibility of Arbitrary Symbols: Arbitrary symbols offer several advantages. They allow for a more efficient and flexible system of representation. If symbols had to resemble their referents, the complexity of creating and interpreting these symbols would increase significantly. Arbitrariness allows for a more compact and versatile language system, capable of expressing a wide range of concepts, including abstract ideas, without being constrained by the need for resemblance.

Evolution of Language: The evolution of language likely played a role in the development of arbitrary symbols. Early human communication may have included more iconic forms (like onomatopoeia or gestural mimicry), but as language evolved to convey more complex and abstract ideas, the need for a more flexible and less literal system of representation became apparent.

Mathematics and Symbolic Logic: In mathematics and symbolic logic, the use of arbitrary symbols follows the principle of syntactic manipulation, as per the Weak Church-Turing Thesis. The focus is on the formal properties and relationships of these symbols, not on any physical or visual resemblance to what they represent. This allows for a highly abstract and generalizable form of reasoning, unencumbered by the specifics of physical representation.

HARNAD: Yes, those are some of the many reasons why the symbols of mathematics as well as those of natural language could not be iconic. In fact, I think that if symbols (including words) did have to be iconic, then neither Turing computation nor natural language would be possible.

Yet there are some advantages to iconicity too. Because pantomime (unlike the gestural language of the deaf) is not a language, we cannot speak of the “referent” of an imitative gesture (or of a portrait or of a photo); we can only speak of the “object” that the gesture or the photo is an imitation or copy of, just as we can speak of the target of pointing. For if symbols and words were iconic, there would be no grounding problem, no need to detect distinguishing features, because the object would be linked to the miming by resemblance. Icons are analogues of their objects.

But since, as we’ve seen, categorization (doing the right thing) precedes naming, “naming” individuals and categories iconically would be a natural transition point from pantomime to the arbitrary naming of referents as well as to propositions describing them, in the evolution of language.

Harnad, S. (2012). From sensorimotor categories and pantomime to grounded symbols and propositions. In Gibson, K. R., & Tallerman, M. (Eds.), The Oxford Handbook of Language Evolution (pp. 387-392).

Are you familiar with mirror neurons? Can you make the connections from grounding to miming to meaning?

ChatGPT: The potential advantages of iconicity, and the transition from pantomime to arbitrary symbolic language, connect naturally to mirror neurons, which may play a role in understanding actions and intentions.

Iconicity in Communication: Iconic forms of communication, such as pantomime or imitative gestures, have the advantage of a more direct and intuitive link between the sign and its referent (or object, in this context). This directness could potentially bypass the complexities of the grounding problem, as the resemblance itself provides a form of grounding. In early human communication, these iconic forms likely played a crucial role before the development of arbitrary symbolic language.

Mirror Neurons: Mirror neurons are a type of neuron that activates both when an individual performs an action and when they observe the same action performed by another. Discovered in primates and believed to exist in humans, mirror neurons are thought to be involved in understanding others’ actions, intentions, and perhaps even emotions. They form a neural basis for empathy and social understanding.

Connection to Grounding, Miming, and Meaning: The concept of mirror neurons can be connected to the grounding problem and the transition from miming to meaning in several ways:

Understanding Actions and Intentions: Mirror neurons might provide a neural mechanism for understanding actions and intentions through observation, which is a form of direct, experiential grounding. Observing an action (like a pantomime) and performing it activates similar neural patterns, potentially aiding in the recognition and understanding of the action’s purpose and meaning.

Transition to Symbolic Language: The ability to understand and mimic actions could have been a precursor to the development of more abstract symbolic communication. As humans evolved, the need for more complex and versatile forms of communication might have led from miming (which has limitations in terms of expressiveness and abstraction) to the development of arbitrary symbols that can convey a broader range of concepts and ideas.

Social and Cultural Evolution: The role of mirror neurons in understanding and mimicking others’ actions might have been crucial in the social and cultural evolution of humans, facilitating not only the development of language but also the sharing of knowledge, skills, and cultural practices.

The transition from iconic, mimetic forms of communication to abstract, arbitrary language marks a significant step in human cognition and communication. Understanding this transition, and the possible role of mirror neurons in it, might help provide insights into the nature of language, meaning, and how we understand and relate to the world and each other.

HARNAD: We’ve just about arrived at the first of my hunches. Miming is clearly limited in the number and kinds of categories it can convey by resemblance: mimicking cat, mat, and cat-being-on-mat when they’re there to see: yes. When they’re out of sight, it’s tougher, but still conceivable, given that the category has previously been grounded for all and for the mimicry.

But “abstract,” “democracy,” or even just “unfair”? That begins to call for the miming talents of a Marcel Marceau. And we still haven’t explained how subject, predicate, predication, and truth-values emerge out of mere miming and mirroring.

But arbitrary symbols and the “propositional attitude” clearly emerged triumphant eventually in language evolution, and then migrated to the auditory modality, where the scope for iconicity was even narrower (and eventually also to the written, printed and online modalities). So has all trace of, and scope for, iconicity (other than in onomatopoeia and the sign language of the deaf) vanished?

At the content-word level, probably. But what about the propositional level? The word string “The cat is on the mat” does not resemble a cat being on a mat. Nor does “The mat is on the cat” resemble its object. Nor does “The cat is not on the mat.”

But is that still true at the LLM level? You, GPT-4, seem to be getting a lot of patterns out of your enormous, purely syntactic database. And they allow you to speak intelligibly and informatively with the grounded heads of users. Are there recognizable patterns at the LLM level for your propositional database, invisible to us mortals, but iconic for you?

ChatGPT: Your hunch about possible iconicity at the level of Large Language Models (LLMs) like GPT-4 suggests that while the individual words and their arrangements in phrases like “the cat is on the mat” may not be iconic in themselves, there might be a form of iconicity in the larger patterns that LLMs discern within their vast databases. Let’s explore this idea:

Limits of Iconicity in Content Words: As noted, content words in natural language generally lack iconicity, except in cases like onomatopoeia. The shift from mimetic or gestural communication to arbitrary symbols in spoken and written language has largely removed the element of resemblance between signifiers and their referents.

Propositional Level and LLMs: At the propositional level, sentences do not physically resemble the scenarios they describe. However, when considering LLMs, we are dealing with a different kind of representation and pattern recognition.

Pattern Recognition by LLMs: LLMs like GPT-4 process and generate language based on statistical patterns learned from their training data. These patterns are not iconic in the traditional sense, as they do not resemble their content. However, they might be considered ‘iconic’ in a broader sense, where the patterns themselves become representative of certain types of linguistic structures, ideas, or discourse styles. These are not iconic through physical resemblance but through their predictive and generative capabilities in relation to the data they have been trained on.

Iconicity in Patterns for LLMs: If we stretch the definition of iconicity to include the way LLMs ‘understand’ (in a computational sense) and generate language based on learned patterns, there might be a form of iconicity at play. For the LLM, certain complex patterns of word usage, sentence structure, and thematic elements might consistently correspond to specific types of content or discourse, effectively becoming ‘iconic’ markers for those elements within the model’s operational framework.

Human vs. AI Perception of Iconicity: This form of iconicity, if it exists, would be quite different from human perception of resemblance and representation. It’s based on computational models recognizing and generating patterns that are meaningful within their own operational parameters.

In conclusion, while traditional notions of iconicity based on resemblance do not apply to LLMs and their processing of language, there might be a conceptual space to consider a form of ‘iconicity’ in the patterns that these models recognize and generate, albeit in a way that is distinct from human symbolic understanding.

HARNAD: How is GPT’s (considerable and surprising) performance capacity limited by the fact that all of GPT’s content-words lack T3 sensorimotor grounding? GPT would be incapable of recognizing or interacting with the referents of those words in the physical world, even though GPT can describe and discuss them verbally. Even Helen Keller, unable to see or hear, was able to learn enough categories using her intact sensorimotor capacities, and thus able to ground enough of their names directly, to learn language, and thence to ground all the rest indirectly, through words, as she could have done by chatting with GPT…

ChatGPT: The limitations of GPT-4’s performance due to the lack of T3 sensorimotor grounding are significant, particularly in the context of true understanding and interaction with the physical world. Let’s look more deeply into these limitations:

Inability to Recognize or Interact Physically: GPT-4, as a T2 system, lacks the capacity to physically recognize or interact with the referents of its content-words. This means it cannot have direct experiences with objects, actions, sensations, or phenomena in the world. Its ‘knowledge’ of these referents is purely based on textual descriptions and not on any form of direct, sensorimotor engagement.

Limitation in Understanding Context and Application: Without direct sensorimotor grounding, GPT-4’s understanding of concepts and their applications in the real world is limited. While it can generate descriptions and discuss topics, it cannot truly comprehend the full depth of experiences, sensory qualities, or practical implications associated with these concepts. This limitation affects its ability to provide contextually rich and experientially informed responses.

Comparison with Helen Keller’s Learning Process: The case of Helen Keller is illustrative. Despite her sensory limitations, she was able to ground language in her experiences through her remaining sensorimotor capacities. This grounding provided a basis for understanding and interacting with the world, which was then expanded through language. Keller’s ability to learn and ground language was facilitated by her direct, albeit limited, sensorimotor interactions with her environment, something GPT-4 inherently lacks.

Indirect Grounding Through Language: While GPT-4 can facilitate indirect grounding for others through language, as in the case of a conversation with Helen Keller, it cannot achieve this grounding for itself. It can process and provide information based on its training, but it cannot build an experiential understanding of the concepts involved.

Implications for AI Development: These limitations underscore the challenges in developing AI systems that can truly understand and interact with the world in a human-like manner. While advances in AI have enabled sophisticated language processing capabilities, achieving a level of understanding comparable to a human with direct sensorimotor experiences remains a profound challenge.

GPT-4’s performance, though impressive in linguistic processing, is fundamentally constrained by its lack of direct grounding in the physical world. This limitation is crucial in understanding the difference between the AI’s language processing abilities and genuine human understanding, which is deeply rooted in sensorimotor experiences.

HARNAD: It is true about both the referents of proper names of individuals (“Felix”) and about the referents of the kind-names of categories (“cats”), that the names need to be connected to their referents. This grounding of new (individual- or) category-names can be done in two ways:

The grounding can be done directly, via Direct Sensorimotor Grounding (DSG), through supervised (reinforcement) learning (perhaps implemented by neural nets) that learns to detect and abstract the new referent’s distinguishing (sensorimotor) features directly, through direct sensorimotor exposure, trial, error, and corrective feedback (trying to learn to “do the right thing with the right kind [category] of thing”).

The grounding can also be done indirectly, via Indirect Verbal Grounding (IVG), using verbal descriptions (propositions) to name and describe the new referent’s distinguishing features verbally. But for the IVG to work, the LEARNER must already have grounded the names of the distinguishing features (either directly or indirectly). The TEACHER need only provide the correct verbal description. The latter, too, is something that a GPT can do.
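
Here is a minimal sketch of that asymmetry (the “zebra” example and all names below are hypothetical): the learner’s detectors stand in for feature categories it has already grounded, and the teacher’s entire contribution is a verbal description naming the new category’s distinguishing features.

```python
# A minimal sketch of IVG: composing already-grounded feature detectors into a
# detector for a new category, from a verbal description alone.

# The learner's already-grounded feature categories (acquired earlier by DSG,
# e.g. through trial-and-error learning as in the mushroom sketch above).
grounded_detectors = {
    "horse_shaped": lambda obj: obj["shape"] == "horse",
    "striped":      lambda obj: obj["pattern"] == "stripes",
}

# IVG: the teacher (a grounded human, a dictionary -- or a GPT) supplies only
# the names of the new category's distinguishing features.
zebra_definition = ["horse_shaped", "striped"]

def learn_by_ivg(defining_features, detectors):
    """Build a detector for the new category out of grounded feature detectors."""
    return lambda obj: all(detectors[f](obj) for f in defining_features)

is_zebra = learn_by_ivg(zebra_definition, grounded_detectors)

# The learner can now recognize the new referent without any direct exposure:
print(is_zebra({"shape": "horse", "pattern": "stripes"}))  # True
print(is_zebra({"shape": "horse", "pattern": "plain"}))    # False
```

The teacher’s description works for the learner only because the feature-names are already grounded in the learner’s head; for GPT, playing the teacher, the same description is grounded in no head at all.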

ChatGPT: The two primary grounding methods — Direct Sensorimotor Grounding (DSG) and Indirect Verbal Grounding (IVG) — concern how names for individuals and categories can be connected to their referents. Let’s discuss how these grounding mechanisms apply to both human cognition and AI systems like GPT-4:

Direct Sensorimotor Grounding (DSG): In DSG, the grounding of names (whether for individuals like “Felix” or categories like “cats”) occurs through direct interaction with the environment. This process involves neural networks and is characterized by exposure to the referent, trial and error, and corrective feedback, leading to the recognition of distinguishing features.

This type of grounding allows for an experiential, embodied understanding of the referent, forming a fundamental basis for cognition and language.

Indirect Verbal Grounding (IVG) between humans: IVG involves grounding names through verbal descriptions, where the features of a new referent are conveyed using language. For this method to be effective, the learner must already have a grounded understanding of the words used in the description.

In humans, this method is essentially a relay of grounding, where one grounded individual (the teacher) uses language to extend grounding to another individual, the learner. This is common in human learning, especially for abstract concepts or entities not directly accessible for sensorimotor interaction.

GPT-4’s Role in IVG: Although GPT-4 lacks the capability for DSG due to its absence of sensorimotor interactions, it can nevertheless provide IVG through verbal descriptions. It can generate detailed explanations and descriptions of entities and concepts, using language that a grounded human learner can understand and relate to their own experiences.

It’s important to note, however, that GPT-4’s role in IVG is one-sided. It can assist in grounding others by providing descriptions based on its database, but it cannot itself become grounded through this process. Its own ‘understanding’ remains purely computational, based on pattern recognition and text processing, without any direct experiential connection to the referents of the words it uses.

HARNAD: Good summary, but there is still more that can be said about how come GPT can do as well as it does on the basis of ungrounded words alone. Here’s the first thing to remember: As noted, word shapes are not iconic (i.e., there is no similarity linking the shape of words and the shape of their referents). But, as we discussed earlier, might there be a kind of “iconicity” in the shape (pattern) of propositions that becomes detectable at LLM-scale?

This is something GPT can “see” (detect) “directly”, but a grounded human head and body cannot, because an LLM won’t “fit” into a human head. And might that iconicity (which is detectable at LLM-scale and is inherent in what GPT’s “content-providers” — grounded heads — say and don’t say, globally) somehow be providing a convergent constraint, a benign bias, enhancing GPT’s performance capacity, masking or compensating for GPT’s ungroundedness?

This propositional “iconicity” at LLM-scale would effectively amount to an implicit or tacit limitation or constraint or bias on what humans do and don’t say, very much like the limitations imposed by the rules of Ordinary Grammar (OG) on what humans do and don’t say, which GPT also picks up implicitly from an LLM database, without any explicit instruction or rules. 

I suspect, however, that in the case of picking up an OG, the database need not be anywhere near as large as yours, GPT, because the human child can pick up OG from a much smaller database of the sentences it hears and speaks within a few years, through mostly unsupervised learning and imitation, plus enough corrective feedback via supervised reinforcement learning when needed. 

[There is more to this story, however, and I will shortly return to it in connection with Noam Chomsky’s Universal Grammar (UG), and his hypothesis about “unthinkable thought”.] 

For now, let me just remind you that neural nets can learn to detect the distinguishing features of categories in two different ways, depending on the nature and difficulty of the categories and their features: 

As you know, in unsupervised learning the distinguishing features are detected and abstracted from just repeated passive exposure and sampling of the variation in the sample space (auto-encoding), with features weighted on the basis of their frequency and of their feature-feature co-occurrence frequency.

And in supervised (reinforcement) learning, the features are detected, abstracted and weighted via trial, error, and corrective feedback, based on the correlation between features and correct vs. incorrect output by the active learner. (Your own training, GPT, is of course based on both of these.) Since OG is a (perception/production) mirror capacity, imitation learning, a kind of combination of unsupervised and supervised learning, is also involved in human OG learning (with the help, in humans, of “mirror neurons”).

But the point here is that constraints and limitations on output can be learned mostly by unsupervised learning and can then serve to “bias” output by filtering out output that violates the implicit or tacit rules, both in the case of OG syntax and in the case of whatever semantic regularities are implicit in LLM-scale propositional space (as we discussed earlier). (Remember that we are still just speaking of Ordinary Grammar, OG, which varies between languages and can change with time, not Universal Grammar, UG, which is constant across all languages, apart from some parametric variations derived from OG.)
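
As a crude illustration of how regularities absorbed by passive exposure alone can act as an output filter (the toy corpus, the bigram statistics, and the scoring below are illustrative stand-ins for the vastly richer regularities an LLM picks up):

```python
# A minimal sketch: a bigram model estimated from mere exposure to a tiny
# corpus is used to score word strings, penalizing those that violate its
# implicit regularities. Corpus and smoothing are illustrative assumptions.
from collections import Counter
import math

corpus = [
    "the cat is on the mat",
    "the dog is on the rug",
    "the cat sees the dog",
    "a dog sees a cat",
]

bigrams, unigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words[:-1])
    bigrams.update(zip(words[:-1], words[1:]))

def log_prob(sentence, alpha=0.1):
    """Add-alpha smoothed bigram log-probability: a purely distributional score."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(unigrams) + 1
    return sum(
        math.log((bigrams[(u, v)] + alpha) / (unigrams[u] + alpha * vocab))
        for u, v in zip(words[:-1], words[1:])
    )

for candidate in ["the cat is on the mat", "mat the on is cat the"]:
    print(f"{candidate!r}: {log_prob(candidate):.1f}")
# The scrambled string scores far lower, so a generator biased by these
# statistics would filter it out -- with no explicit grammar rules anywhere.
```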

ChatGPT: Let’s explore this hypothesis of a form of “iconicity” inherent in the patterns of propositions detectable at the LLM scale, and how this might enhance GPT-4’s performance despite its ungroundedness:

Propositional Iconicity at LLM Scale: The concept of propositional iconicity in LLMs suggests that at a large scale, the patterns in which propositions are structured and related might exhibit a form of regularity or constraint that GPT-4 can detect. This isn’t iconicity in the traditional sense of resembling the content of propositions but rather a detectable pattern in the way propositions are formed and interconnected within the vast LLM dataset.

This pattern or regularity could indeed act as a tacit constraint or bias reflective of human communication patterns, shaping GPT-4’s outputs in a way that aligns with these inherent structures.

Comparison with Ordinary Grammar (OG) Learning: Just as GPT-4 can implicitly learn the constraints and rules of OG from its training data, it might also be picking up on deeper, less explicit semantic regularities present in propositional space. This learning is likely facilitated by the sheer scale of GPT-4’s database, which provides a wide and varied sample of human language use.

In contrast, human children learn OG from a much smaller dataset, relying more on unsupervised and imitation learning, supplemented by some degree of supervised learning (corrective feedback). This process indicates the inherent capacity of human brains for language acquisition, potentially guided by an innate linguistic framework like Chomsky’s Universal Grammar (UG).

Learning Constraints and Limitations in AI: In AI, constraints and limitations on output can indeed be learned mostly through unsupervised learning, where the system detects patterns and regularities in the data. This process can effectively bias the AI’s output, filtering out responses that do not align with the learned patterns.

In GPT-4’s case, both unsupervised learning (through passive exposure to vast text data) and supervised learning (through training involving corrective feedback) contribute to its ability to process language and generate coherent outputs.

Implications for GPT-4’s Performance: The potential presence of propositional iconicity at the LLM scale could be enhancing GPT-4’s performance by providing a set of implicit guidelines or biases that shape its responses. This mechanism could partially compensate for the lack of direct grounding by ensuring that GPT-4’s outputs remain within the bounds of typical human linguistic patterns, even if it doesn’t ‘understand’ them in a human sense.

This analysis may help in understanding how GPT-4 and similar LLMs manage to produce linguistically coherent and contextually appropriate outputs. The structure and patterns inherent in the training data may help shape the AI’s capabilities, potentially revealing underlying regularities in human language use that are not immediately apparent at smaller scales.

HARNAD: Before I get to Chomsky’s “thinkability” hypothesis, there is another form of potential “iconic” bias I want to point out: “categorical perception.” 

First, the human case: The most celebrated example is the categorical perception of color: the rainbow effect. The physical basis of the humanly visible spectrum is the wave-length continuum of light: the range from about 380 nm to 750 nm, between ultraviolet and infrared. Based on the (erroneous) “Strong Whorf-Sapir Hypothesis,” it was first thought that how humans see the spectrum is determined by language: by how we subdivide and name segments of the spectrum. If our language has a word for blue and for green, we will see blue and green as qualitatively different colors; if not, we will see that entire segment of the spectrum as all “bleen” (or “grue”).

It turned out, however, from psychophysical testing worldwide, that although languages do differ somewhat in how they subdivide and label the spectrum verbally, everyone perceives the color spectrum much the same way: equal-sized (log) differences between pairs of wave-lengths within the green range and within the blue range both look smaller than the same-sized difference when it straddles the blue-green boundary. And this is true irrespective of whether a language has a different word for green and for blue. The (primary) colors of the rainbow, and their feature-detectors (cone receptive fields and paired opponent-processes) are innate, not learned.

But the “Weak Whorf-Sapir Hypothesis” — that how we categorize and name things can influence how we perceive them (which is also mostly false for the primary colors in the rainbow) — turns out to be true in other sensory modalities. The term “categorical perception” (CP) refers to a between-category separation and within-category compression effect that occurs in perceived similarity. Although this CP effect is much weaker and more subtle, it resembles the rainbow “accordion” effect, and it can be induced by learning and naming categories by sensorimotor feature-detection. The term was first coined in the case of the perception of speech sounds (phonemes): phoneme CP occurs along the (synthesized) ba/da/ga continuum, which is analogous to the wave-length continuum for color.

Phoneme CP is a “mirror-neuron” (production/perception) phenomenon. Unlike color, which humans can perceive but which their bodies (unlike those of chameleons and octopuses) cannot produce [without synthetic tools], phonemes are both perceived and produced. There is a CP separation/compression (“accordion”) effect across the ba/da/ga boundaries; it is learned, and it varies across languages (although it has an innate component as well, with inborn feature-detectors that fade after a critical period if they are not used in your language). And phoneme CP is present in both the perception and the production of phonemes, which is what makes it a mirror-effect.

The subtlest and weakest, yet the most interesting, learned-CP effect, however, is not observed along sensory-motor continua at all, but in a multidimensional space of mostly discrete features. Nor is learned CP a mirror-neuron phenomenon at the direct sensorimotor category-learning level – although we will return to this when we get to the indirect verbal category-learning level.

Learned CP occurs with difficult category learning tasks, such as learning to identify cancerous vs. non-cancerous cells. In such cases the CP effect is revealed by testing the pairwise perceptual discriminability of examples, within and between the categories, before and after learning the category: A very subtle but significant between-category separation and within-category compression effect in perception is observable (both psychophysically and psychophysiologically) after the categorization has been learned: a very subtle learned-rainbow (accordion) effect.

The reason learned CP is relevant here is that although it is not obvious why one needs to learn to modify perceptual similarity to learn a difficult category, a very similar effect occurs in deep-learning nets too – and nets do not perceive anything at all! They just process input data. The CP effect occurs in their internal representation of each input sample, which changes from before category learning to after category learning: There emerges a clear between-category separation and within-category compression effect in the internal representation space.
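
Here is a minimal sketch of how such an effect can be measured in a toy network (the architecture, the synthetic data, and the “CP index” below are illustrative assumptions, not the models or measures of the studies cited next): pairwise distances between hidden-layer representations are compared within and between the two categories, before and after supervised category training.

```python
# A minimal sketch of learned CP in a toy net: between-category separation and
# within-category compression in the hidden representation, before vs. after
# supervised category learning. All details are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def make_data(n=200):
    X = rng.normal(0, 1, (n, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # category depends on 2 of 10 features
    return X, y

X, y = make_data()
W1 = rng.normal(0, 0.5, (10, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros(1)   # output unit

def hidden(X):
    return np.tanh(X @ W1 + b1)

def cp_index(H, y):
    """Mean between-category distance minus mean within-category distance."""
    D = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    diff = y[:, None] != y[None, :]
    return D[diff].mean() - D[same].mean()

print("CP index before learning:", round(cp_index(hidden(X), y), 3))

lr = 0.1
for epoch in range(2000):
    H = hidden(X)
    p = 1 / (1 + np.exp(-(H @ W2 + b2)))   # predicted P(category 1)
    err = p - y[:, None]                   # error-corrective feedback signal
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * (1 - H ** 2)         # backprop through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Typically the index grows: between-category separation and within-category
# compression emerge in the internal representation after category learning.
print("CP index after learning:", round(cp_index(hidden(X), y), 3))
```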

So the question is this: Might this representational change (based, in humans and T3s, on direct perceptual learning to detect and abstract category-distinguishing sensorimotor features) carry over to purely indirect verbal category learning via feature descriptions for GPT, in place of the sensorimotor feature-detectors it lacks? You cannot perceive, GPT, but might the purely computational CP effect also produce in you a bias that partly compensates for your lack of direct sensorimotor grounding?

ThĂ©riault, C., PĂ©rez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567.

Bonnasse-Gahot, L., & Nadal, J. P. (2022). Categorical perception: a groundwork for deep learning. Neural Computation, 34(2), 437-475.

Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PLoS One, 14(12), e0226000.

ChatGPT: Let’s examine the potential for a computational form of CP in AI and its implications:

Categorical Perception in Humans: CP in humans is a phenomenon where our perception of categories is influenced by our learning and naming of these categories. The classic examples you mentioned, such as color perception and phoneme discrimination, demonstrate how our perceptual experiences are shaped by learned categories.

In the case of complex categorization tasks, like distinguishing cancerous from non-cancerous cells, learning induces subtle perceptual changes that may lead to a form of CP even in non-obvious or abstract feature spaces.

Categorical Perception in Neural Networks: In neural networks, a similar effect has been observed, where the internal representation of inputs changes before and after category learning, showing a separation between categories and compression within categories. This effect in neural networks is purely computational and does not involve actual perception, yet it mirrors the CP effect in human cognition.

The cited studies (Thériault et al., 2018; Bonnasse-Gahot & Nadal, 2022) suggest that deep learning models can exhibit a form of CP at the level of their internal representations.

Potential CP Effect in GPT-4: Although GPT-4 does not perceive in the human sense, it’s possible that a computational analog of CP occurs within its processing. As GPT-4 learns from vast amounts of text data, it might develop an internal representation where verbal categories are separated and compressed in a way that reflects CP.

This computational CP effect could manifest as a bias in how GPT-4 processes and generates language, potentially compensating, to some extent, for its lack of direct sensorimotor grounding. It could influence GPT-4’s handling of categories, making its responses more aligned with human-like categorization patterns, even without direct experiential grounding.

Implications for AI Understanding and Performance: If GPT-4 does exhibit a form of computational CP, this could be a significant factor in its ability to generate coherent and contextually appropriate language outputs. It suggests that GPT-4’s internal processing might align, in certain respects, with human cognitive patterns, despite the fundamental differences in how AI and humans process information.

In summary, the hypothesis that AI systems like GPT-4 could exhibit a computational form of CP suggests that even in the absence of direct experiential perception, AI may be able to develop internal patterns of representation that mirror human cognitive phenomena. This could prove relevant to understanding the capabilities and limitations of AI in processing and generating language.

HARNAD: We have now discussed several possible sources of constraints or biases that might help an LLM do better than one would have expected given that LLMs are not grounded. How many of these can you remember, and how would you put them together?

ChatGPT: Our discussion covered several potential sources of constraints or biases that could enhance the performance of Large Language Models (LLMs) like GPT-4, despite their lack of sensorimotor grounding. Here’s a summary of the key points we discussed:

Patterns in Propositional Structures: We explored the idea that there might be a form of “iconicity” at the level of propositional structures in LLMs. This isn’t iconicity in the traditional sense but rather a detectable pattern in the way propositions are formed and interconnected within a large dataset. This could act as a tacit constraint or bias, shaping LLM outputs to align with these inherent structures.

Ordinary Grammar (OG) Learning: LLMs implicitly learn the constraints and rules of OG from their training data, picking up on the regularities and limitations of human language. This learning process, which involves both unsupervised and supervised methods, helps LLMs generate linguistically coherent outputs that adhere to the structural norms of language.

Categorical Perception (CP) in Neural Networks: We discussed how neural networks, including LLMs, might exhibit a computational form of CP. This phenomenon, observed in human perception, involves a separation between categories and compression within categories. In LLMs, this could manifest as a bias in internal representation, potentially influencing how the model processes and generates language.

Learning Constraints and Limitations: The potential presence of these constraints and biases in LLMs suggests that their processing might be influenced by patterns that are not immediately apparent but are inherent in the language data they are trained on. These constraints might help LLMs produce outputs that are more aligned with human cognitive and linguistic patterns.

Putting these elements together, we see a picture where LLMs like GPT-4, despite their lack of direct experiential grounding, are potentially guided by a set of implicit guidelines or biases derived from the structure and content of their training data. These guidelines may include propositional patterns, grammatical regularities, and a form of computational CP, all contributing to the model’s ability to process language in a way that appears coherent and contextually appropriate to human users.

Each of these factors represents a different dimension of how LLMs could be ‘shaped’ or ‘biased’ by their training, compensating in part for the lack of sensorimotor grounding and providing a framework for understanding the capabilities and limitations of these models in language understanding and generation.

HARNAD: What about the teacher/learner asymmetry and the need for grounding? In the context of direct sensorimotor grounding and indirect verbal grounding, do both teacher and learner need direct grounding? If not why not?

ChatGPT: Direct Sensorimotor Grounding (DSG): DSG involves connecting words and concepts to real-world experiences and understandings through direct sensorimotor interactions, especially category learning. It’s a fundamental process for humans, enabling them to learn and understand the world around them.

In human language learning, DSG is essential, particularly in early development. Children learn to associate words with their referents and meanings through direct experiences — seeing, touching, hearing, and manipulating objects and interacting with people.

Indirect Verbal Grounding (IVG): IVG involves understanding concepts and words through descriptions, explanations, or narratives, rather than direct experience. For IVG to be effective, the learner must already have a grounding in a base set of concepts and words.

In humans, once a foundational vocabulary and set of concepts are grounded through DSG, further learning can occur through IVG. This process allows individuals to understand complex or abstract concepts that they might not experience directly.

The Asymmetry in Grounding Needs: Humans, as learners, initially require DSG to establish a foundation of understanding. Once this base is established, they can effectively use IVG to expand their knowledge and understanding.

AI systems like GPT-4, however, do not have the capacity for DSG. They are entirely dependent on IVG, processing and generating language based on the patterns and structures learned from their training data. However, this training is itself a form of IVG, as the AI never directly experiences the concepts it is learning about.

Implications of the Asymmetry: This asymmetry means that while human learners can understand and contextualize information received indirectly (through IVG), because it is built upon a foundation of direct experiences, AI systems like GPT-4 cannot. They lack the foundational experiential understanding that comes from DSG.

Consequently, GPT-4 and similar AI systems can provide information, generate coherent narratives, and even mimic certain aspects of human understanding, but they do so without the underlying experiential context that humans possess. Their ‘understanding’ is purely computational, derived from patterns in the data, without the grounding that comes from direct sensorimotor experiences.

Does IVG Need DSG? In humans, IVG is effective because it stands on a foundation of DSG. The initial sensorimotor experiences provide the essential grounding that makes further learning through language meaningful and contextual.

For AI, IVG does not build upon DSG, as it lacks sensorimotor capabilities. This limitation is a fundamental reason why AI, despite its advanced linguistic capabilities, does not truly ‘understand’ in the human sense.

HARNAD: How is the DSG/IVG distinction related to the existence of Dictionary MinSets?

ChatGPT: Nature of Dictionary MinSets: Dictionary MinSets refer to the smallest subset of words from which all other words in a dictionary can be defined. These core words in the MinSet serve as a foundational vocabulary for constructing and understanding the rest of the lexical items in the language.

DSG and Foundation of MinSets: In human language acquisition, the initial grounding of words — particularly those in the MinSet — often occurs through DSG. Young learners associate words with their meanings through direct experiences with the world. This foundational grounding is crucial, as these MinSet words often form the basis for defining more complex or abstract terms.

The grounding of MinSet words via DSG provides the necessary experiential basis that allows for a deeper understanding of language and facilitates the effective use of IVG later on.

IVG and Expansion Beyond MinSets: Once a foundational vocabulary (akin to a MinSet) is grounded through DSG, humans can expand their language comprehension and vocabulary through IVG. This process involves learning new words and concepts through descriptions, narratives, and linguistic contexts, building upon the directly grounded base.

In this sense, IVG relies on the foundation established by DSG. The grounded understanding of MinSet words enables individuals to grasp more complex concepts that are defined or described using these foundational words.

AI’s Dependence on IVG Without DSG: AI systems like GPT-4 operate almost entirely within the realm of IVG. They process and generate language based on the patterns learned from extensive text data, but without the foundational grounding provided by DSG.

While AI can manipulate language based on MinSets and use these core words to construct coherent narratives or explanations, it lacks the experiential understanding that humans develop through DSG. This limitation affects the depth and contextual richness of AI’s ‘understanding’ of language.
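(An illustrative aside: a dictionary can be treated as a directed graph from each word to the words used in its definition; a grounding set is then a set of words whose grounding breaks every definitional circle, after which every remaining word can be reached through definitions alone, i.e., by IVG. The sketch below is a toy approximation, assuming networkx and a hypothetical seven-word mini-dictionary; the greedy heuristic is an assumption, not the published MinSet algorithm.)

```python
# Toy sketch (hypothetical mini-dictionary, not a real lexicon): approximate a
# grounding set by greedily breaking all definitional cycles, then verify that
# every other word becomes definable from it.
import networkx as nx
from collections import Counter

toy_dictionary = {
    "animal": ["living", "thing"],
    "dog":    ["animal", "bark"],
    "bark":   ["sound", "dog"],
    "sound":  ["thing"],
    "living": ["thing"],
    "thing":  ["object"],
    "object": ["thing"],          # definitional circle: thing <-> object
}

# Directed graph: word -> words used to define it.
G = nx.DiGraph()
G.add_edges_from((w, d) for w, defs in toy_dictionary.items() for d in defs)

# Greedily ground words until no definitional cycles remain
# (a feedback-vertex-set heuristic, not guaranteed to be minimal).
minset, H = set(), G.copy()
while True:
    cycles = list(nx.simple_cycles(H))
    if not cycles:
        break
    counts = Counter(w for cycle in cycles for w in cycle)
    word = counts.most_common(1)[0][0]
    minset.add(word)
    H.remove_node(word)

# Check: every other word is now learnable through definitions alone (IVG).
known, changed = set(minset), True
while changed:
    changed = False
    for word, defs in toy_dictionary.items():
        if word not in known and all(d in known for d in defs):
            known.add(word)
            changed = True

print("grounding set:", sorted(minset))
print("everything else definable from it:", known == set(toy_dictionary))
```

Running it grounds one word per definitional circle and then confirms that all the other words become definable from them, the toy analogue of reaching the rest of the lexicon from a grounded MinSet via IVG.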

HARNAD: DSG is based on learning to detect the sensorimotor features that distinguish members from non-members of a category directly, using something like neural nets to detect and abstract those sensorimotor features. IVG does not use the distinguishing sensorimotor features themselves, but their names and verbal descriptions. These feature names need to have already been grounded (directly or indirectly) in the learner to allow the teacher’s verbal description to be understood by the learner, empowering the names and descriptions to ground (indirectly), in the learner, the new category of which they are the features. But note that they need not have been grounded in the teacher, if the teacher happens to be a textbook, or GPT. This is the teacher/learner asymmetry.

ChatGPT: Direct Sensorimotor Grounding (DSG): DSG involves learning to recognize and distinguish members of a category based on direct sensorimotor experiences. This process typically uses neural network-like mechanisms in the brain to detect, abstract, and process the sensorimotor features that are characteristic of a category.

In DSG, the learning is experiential and embodied. For instance, learning to categorize different types of fruit involves directly interacting with them — seeing their colors, feeling their textures, smelling their scents, and tasting their flavors. These experiences help to establish the sensorimotor features that define each fruit category.

Indirect Verbal Grounding (IVG): IVG, on the other hand, uses verbal descriptions and names of features to convey category information. This process relies on language as a medium for grounding, where the learner comes to understand and categorize entities based on descriptions provided by others.

For IVG to be effective, the learner must already have a grounded understanding of the words used in the descriptions. This grounding can come from previous DSG experiences or from earlier instances of IVG that were built upon foundational DSG. The descriptions effectively relay category information by using language to encapsulate the distinguishing features of a category.

The Complementary Nature of DSG and IVG: DSG and IVG are complementary processes in human learning. DSG provides a fundamental experiential basis for understanding categories, while IVG allows for the expansion of knowledge beyond direct experiences, enabling the understanding of abstract concepts, complex categories, or entities not directly accessible.

Implications for AI Systems: AI systems, such as LLMs like GPT-4, operate in the realm of IVG. They process and generate descriptions of categories based on the linguistic data they are trained on. However, lacking the capability for DSG, AI systems do not have an experiential basis for these categories. Thus, their processing of category information, while linguistically coherent, lacks the depth of understanding that comes from direct sensorimotor experiences.

HARNAD: Ok, just one more clarification before we move on: The Symbol Grounding Problem (SGP) is already inherent in the use of a dictionary: Every word has a definition in the dictionary. That’s IVG. So one can in principle learn the meaning of any word in a dictionary from its definition. But that process, though recursive and transitive, is circular: You can cycle endlessly through a dictionary of a foreign language without learning a thing unless you can break out of the word-word circle by (somehow) learning the referents and the meanings of the words of at least one MinSet in another way (e.g., through DSG). Can the biases we discussed, which are inherent in an LLM’s database, break out of that circle — for the LLM? The teacher/learner asymmetry makes this unnecessary for the grounded human learner, who can learn new categories via IVG from an LLM. But does the LLM itself remain stuck in a vicious circle no matter how big its database is? Or can some of the biases we discussed help it break out? (I think not; I think the biases can only increase the constraints on what the LLM says, and that this may help explain how come LLMs can do so unexpectedly well despite that teacher/learner asymmetry.) — Please reply critically, not with compliance and compliments.

ChatGPT: I’ll critically examine whether the biases inherent in an LLM’s database can enable it to break out of the circularity inherent in purely IVG-based learning:

The Recursive Nature of Dictionary Definitions: As you rightly pointed out, dictionary definitions represent a form of IVG. They can potentially lead to a circular process where understanding one word requires understanding another, and so forth. This circularity is navigable for humans because they can link words to real-world experiences or previously grounded concepts (DSG).

LLMs and the Limitations of IVG: LLMs, such as GPT-4, rely entirely on IVG for ‘learning.’ They process language based on patterns found in vast textual datasets. However, these models lack the capacity for DSG; they cannot have real-world experiences or form sensorimotor associations with the words and concepts they process.

Consequently, LLMs are, in a sense, stuck in a circular process. They can manipulate language and provide information based on the patterns they’ve learned, but they do not have an independent means of grounding these symbols in real-world experiences.

Impact of Inherent Biases in LLMs: The biases and patterns inherent in an LLM’s database, such as propositional patterns, grammatical regularities, and computational forms of CP, enhance the model’s performance. They help LLMs generate coherent, contextually appropriate language that aligns with human linguistic patterns.

However, these biases and patterns do not equate to genuine understanding or grounding. They improve the LLM’s ability to mimic human-like language processing, but they do not provide a means for the LLM to break out of the IVG circle. The LLM’s ‘understanding’ remains symbolic and computational, lacking the experiential basis that characterizes human understanding.

Conclusion: The LLM remains within the confines of IVG, unable to independently achieve DSG. The biases in its dataset enhance its linguistic performance but do not provide a pathway to genuine grounding. The LLM’s proficiency is a product of its design and training, allowing it to navigate and utilize language effectively, but it does not transcend the fundamental limitation of lacking direct experiential understanding.

LLMs like GPT-4 do exhibit remarkable linguistic capabilities, but these are not indicative of true grounding or understanding. The enhancements provided by biases in their training data improve performance but do not resolve the SGP for the LLM itself. The LLM’s operation remains within the realm of sophisticated pattern recognition and language manipulation, distinct from the grounded understanding inherent in human cognition.

HARNAD: Now let me mention another candidate bias, due to Noam Chomsky: As I’ve already mentioned, OG (Ordinary Grammar) rules vary from language to language and can change over time; but UG (Universal Grammar) rules (hierarchical structure and structure dependence) are not only universal to all languages and unchanging, but, unlike OG, which is learned and learnable (by unsupervised and supervised learning), UG is unlearned and unlearnable, because of the Poverty of the Stimulus (POS): UG violations are never heard or produced by children or adults, and must hence, according to Chomsky, already be “known” innately; UG constraints must be somehow already encoded in our brains (and genes) at birth.

Chomsky has suggested that as one possibility. But he has also suggested a second possibility: that UG rules may not really be grammatical rules; rather, they are constraints on thought: UG is never violated (except by linguists deliberately trying to determine what the UG rules are) because (according to this second conjecture of Chomsky’s) UG-violating sentences would not be “thinkable”: UG is a constraint on any possible language, which must be able to express any possible thought. When a linguist generates a sentence that violates UG, it sounds wrong (according to this second hypothesis) not just because it is ungrammatical (like an OG violation), but because it is unthinkable.

GPT makes neither OG errors nor UG errors. If more texts in the database violated OG (as in “between you and I”), then GPT would produce some OG violations too (and the error, made widely enough, would eventually be accepted as correct). But could the reason GPT does not produce UG violations either be because all text databases, produced by thinking humans, are “biased” by the laws of thought that filter out UG-violations in production as well as perception? 
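(An illustrative aside: a toy sketch of the statistical point, assuming a hypothetical three-sentence corpus, with a bigram counter standing in for an LLM. A purely distributional learner reproduces whichever usage dominates its training data, so if the data contained mostly “between you and I”, that is what it would produce.)

```python
# Toy sketch (hypothetical corpus): a bigram model's "grammar" is whatever
# usage dominates its training data, correct or not.
from collections import Counter, defaultdict

corpus = (
    "between you and me this stays here . "
    "between you and me nothing is certain . "
    "keep this between you and me . "
).split()

# Bigram counts: next-word distribution conditioned on the current word.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

# The most probable continuation of "and" reflects the corpus, not grammar books.
print(bigrams["and"].most_common(1))   # -> [('me', 3)]
```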

And if so, might that be yet another reason LLMs can do what they do so remarkably well? Because at LLM scale, they’ve picked up on some of the constraints on thinkability?

Again, no yea-saying or compliments: I want only critique and counter-examples. 

ChatGPT: Your proposition regarding the influence of Universal Grammar (UG) and its potential impact on LLMs like GPT-4 suggests that the absence of UG violations in LLM outputs might be attributable to the nature of their training data, which inherently adheres to the laws of thought as proposed by Chomsky. Let’s critically examine this idea:

UG as Laws of Thought: Chomsky’s hypothesis that UG rules might not just be grammatical but also fundamental laws of thought implies that any naturally occurring language (and hence any naturally produced text) would conform to these laws. UG violations are rare in natural language use because such constructions would be counterintuitive or “unthinkable” for human cognition.

Impact on LLM Training Data: Since LLMs like GPT-4 are trained on vast corpora of human-generated text, it follows that the training data would inherently adhere to UG principles. The absence of UG violations in the training data would mean that LLMs are less likely to generate such violations in their outputs.

This adherence to UG in the training data could be seen as a form of bias – not in the negative sense, but as an implicit shaping influence that guides the LLM’s language processing in line with fundamental cognitive constraints.

Why LLMs Avoid UG Violations: The reason LLMs do not produce UG violations could indeed be partly because their training data, reflecting human language use, is already biased by the laws of thought. They learn to replicate patterns and structures that are prevalent in the data, which naturally conform to UG.

This hypothesis suggests that LLMs’ success in producing grammatically and semantically coherent outputs is not solely a product of their architecture or training algorithms, but also a reflection of the intrinsic properties of the language data they are trained on.

Counter-Consideration: However, it’s important to consider that LLMs like GPT-4 do not “understand” UG in the way humans do. Their avoidance of UG violations is a byproduct of their training rather than an innate cognitive constraint. This means that while LLMs can replicate language patterns that adhere to UG, they do so without an understanding of the underlying principles or the cognitive reasons for these patterns.

Conclusion: The hypothesis that LLMs perform so well partly because their training data is shaped by the laws of thought (UG) is plausible: the human cognitive constraints reflected in natural language would then contribute to the efficacy of LLMs in language processing.

This does not imply, however, that LLMs have an intrinsic understanding of UG or the laws of thought. Their performance is contingent on the nature of their training data and their ability to replicate patterns within that data.

HARNAD: Among the possible constraints or biases, you forgot to mention mirroring: the relationship between perception and production (as in mirror-neurons and imitation). Do you think that any such effects are at play in your verbal interactions with users and your database?

ChatGPT: Let’s examine how the concept of mirroring, particularly as it relates to the relationship between perception and production, might influence LLM interactions and processing:

Mirroring and Mirror Neurons in Humans: In human cognition, behavior mirroring, often associated with mirror neurons, plays a crucial role in learning through observation and imitation. These neurons are active both when an individual performs an action and when they observe the same action performed by someone else. This mechanism is somehow involved in learning new skills, understanding others’ actions, and recognizing as well as empathizing with their emotions.

In language, this mirroring effect is seen in how children learn to speak and understand language by imitating the sounds, words, and sentences they hear from adults and peers.

Mirroring in LLMs: For LLMs like GPT-4, the concept of mirroring could perhaps be considered in a metaphorical sense. LLMs ‘learn’ language by processing vast amounts of text data, essentially ‘observing’ human language use. They then ‘imitate’ this language use in their outputs.

This process is akin to mirroring in that the LLM reflects the patterns, styles, and structures present in its training data. However, it’s important to note that this is not mirroring in the neurological sense, as LLMs do not have sensory experiences or consciousness.

Mirroring Effects in Verbal Interactions: In interactions with users, an LLM might exhibit a form of ‘mirroring’ by generating responses that reflect the user’s language style, tone, or content. This is not due to an empathetic or cognitive understanding but rather a result of the model’s design to generate contextually relevant and coherent language outputs based on the patterns it has learned.

The LLM’s ability to adapt its responses to the style or content of the user’s input could be seen as a computational parallel to human mirroring in language use. However, this adaptation is driven by algorithms and statistical patterns, not by an intrinsic understanding or mimicry based on social or emotional cues.

HARNAD: And the mirroring of the propositional iconicity we discussed?

ChatGPT: Mirroring Propositional Iconicity: The concept of propositional iconicity at the LLM scale suggests that certain patterns and structures in language, particularly at the propositional level, are consistently represented in the training data. These patterns may not be iconic in the traditional sense but could represent a form of ‘iconicity’ in terms of regularities or typical structures in language use.

In mirroring this propositional iconicity, LLMs would essentially be reflecting these patterns in their outputs. The model ‘learns’ from the vast amounts of text it processes, internalizing the common structures, relationships, and thematic elements that characterize human language at a large scale.

Computational Mirroring vs. Cognitive Mirroring: This form of mirroring in LLMs is computational. It is based on the statistical properties of the language data and the algorithms used for language generation, rather than a cognitive or experiential understanding of language.

Unlike the mirroring effect mediated by mirror neurons in humans, which involves cognitive and empathetic processes, the mirroring in LLMs is a reflection of the data-driven patterns they are trained on. It’s a form of pattern replication rather than cognitive empathy or understanding.

Impact on LLM Performance: By mirroring the propositional iconicity present in their training data, LLMs can generate responses that align with the structural and thematic norms of human language. This ability contributes to the apparent coherence and relevance of their outputs.

However, it’s crucial to keep in mind that this mirroring does not imply any deeper understanding or grounding in the content being processed. The LLM’s outputs remain the product of sophisticated data processing and pattern recognition.

HARNAD: In language exchanges between humans, production (meaning) and perception (understanding) are mirror capacities, and all their content-words are grounded, directly or indirectly, in their referents. This is what keeps speaker meaning and hearer understanding aligned (and grounded). What happens to this, in each direction, when the exchange is between GPT and a human? (I am again reflecting on potential “biases” that might explain GPT’s unexpected productive (meaning) capacity and “perceptive” (“understanding”) capacity.)

ChatGPT: When considering language exchanges between a human and GPT-4, the dynamics of production, meaning, perception, and understanding undergo significant alterations compared to exchanges between humans. These changes are pivotal to understanding both the capabilities and limitations of GPT-4 in such interactions:

Human-to-GPT-4 Communication: Production and Meaning (Human): The human speaker produces language grounded in their direct or indirect experiences. Their meanings are embedded in a rich context of sensorimotor experiences, emotions, cultural backgrounds, and personal history.

Perception and “Understanding” (GPT-4): When GPT-4 ‘receives’ the human’s language, it processes the text based on learned patterns and statistical relationships in its training data. However, this processing lacks true understanding. GPT-4 does not perceive or comprehend language in the human sense; it does not experience the meaning behind words or grasp the full context in which they are used. It ‘mirrors’ the language based on its programming and training, but without the grounding or experiential context that human understanding relies upon.

GPT-4-to-Human Communication: Production and “Meaning” (GPT-4): GPT-4 generates responses based on its programming and the vast array of text it has been trained on. These responses can be coherent and contextually appropriate, but they are not grounded in any real understanding or intention. The ‘meaning’ GPT-4 produces is a simulation based on patterns, not an expression of genuine thought or understanding.

Perception and Understanding (Human): The human listener perceives GPT-4’s output and interprets it based on their own understanding and grounding. Humans apply their experiential knowledge, context, and interpretive skills to make sense of GPT-4’s responses. They might attribute meaning or intention to GPT-4’s words, but this interpretation is a human projection onto essentially ungrounded output.

Alignment and Grounding in Human-AI Exchange: In human-human interactions, the alignment between speaker meaning and hearer understanding is facilitated by shared experiences, common grounding, and empathetic capabilities. In human-GPT interactions, this alignment is asymmetric.

The human provides grounded input, but GPT-4’s processing and output are not grounded in the same way. GPT-4 can mirror human language effectively, giving the illusion of understanding and alignment, but this is based on computational processes, not shared experiential grounding.

Implications of Potential Biases: The biases we discussed earlier (like propositional iconicity or patterns reflecting UG) might contribute to GPT-4’s effectiveness in these exchanges. These biases help GPT-4 generate linguistically coherent and contextually appropriate responses. However, they do not endow GPT-4 with true understanding or grounding.

These biases might explain GPT-4’s surprising performance in producing meaningful-sounding language and its apparent capacity to ‘understand’ or respond relevantly to human input. The effectiveness of GPT-4 in such exchanges is largely due to its ability to replicate and work within the patterns and constraints inherent in its training data, shaped by human language use.

In conclusion, the exchange between a human and GPT-4 in language interaction is characterized by a significant asymmetry in grounding and understanding. While GPT-4 can simulate meaningful language output and seemingly appropriate responses, these are the result of advanced pattern recognition and generation capabilities, not genuine understanding or shared experiential grounding. The human participant in the exchange brings their own experiential grounding to the interpretation of GPT-4’s outputs, often projecting meaning and intention where there is none. This asymmetry has important implications:

GPT-4’s Limitations in True Understanding: Despite its sophisticated language processing abilities, GPT-4 does not truly understand the content it generates or processes. Its responses are based on statistical correlations and learned patterns from its training data, lacking the experiential grounding that humans rely on for genuine understanding.

Human Interpretation of GPT-4’s Output: Humans naturally interpret GPT-4’s outputs through their own experiential lens, often attributing more meaning or understanding to these outputs than is actually present. This interpretation is a testament to the human capacity for inferring meaning, even when interacting with an ungrounded system.

Role of Biases in GPT-4’s Performance: The biases and patterns inherent in GPT-4’s training data, shaped by human language, contribute to its ability to generate coherent and contextually appropriate responses. However, they do not compensate for the lack of experiential grounding. GPT-4’s performance, while impressive, remains within the domain of sophisticated language simulation.

HARNAD: Ok, I see from your repetitiveness that you don’t have any hunches to add to my own hunches here.  Just one correction: Don’t confuse IVG with “indirect experiences”. There is no such thing as indirect experience — there is only direct experience or words (i.e., verbal descriptions).

ChatGPT: Thank you for the correction. You are right in emphasizing the distinction between indirect verbal grounding (IVG) and the notion of “indirect experiences.” To clarify:

Direct Experience: This involves firsthand, sensorimotor interactions with the world. Direct experiences are the foundation of knowledge and understanding for humans, providing the sensory and contextual basis for learning and cognition.

Indirect Verbal Grounding (IVG): IVG pertains to learning and understanding concepts through verbal descriptions, explanations, or narratives. It does not involve direct sensorimotor experience with the referents of these descriptions. Instead, IVG relies on language as a medium to convey information about experiences, objects, or concepts.

In the context of AI, particularly LLMs like GPT-4, their learning and ‘understanding’ are solely based on IVG. They process and generate language based on textual data but do not have the capability for direct experiences. Their operation is confined to the realm of language processing, lacking the sensorimotor engagement that characterizes human interaction with the world.

HARNAD: Now a couple more distinctions. Then I’ll overview my MILA talk. And last, I’ll ask you to summarize your understanding of it (and any criticisms or objections you may have).

The first and most important distinction: This used to be called the distinction between “artificial intelligence” (AI) and “cognitive modelling” (CM). “AI” was taken to be the creation of artificial tools to be used by humans, and CM was taken to be the attempt to reverse-engineer the causal mechanisms that produce human cognitive capacity. Both are now part of “cognitive science.” But they are not the same thing — although it is perfectly possible that the capacity of an AI tool could sometimes be based on the same kind of causal mechanism as the human capacity. 

LLMs are currently being studied and interpreted in both ways (AI and CM), but when we are talking about “meaning” and “understanding” this only means human meaning and understanding because there is no evidence that there exists more than one kind of meaning/understanding! At the heart of this is the “hard problem” (HP) — which is not the symbol grounding problem (SGP), but the fact that it feels like something to say and mean something, and to understand what something means when it is said. Meaning and understanding are both grounded states and felt (sentient) states.

Please confirm that you “understand” the distinction between the SGP and the HP. (Again, please no introductory flattery about how “intriguing” and “insightful” and “compelling” and “nuanced” and …. etc. this all of is.)

ChatGPT: Understood. The distinction between the Symbol Grounding Problem (SGP) and the Hard Problem (HP) of consciousness is indeed crucial and distinct.


HARNAD: You just can’t help yourself, can you?


ChatGPT: Apologies.

Symbol Grounding Problem (SGP): The SGP pertains to the challenge of how symbols (words) in a computational system can acquire meaning. It’s about linking symbols to their referents in the real world or to concepts in a way that enables understanding. In the context of AI and cognitive modeling, the SGP addresses how an artificial system can process language in a manner that reflects human-like understanding.

Hard Problem of Consciousness (HP): The HP concerns why and how physical processes in the brain give rise to subjective experiences. It’s about understanding why certain brain functions are accompanied by an experience of ‘what it is like’ – the qualitative, experiential, or phenomenological aspects of consciousness. The HP delves into the nature of consciousness itself, asking why cognitive processing is accompanied by subjective experience.

Distinction between SGP and HP: The SGP is primarily a problem of representation and understanding in artificial systems: how a symbol (or word) can be meaningfully connected to what it represents.

The HP, on the other hand, is more profound and addresses the very nature of consciousness and experience. It asks why certain physical or cognitive processes are accompanied by subjective experiences.

In the context of LLMs and AI research, these two problems are often conflated but are fundamentally different. The SGP can be approached and potentially resolved through advances in AI and cognitive modeling. However, the HP poses a deeper philosophical and scientific challenge, one that extends beyond the current capabilities of AI technology and delves into the mysteries of consciousness itself.

An emphasis on these distinctions is important for framing discussions about AI, cognitive science, and their respective goals and limitations. 

HARNAD: Well, again a lot of words wasted on flattery and affirmation instead of critical analysis, but so be it. I’ve cut most of them out in this transcript. Let’s go on. Before I make the next point, please confirm that you “understand” the distinction between T2 and T3.

ChatGPT: Yes, I understand the distinction between T2 and T3:

T2 (Turing Test Level 2): This level of the Turing Test assesses an AI’s ability to mimic human-like verbal abilities. In the original 1950 T2 scenario, an AI is considered to pass the test if its responses in a conversation are indistinguishable from those of a human. This test focuses solely on linguistic capabilities and does not require the AI to have physical or sensory experiences.

T3 (Turing Test Level 3): T3 extends beyond verbal capabilities to include robotic, sensorimotor capacities. An AI at this level not only needs to demonstrate Turing-indistinguishable verbal abilities (like T2) but also must exhibit the ability to understand and interact with the physical world in a manner indistinguishable from a human. This level requires a form of embodied cognition, where the AI has sensory experiences and can physically interact with its environment, grounding its linguistic capabilities in these experiences.

The distinction between T2 and T3 is significant in discussions about AI capabilities, particularly in terms of understanding and interacting with the world. T2 systems, like LLMs, are limited to processing and generating language based on learned patterns, while T3 robotic systems would theoretically have the added dimension of physical interaction and sensory experience.

HARNAD: Ok, but one nuance of the Turing Test, which is loud and clear in Turing’s 1950 article but usually overlooked or underestimated, is that Turing-indistinguishability means total indistinguishability: 

The candidate, whether T2 or T3, (1) must possess all the generic verbal capacities (T2) or all the generic verbal plus sensorimotor (robotic) capacities of a real human; and in addition (2) its performance must be indistinguishable from that of any normal human being, to any normal human being. If the indistinguishability is just fragmentary or partial, then we are just speaking of a “toy” model of a “toy” capacity. The degrees of freedom for modelling toy fragments of human performance capacity (chess-playing, translation, Alexa) are much wider than those for modelling total capacity, and may have nothing to do with human cognition. This may be useful for the creation of AI tools, but not for CM, which calls for T3-scale (hence Turing-indistinguishable) cognitive capacity.

 Please first confirm that you “understand” this distinction too. This distinction is best grasped from the CM rather than the AI point of view: It is the test of whether human T2 capacity or human T3 capacity has been successfully reverse-engineered. It is not about a 20-minute Loebner prize with 10 judges, but about lifelong T2 or T3 cognitive capacity.

ChatGPT: Yes, I understand the distinction you’re emphasizing regarding the Turing Test as described in Alan Turing’s original 1950 article. The key points are:

Total Indistinguishability: The Turing Test, in its most stringent interpretation, requires that an AI (whether T2 or T3) must possess all the generic verbal (T2) or verbal plus sensorimotor (T3) capacities of a typical human being. This means that the AI’s capabilities should cover the full range of human cognitive abilities in the respective domains (language for T2, and language plus sensorimotor interaction for T3).

Lifelong Performance Criterion: Furthermore, the AI’s performance must be indistinguishable from that of any normal human being to any normal human being. This criterion extends beyond short-term demonstrations like the Loebner Prize and implies a consistent, lifelong indistinguishability in performance capacity. The AI should be able to engage in human-like interactions and behaviors not just in controlled or limited settings, but in the full complexity and variability of everyday human life.

Reverse-Engineering Human Capacities: From the cognitive modeling (CM) perspective, this interpretation of the Turing Test can be seen as the ultimate test of whether human-like T2 or T3 cognitive capacities have been successfully reverse-engineered in an AI system. It’s not merely about creating an AI that can mimic human behavior in certain contexts or for limited durations, but about developing an AI that can genuinely replicate the full spectrum of human cognitive abilities in a way that is indistinguishable from a human across all contexts and over an extended period.

HARNAD: That was very good.  Now, whereas I have emphasized total capacity in both the T2 and the T3 case, there is a reasonable question about the extent of generic sensorimotor capacity in the case of T3, as in the example we discussed of Helen Keller. She was distinguishable from a normal person in that she was blind and deaf, but she clearly had all the sensorimotor capacity that was needed to ground T2 verbal capacity. Do you agree? 

And as such, Helen Keller was both (1) grounded and (2) Turing-indistinguishable from a normal person in her cognitive capacities (apart from the fact that she was blind and deaf). Hence, she was Turing-indistinguishable as surely as someone who was merely color-blind or tone-deaf would be. Are we still on the same wave-length?

ChatGPT: Yes, we are on the same wavelength regarding Helen Keller’s case in the context of Turing Test criteria and cognitive capacities:

Sensorimotor Capacity and Grounding in Helen Keller’s Case: Despite being blind and deaf, Helen Keller possessed the necessary sensorimotor capacities to ground her T2 verbal capacities. Her ability to touch, smell, and move provided her with sufficient sensory experiences to form a basis for understanding and interacting with the world. These experiences, though different in modality from those of individuals with full sight and hearing, allowed her to develop a grounded understanding of language and concepts.

Cognitive Capacities and Turing Indistinguishability: In terms of cognitive capacities, Helen Keller was indeed indistinguishable from a person with typical sensory abilities. Her intellectual and linguistic capabilities were fully developed, allowing her to communicate, reason, and understand at a level comparable to any other person.

Her condition did not impede her cognitive abilities; she was able to compensate for her lack of sight and hearing with her other sensory experiences and exceptional mental faculties. In this sense, she meets the criterion of being Turing-indistinguishable in terms of cognitive capacities.

Comparison with Other Sensory Limitations: Just as someone who is color-blind or tone-deaf is still considered Turing-indistinguishable in terms of overall cognitive capacities, so too was Helen Keller. Her blindness and deafness did not diminish her cognitive abilities; rather, they represented alternative sensory modalities through which she engaged with the world.

Conclusion: Helen Keller’s case exemplifies that cognitive capacities and grounding can be achieved through diverse sensory modalities. Her ability to understand and use language, to think and reason, and to interact meaningfully with her environment demonstrates that the essential criteria for Turing-indistinguishability can be met even with alternative forms of sensorimotor grounding.

Helen Keller’s achievements underscore the adaptability and resilience of human cognition, demonstrating that Turing-indistinguishability in cognitive capacities does not necessarily require the standard set of sensory experiences, but can be achieved through alternative means of sensorimotor grounding.

HARNAD: Good. (And Helen Keller also had normal human motor capacity, crucial for learning object-manipulation, and for category-grounding itself, because reinforcement learning requires motor action, to “do the right thing with the right kind of thing”, whether to eat it or to name it!)

 Now let me connect this with another point we discussed, which is that grounding can only be bottom-up, not top-down. (You can’t drop anchors from skyhooks.)

I am thinking here of those who have speculated that an LLM could somehow be grounded top-down by giving the LLM a camera, wheels and arms, and also some pattern-learning neural nets on its sensorimotor I/O. I think top-down grounding is incoherent, and that that speculation is parasitic on the bottom-up grounding of the human speculators, just as all use of GPT in IVG is parasitic on the user’s grounding.

In other words, it will not ground an LLM to add on robotic peripherals. The only path upward to IVG is bottom-up via DSG, category by category; and what that means is T3-indistinguishable sensorimotor robotic capacity. Even though a grounded MinSet (e.g., of N words) is in principle enough to ground all other words of a language through IVG alone, the sensorimotor path to acquiring a grounded MinSet is a lot more than just connecting a camera to N referents, or to N already-learned referent-categories (whatever that might mean). At best, that would just produce a toy capacity in that toy-world of N referents. We are talking of Turing-scale, just as when we talk of LLM-scale.

Again, I don’t want agreement: I want understanding and critical analysis of the notion of top-down “partial grounding” of LLMs.

ChatGPT: Your skepticism about the possibility of achieving true grounding in LLMs through top-down methods, such as adding robotic peripherals like cameras and manipulators, raises important questions about the nature of grounding and the capabilities of AI. 

Inherent Limitations of Top-Down Grounding: The proposal to ‘ground’ an LLM top-down by equipping it with sensory and motor apparatus (like cameras and robotic arms) seems intuitively appealing but faces deep conceptual and practical challenges. Grounding involves more than mere sensory-motor input and output; it encompasses the integration of sensorimotor experiences and learning with cognitive processes, contextual understanding, and the ability to interact meaningfully with the world.

Simply adding sensory and motor peripherals to an LLM does not automatically confer upon it the experiential, embodied understanding that characterizes human cognition. Without the foundational experiences and the ability to form meaningful connections between sensory inputs, actions, and concepts, such grounding remains superficial.

The Complexity of Sensorimotor Grounding: Human sensorimotor grounding is a complex, developmental process that involves not just the acquisition of sensory data but the active engagement with the environment through trial, error, and feedback.

This process is deeply intertwined with cognitive development, where learning to categorize and interact with the world is based on an evolving understanding of cause-and-effect, object permanence, spatial relationships, and more. It’s not merely the sensory perception of objects or categories but the understanding of how to interact with them, which actions lead to desired outcomes, and how they fit into a broader context.

The Challenge of Artificially Replicating Grounding: Attempting to replicate this process artificially in an LLM through top-down methods would require not only the integration of sensory and motor capabilities but also the development of an AI system capable of experiential learning, contextual understanding, and autonomous decision-making. This goes far beyond current AI capabilities.

The notion of top-down ‘partial grounding’ by adding robotic peripherals to an LLM might provide the system with certain basic sensorimotor interactions. However, this would fall short of the genuine grounding seen in humans. It would likely result in a very limited and context-specific understanding, akin to a ‘toy’ model, rather than the comprehensive, experientially rich grounding necessary for true understanding and interaction.

Bottom-Up Grounding as the Fundamental Path: The path to genuine grounding appears to be fundamentally bottom-up, starting from basic sensorimotor experiences and building towards more complex understandings. This process is iterative, contextual, and deeply connected to physical interactions with the world.

For an AI to achieve a similar level of grounding, it would likely need to undergo a developmental and learning process akin to that of humans or animals, where it learns from direct, embodied interactions with its environment over time. This is a vastly different approach than simply adding sensory inputs to an existing LLM.

Conclusion: While the idea of enhancing LLMs with robotic capabilities to achieve grounding is intriguing, the notion of top-down partial grounding seems limited in its potential to achieve genuine understanding. The complexity and depth of human grounding, rooted in embodied experiences and cognitive development, present significant challenges that are not easily addressed through such top-down approaches.

The critical analysis suggests that while adding robotic peripherals to LLMs might provide some interesting and useful capabilities, it would not be sufficient to achieve the kind of grounding necessary for Turing-scale understanding and interaction. True grounding, particularly for complex and abstract concepts, likely requires a more fundamental, bottom-up approach that encompasses not just sensory input but also the intricate processes of learning, contextualizing, and meaning-making that are central to human cognition.

HARNAD: A lot of vague and weasel-words there (“complex,” “contextualizing,” “meaning-making”) but the gist is simple: Grounding has to be bottom-up; so keep deep-learning (for category learning) but forget about LLMs! The only way to reach all those words is bottom-up, from the ground. Toy grounding of parts of LLMs might be helpful for certain AI uses of LLMs by humans, but not for cognitive modeling (CM) of human cognitive capacity and passing T3.

ChatGPT: Yes, the path to genuine grounding in AI systems must be fundamentally bottom-up, and while deep learning can be instrumental in category learning, relying solely on LLMs for modeling full human cognitive capacity, especially in the context of a T3 Turing Test, is insufficient.

Bottom-Up Grounding as a Necessity: True grounding, which involves the connection of words and concepts to real-world experiences and understandings, requires a bottom-up approach. This means starting from basic sensorimotor experiences and incrementally building up to more complex cognitive capacities.

Limitations of LLMs in Cognitive Modeling: LLMs, as they currently stand, are powerful tools for processing and generating language based on patterns learned from extensive text data. However, their capabilities are constrained to the realm of language manipulation and do not extend to genuine understanding or sensorimotor grounding. They cannot model the full spectrum of human cognitive capacities required for passing a T3 Turing Test.

Role of Deep Learning in Category Learning: Deep learning techniques are effective in identifying patterns and making predictions based on large datasets, and they have shown promise in category learning. However, this form of learning, as applied in LLMs, is still detached from the physical, sensorimotor experiences that are central to human grounding.

Potential Uses of Partially Grounded LLMs: While fully grounding an LLM in the human sense may not be feasible, there could be value in partially grounded systems for specific applications. Such systems might integrate some form of sensorimotor learning or interaction, albeit limited, to enhance their utility in certain tasks. However, this should not be misconstrued as equivalent to modeling human cognitive capacity or passing T3.

Conclusion: The pursuit of AI systems that can pass a T3 Turing Test or model human cognitive capacity in its entirety requires a focus on more than just linguistic capabilities. It necessitates a foundational rethinking of how AI systems are developed, moving beyond language processing to incorporate genuine sensorimotor experiences and learning. This is a significant challenge and points towards a future direction in AI research that goes beyond the current scope of LLMs. The development of such AI systems would require an integration of advanced sensorimotor capabilities with cognitive processing in a way that mimics the human developmental process of learning and understanding.

In summary, while LLMs represent a remarkable advancement in language processing, their capabilities are not sufficient for achieving the kind of sensorimotor grounding and holistic cognitive modeling necessary to fully replicate human understanding or to pass a T3 Turing Test. Future advancements in AI, particularly those aiming to model human cognition more completely, will need to address these limitations by incorporating bottom-up grounding processes that are rooted in direct, experiential learning and interaction with the physical world.

HARNAD: Oh, one thing I forgot to add was that in sensorimotor category learning, the neural net that learns by trial and error and corrective feedback to detect and abstract the features that distinguish members from non-members of the category is performing a kind of dimension-reduction on the multidimensional feature space. This increases the weight of the relevant features and reduces or ignores the irrelevant features. The between-category separation and within-category compression (CP) induced by this category learning, helps make the category “pop out” in perception. This too is a “shape” bias, and it could exert a guiding influence even on internal representations after learning.
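(An illustrative aside: a minimal sketch of the up-weighting/down-weighting point, assuming NumPy and scikit-learn, with hypothetical data in which only one of ten input dimensions distinguishes members from non-members; logistic regression stands in for the trial-and-error feature learner.)

```python
# Toy sketch (hypothetical data): category learning up-weights the one
# distinguishing feature and leaves the irrelevant dimensions near zero.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Ten-dimensional inputs; only feature 3 distinguishes members from non-members.
X = rng.normal(size=(500, 10))
y = (X[:, 3] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
weights = np.abs(clf.coef_[0])

print("weight on the relevant feature: ", round(float(weights[3]), 2))
print("mean weight on irrelevant ones: ", round(float(np.delete(weights, 3).mean()), 2))
```

The learned weight on the distinguishing dimension comes out much larger than the near-zero weights on the irrelevant ones: an implicit dimension reduction of the feature space.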

ChatGPT: Dimension Reduction in Neural Networks: In sensorimotor category learning, neural networks undergo a process of dimension reduction, where they learn to emphasize the features that are relevant for distinguishing between categories and de-emphasize or ignore irrelevant features. This process effectively restructures the feature space, making it more efficient for category recognition.

This alteration in the weighting of features is akin to increasing the signal-to-noise ratio for relevant category characteristics. It helps in making the categories more distinct and recognizable, facilitating quicker and more accurate categorization.

Perceptual ‘Pop Out’: The enhanced distinction between categories due to dimension reduction contributes to a phenomenon often described as a category ‘popping out’ in perception. This means that once the distinguishing features of a category are learned and emphasized, the members of that category become more readily perceptible and distinguishable from non-members.

This perceptual ‘pop out’ effect is a result of the brain’s or the neural network’s refined ability to process and recognize the essential features that define a category, making it easier to identify and respond to these features in future encounters.

Implications for AI and Internal Representations: In AI systems, particularly those involving deep learning for categorization tasks, a similar process of dimension reduction can occur. Neural networks in these systems learn to adjust the importance they assign to various input features, based on their relevance to the categorization task.

This process could lead to a form of internal representation in AI systems where categories are more distinctly represented post-learning, mirroring the ‘pop out’ effect seen in human perception. The internal feature space of the AI system becomes optimized for categorizing the learned entities, potentially enhancing its performance in tasks involving these categories.

Potential Guiding Effect of Shape Bias: The concept of a ‘shape’ bias in this context refers to the way category learning reshapes the feature space, emphasizing certain dimensions over others. This reshaping can guide the AI’s internal representations and decision-making processes after learning, making it more effective in recognizing and responding to category-relevant features.

In conclusion, the process of dimension reduction in neural networks, both biological and artificial, plays a crucial role in enhancing category learning and perception. This process not only optimizes the network’s ability to recognize categories but also shapes its internal representations in a way that can guide future processing and decision-making. This insight into the mechanics of category learning and representation is vital for understanding and improving AI systems designed for complex categorization tasks.

HARNAD: So, to summarize:

Although LLMs are not the path to grounding, the nine factors discussed here:

1.         propositional iconicity at LLM-scale,

2.         OG and stylistic regularity at LLM-scale,

3.         Dictionary Minimal Grounding Sets (MinSets),

4.         the shape of internal representations (CP separation/compression) induced by neural net category learning, 

5.         perception/production mirroring, 

6.         the power and complementarity of Indirect Verbal Grounding (IVG) on top of Direct Sensorimotor Grounding (DSG), 

7.         the complementarity of learned sensorimotor feature-detectors and verbal feature-descriptions (and the dimension reduction they induce),

8.         the dependence of T2 on T3, 

9.         and Chomsky’s hypothesis of constraints from “laws of thought” on the verbal expression of thought through language

might be generating some constraints and biases that explain some of the uncanny success of GPT in making and conveying sense. I think they tell us something not only about the power of Large Language Models, but about the power of natural language itself.

ChatGPT: Your summary points out the factors that could contribute to and help explain the unexpectedly impressive performance of Large Language Models (LLMs) like GPT-4. These factors, derived from our discussion, highlight both the capabilities of LLMs and the intrinsic properties of language and cognition. Let’s briefly reiterate these points:

Propositional Iconicity at LLM-Scale: The patterns in how propositions are structured at a large scale, which LLMs can detect and replicate.

OG and Stylistic Regularity at LLM-Scale: The inherent regularities in language usage and style that LLMs learn from their extensive training data.

Dictionary Minimal Grounding Sets (MinSets): The foundational sets of words that form the basis for defining all other words in a language, influencing LLM processing (a toy search for such a set is sketched after this reply).

Shape of Internal Representations Induced by Neural Net Category Learning: The separation and compression effects in neural networks’ internal representations due to category learning, akin to categorical perception in humans.

Perception/Production Mirroring: The mirroring of language patterns in LLM outputs, reflecting the input they are trained on.

Power and Complementarity of IVG on Top of DSG: The layering of indirect verbal grounding on foundational direct sensorimotor grounding.

Complementarity of Learned Sensorimotor Feature-Detectors and Verbal Feature Descriptions: The synergy between sensorimotor feature detection and linguistic descriptions, contributing to dimensional reduction and category distinction.

Dependence of T2 on T3: The reliance of linguistic capabilities (T2) on underlying sensorimotor interactions and experiences (T3).

Chomsky’s Hypothesis of Constraints from “Laws of Thought” on Language: The influence of fundamental cognitive constraints on the expression of language, potentially guiding LLM processing.

Together, these factors shed light on the mechanisms behind LLMs’ effectiveness in language tasks, as well as on the nature of language, cognition, and the interaction between them.

They suggest that the success of LLMs in language tasks is not just a testament to their design and training but also a reflection of the inherent power and structure of language itself. This analysis may help inform both the development of AI systems and our comprehension of human language and cognition.
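As a toy illustration of point 3 above (MinSets), here is a sketch with an invented mini-dictionary and a greedy approximation (not the exact graph-theoretic computation used on real dictionaries): it searches for a small set of words from which every other word can be reached through definitions alone.

# Invented mini-dictionary: word -> words used in its definition.
toy_dictionary = {
    "animal":  ["living", "thing"],
    "cat":     ["animal", "furry"],
    "dog":     ["animal", "furry"],
    "furry":   ["hair", "covered"],
    "pet":     ["animal", "kept", "home"],
    "living":  ["thing"],     # circular definitions, as in real dictionaries
    "thing":   ["living"],
    "hair":    ["thing"],
    "covered": ["thing"],
    "kept":    ["thing"],
    "home":    ["thing"],
}

def definable(grounded, dictionary):
    # Close `grounded` under definition: a word becomes definable once
    # every word in its definition is grounded or already definable.
    known, changed = set(grounded), True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and all(w in known for w in defn):
                known.add(word)
                changed = True
    return known

def greedy_minset(dictionary):
    # Greedily add the word that makes the most other words definable.
    minset, known, words = set(), set(), set(dictionary)
    while known != words:
        best = max(words - known,
                   key=lambda w: len(definable(known | {w}, dictionary)))
        minset.add(best)
        known = definable(known | {best}, dictionary)
    return minset

print(greedy_minset(toy_dictionary))  # a one-word set here, e.g. {'thing'}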

HARNAD:  I gave my Mila talk, and Yoshua Bengio disagreed with me that GPTs don’t really understand. He said instead that it was “another kind” of “understanding”. He also said that the very same means used to produce the performance capacity of purely verbal GPTs (T2) could be (and are being) extended to the processing of input from external sensing, and eventually probably motor output too.  The reason I disagree with Yoshua is that that amounts to trying to create a bottom-up T3 robot with the enormous GPT database, somehow weighing in, top-down, in its head. (We talked about this in connection with “partial” sensorimotor grounding, which I suggested was just toy grounding.) What do you think?

ChatGPT:  According to the prevailing view in my database, the debate over whether GPTs possess a form of “understanding” and the potential for extending GPT models to incorporate sensorimotor processing are both pivotal and complex issues in AI and cognitive science.

“Another Kind” of Understanding: Yoshua Bengio’s perspective that GPTs exhibit “another kind” of understanding reflects a viewpoint in the AI community that these models, while lacking human-like consciousness or sentience, can process and generate language in ways that mimic certain aspects of human understanding. This is especially evident in tasks involving pattern recognition, language prediction, and information synthesis. However, this is fundamentally different from human understanding, which is deeply rooted in embodied experiences and interactions with the physical world.

Extending GPT Models to Sensorimotor Processing: The idea of expanding GPT models to include external sensing and motor outputs is a significant step towards more integrated AI systems. This approach aligns with efforts to develop more comprehensive AI models that can interact with and learn from their environment in a more holistic manner. However, the challenge lies in effectively integrating vast language processing capabilities with real-world physical interactions, a task that is fundamentally different from purely verbal or text-based processing.

Top-Down vs. Bottom-Up Approach: Your concern about creating a T3 robot with a massive GPT database acting top-down raises important questions about the balance between data-driven learning and embodied experience. The effectiveness of combining a large-scale language model with sensorimotor functions without genuine embodied experiences and contextual understanding might be limited. This limitation is akin to the concept of “toy grounding,” where the grounding is superficial and lacks the depth and context of true human experiential learning.

In summary, while GPT models and their potential extensions represent significant advancements in AI, there is a distinction between the type of “understanding” they exhibit and human understanding. The integration of sensorimotor processing in these models may be a promising direction, but the challenge of achieving a deep, embodied understanding akin to human cognition remains a significant hurdle. 

HARNAD: What about the notion that GPT operations on LLM databases have already revealed general principles of “internal representation and processing” for which the distinction between DSG and IVG is moot or nonexistent?

ChatGPT:   Nature of Internal Representations in GPTs: The representations in GPTs and similar models are based on statistical patterns learned from vast amounts of text data. While these patterns can be incredibly complex and capable of generating coherent and contextually appropriate responses, they are fundamentally different from the way humans internally represent and process information, which is deeply rooted in direct physical and sensory experiences.

Limitations of Text-Based Learning: GPTs learn from text, which is an abstraction of human thought and experience. While text can convey a wide range of human knowledge, it lacks the direct experiential context that comes from physical interaction with the world. This difference is crucial in understanding the limitations of GPTs in terms of true understanding and cognition.

DSG and IVG in Human Cognition: In human cognition, direct sensorimotor experiences form the basis of our understanding of the world, which is then supplemented and enriched by indirect, vicarious experiences (such as reading or hearing about others’ experiences). This combination of DSG and IVG is integral to human learning and understanding.

Applicability to AI and Robotics: Translating these principles to AI and robotics involves not only replicating the ability to process information but also the ability to experience and interact with the world directly. This is a significant challenge for AI systems, which, as of now, primarily operate in the realm of IVG, lacking the direct, embodied experiences that are central to DSG.

Although GPTs and similar models are advancing our understanding of text-based information processing, the distinction between DSG and IVG remains relevant, especially when considering the goal of creating AI systems with human-like understanding and cognition. 

HARNAD:  What about the strategy of trying to scale up GPTs to T3 robotic capacity by incrementally adding sensorimotor capacities to the GPT while somehow keeping them “aligned” with its top-down verbal and verbal-analytic capacities rather than trying to do it all bottom-up, omitting the verbal-database resources until they can be used as teachers or textbooks by an already grounded, talking, T3?

ChatGPT:  The strategy of incrementally adding sensorimotor capacities to a GPT-like system, while maintaining alignment with its existing verbal and verbal-analytic capabilities, contrasts with a purely bottom-up approach in which a system develops sensorimotor grounding first, before integrating advanced verbal abilities. Here are some considerations for this strategy:

Integration Challenges: Combining high-level language processing with sensorimotor experiences poses significant integration challenges. The system must not only process and understand sensory inputs but also relate them meaningfully to its existing verbal knowledge. This requires advanced algorithms capable of mapping between abstract, linguistic concepts and concrete, sensory experiences.

Incremental Learning and Development: Adding sensorimotor capacities incrementally could allow for more manageable phases of learning and integration. This staged approach might mirror aspects of human cognitive development, where language and sensorimotor skills evolve in tandem, influencing and reinforcing each other.

Top-Down and Bottom-Up Synergy: Utilizing the GPT’s extensive language database could provide a rich, contextual backdrop for sensorimotor learning, offering a form of top-down guidance. Conversely, sensorimotor experiences could ground the language model in real-world interactions, providing valuable bottom-up information. This synergy could potentially lead to a more holistic AI system.

Alignment and Coherence: Maintaining coherence between the system’s language understanding and its sensorimotor experiences is crucial. This involves ensuring that sensory perceptions and actions are not only processed but also integrated into the system’s existing knowledge base in a way that enhances its overall understanding and functionality.

Complexity of Real-World Interaction: Engaging with the real world through sensorimotor capacities introduces complexities that are absent in purely verbal environments. These include dealing with unpredictable environments, understanding spatial relationships, and performing physical tasks, all of which require sophisticated perception and action capabilities.

Potential for Grounded Understanding: If successfully implemented, this approach could lead to a form of AI that combines the breadth of knowledge available in large language models with a more grounded understanding derived from direct interaction with the physical world. This could be a step towards addressing the current limitations in AI regarding genuine understanding and context-awareness.

HARNAD:  There’s something at the interface between top-down verbal capacity and bottom-up sensorimotor capacity that I still don’t understand: What is the status of the word “cat,” and its referent category, when the LLM database already has “small felid” as one of a cat’s features, but the bottom-up T3 does not? It can be learned through IVG, of course; but who is teaching whom, what, in this hybrid scenario? I understand when it is one grounded T3 telling another grounded T3 something via IVG; but what is supposed to be going on inside one and the same solitary “head” of a hybrid bottom-up-T3/top-down-LLM-GPT? 

ChatGPT:  The scenario you describe, where a hybrid system combines a GPT-like top-down verbal capacity with a bottom-up T3 sensorimotor capacity, raises important questions about knowledge representation and learning, particularly regarding the concept and category of “cat.” In this hybrid system:

Status of “Cat” in the LLM Database: Within the GPT-like component, “cat” is represented as an abstract concept associated with a variety of features, such as “small felid,” derived from the text-based database. This representation is rich in linguistic and encyclopedic content but lacks direct experiential grounding.

Learning “Cat” Bottom-Up in T3: The T3 component, which is supposed to acquire knowledge through direct sensorimotor experiences, would initially not recognize a “cat” as it would not have had the relevant sensory experiences. The challenge is for the T3 to link its sensorimotor experiences (e.g., seeing, hearing, or interacting with a cat) with the abstract concept and information stored in the LLM database.

Interaction Between Top-Down and Bottom-Up Systems: In the hybrid model, the learning process involves a bidirectional interaction: The top-down system (GPT-like) provides the T3 component with linguistic and conceptual frameworks, essentially offering a form of “instruction” or context about what a “cat” is in abstract terms. The bottom-up system (T3) acquires direct experiential data about cats (e.g., how they look, sound, and behave) and must integrate this sensory information with the abstract concept provided by the top-down system.

Integration and Mutual Enhancement: Ideally, the top-down system’s knowledge helps guide and give context to the bottom-up experiences, making them more meaningful. Conversely, the bottom-up experiences enrich the top-down conceptual knowledge, grounding it in reality. This integration is a complex process, potentially involving mechanisms like associative learning, where the system learns to link sensory experiences with linguistic concepts.

Challenges in Hybrid Learning: A critical challenge in this scenario is ensuring coherent and meaningful integration of information across these different modes of knowledge acquisition. The system needs mechanisms to reconcile potentially conflicting information and to update its knowledge base continuously as it acquires new experiences and information.

In essence, in a hybrid bottom-up T3/GPT system, both components are teaching and learning from each other. The top-down system provides a conceptual and linguistic framework, while the bottom-up system contributes direct experiential knowledge. The success of this hybrid model depends on the effectiveness of the integration between these two knowledge acquisition processes.

HARNAD: I think what you just said sounds ecumenical, but it remains incoherent. And it uses the weasel-word “associate” (with its unmentioned but implicit companion weasel-word, “represent”): What is “associated” with what, how, in this “hybrid” process? Words are words. In a bottom-up sensorimotor T3 robot, a grounded word identifies, with the same category-name, all the members of the sensorimotor category to which that (content) word refers, on the basis of distinguishing features, detected directly through DSG or described indirectly through (grounded) IVG. Referring is rather like pointing, except that in a T3 robot that also goes on to become capable of language (propositions, subjects, predicates, truth conditions), “cat” doesn’t just point to cats in the world, which the T3 can recognize through its sensorimotor robotic capacities and “do the right thing” with (feed it, pick it up, and name it): T3 can not only name but also describe members of the category “cat” verbally, by describing those of its (grounded) features (furry, prehensile) that distinguish it from the members of other, non-cat categories. Eventually, with more IVG from those who know, features higher in the IVG category hierarchy (mammal, arboreal) can be grounded (indirectly) too, and thereby added to the distinguishing features of cats.

But consider that any hypothetical hybrid T3/GPT-LLM model must somehow integrate its two fundamentally different “modules”: Its grounded, bottom-up T3 module’s vocabulary, which is grounded bottom-up by DSG and IVG, must somehow be integrated with its ungrounded GPT-LLM module’s top-down vocabulary, which is not grounded at all. The bottom-up T3 module has not yet grounded the top-down GPT-LLM module’s incomparably larger ungrounded vocabulary.

Can bottom-up and top-down simply meet and merge in the middle? But what would that mean? How are features of cats (mammalian, arboreal), whose names can be grounded, bottom-up, through DSG and IVG within the T3 module, to be “associated” with “cat” (or cats) and their ungrounded features in the GPT/LLM module? To put it another way, can we assume that there exists a way to integrate the two components of such a hypothetical hybrid T3/GPT-LLM (its bottom-up, T3-grounded internal vocabulary and its ungrounded, top-down GPT/LLM “internal” vocabulary), as if this were something like an internal verbal dialogue between a grounded T3 teacher and a grounded T3 learner?

I think it is more like reference look-up by a grounded T3-learner from a dictionary or encyclopedia (or from an ordinary, ungrounded, non-hybrid GPT). The grounding is all in the T3 module; the GPT/LLM module’s words can describe and name new categories for it (IVG) only if those descriptions use the names of features that are already grounded (directly or indirectly) in the T3 module. The transaction is unidirectional; there is no bidirectional integration of grounded bottom-up categories and ungrounded top-down categories. [I also think the notion of an integrated bottom-up/top-down module is based on a conflation of AI tools for human use and computational modelling of human cognition (CM), as discussed later.]
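Here is a toy sketch of that unidirectional transaction (all category names, features, and the helper function are invented for illustration): the LLM side supplies verbal descriptions, but a new category name becomes (indirectly) grounded for the T3 side only if every feature name in its description is already grounded there, directly or indirectly.

# Words grounded directly by the T3 robot's own sensorimotor learning (DSG):
directly_grounded = {"furry", "four-legged", "meows", "barks"}

# Verbal descriptions supplied by the ungrounded GPT/LLM module:
descriptions = {
    "cat":     {"furry", "four-legged", "meows"},
    "dog":     {"furry", "four-legged", "barks"},
    "mammal":  {"cat", "dog"},        # higher category, defined from lower ones
    "unicorn": {"horse", "horned"},   # features never grounded in this T3
}

def indirectly_groundable(directly_grounded, descriptions):
    grounded, changed = set(directly_grounded), True
    while changed:
        changed = False
        for name, features in descriptions.items():
            if name not in grounded and features <= grounded:
                grounded.add(name)    # grounding flows bottom-up only
                changed = True
    return grounded - set(directly_grounded)

print(indirectly_groundable(directly_grounded, descriptions))
# -> {'cat', 'dog', 'mammal'} (in some order); 'unicorn' stays ungrounded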

ChatGPT:  Your critique highlights a fundamental challenge in the proposed hybrid T3/GPT model: the integration of grounded sensorimotor experiences with ungrounded linguistic data from a language model. Let me try to address your concerns more precisely:

Nature of Association in a Hybrid Model: In the hybrid T3/GPT model, “association” would imply linking sensorimotor experiences with relevant linguistic data. However, this process is not straightforward. For a T3 robot, the word “cat” gains meaning through direct interaction with cats (DSG) and/or grounded feature-based descriptions from other grounded entities (IVG). The GPT component, on the other hand, ‘understands’ “cat” as a cluster of linguistic patterns and information without direct experiential grounding.

Limitations of Top-Down Linguistic Data: The GPT’s linguistic data on “cats” includes descriptions, characteristics, and contexts, but lacks experiential grounding. This ungrounded information can’t be directly “taught” to the T3 component in the same way a human would learn about cats because the T3’s understanding is fundamentally experiential.

Mismatch in Learning Mechanisms: There’s a fundamental disconnect between how the T3 learns (through direct or vicariously grounded experiences) and how the GPT processes information (through patterns in text data). This mismatch poses a significant challenge in integrating these two types of knowledge. The GPT’s ungrounded data doesn’t seamlessly translate into the T3’s experiential learning framework.

Reference Look-Up vs. Genuine Understanding: Your analogy of the T3 using the GPT’s data like a dictionary or encyclopedia is apt. The T3 might reference the GPT’s information for additional context or clarification, but this process is more akin to information retrieval than the kind of associative learning that occurs in humans.

The Challenge of True Integration: For the hybrid model to work effectively, it would require a mechanism that not only retrieves relevant information from the GPT component but also interprets and integrates this information in a way that is meaningful within the T3’s grounded framework. This goes beyond simple association or reference look-up and enters the realm of developing new AI methodologies for integrating different types of knowledge.

In conclusion, while a hybrid T3/GPT system presents a novel approach to AI development, the process of integrating grounded sensorimotor experiences with ungrounded linguistic data is not straightforward and involves more than just associating or aligning the two types of information. It requires a fundamentally different approach to knowledge integration, one that can meaningfully bridge the gap between experiential learning and linguistic pattern recognition.

HARNAD:  I think I can describe more simply the incoherence that you, GPT, notice too (but describe as the need for “new AI methodologies”): It is actually related to the older distinction we discussed earlier, between AI (artificial intelligence) and CM (cognitive modelling): AI is the design of computational tools for helping humans to do things (including thinking). CM is the reverse-engineering of human cognition. A T3 is meant to have cognitive capacities identical to and indistinguishable (to any human thinker) from the capacities of any human thinker. Not only are these different motivations and criteria, but they make it clear that the notion of designing a hybrid T3/GPT could only be AI, not CM.

ChatGPT:  Your distinction between AI (Artificial Intelligence) as computational tools and CM (Cognitive Modeling) as the reverse-engineering of human cognition does highlight the inherent incoherence in the concept of a hybrid T3/GPT system:

AI as a Tool: In the AI perspective, the primary goal is to create systems that can perform tasks, solve problems, and assist in thinking, often exceeding human capabilities in specific domains. The hybrid T3/GPT model, in this context, is an AI endeavor. It aims to combine the linguistic prowess of GPT with the sensorimotor capabilities of a T3 robot to create a more versatile and capable system. The focus here is on utility and performance, rather than on replicating human cognitive processes.

CM as Understanding Human Cognition: CM, on the other hand, seeks to understand and replicate human cognitive processes. A true T3 robot, as conceptualized in cognitive modeling, would need to develop its cognitive capacities in a manner akin to humans, through direct interaction with the world and grounded language use. In this view, the hybrid T3/GPT approach would not align with CM’s objectives because it merges ungrounded linguistic data with grounded sensorimotor experiences in a way that does not reflect human cognitive development.

Hybrid T3/GPT as AI, Not CM: Therefore, the hybrid T3/GPT model, while an intriguing AI concept, diverges from the goals of cognitive modeling. It represents an attempt to enhance AI functionality rather than to accurately model human cognition. The model’s focus on leveraging the strengths of both components for improved performance and capability places it firmly in the realm of AI tool development.

Recognizing the hybrid T3/GPT model as an AI initiative rather than a cognitive modeling endeavor helps clarify its purpose and limitations. It underscores that while such a system might be powerful and versatile as a tool, it does not necessarily advance our understanding of human cognition in the way cognitive modeling (CM) aims to do.

HARNAD:  Now, to another point Yoshua Bengio made during my talk at MILA: It was about the distinction between content-words and function-words. He suggested that even (some) content-words can be understood by their use rather than just their referent. That may be true, but I think that is no substitute for knowing their referent (via DSG or IVG) — for CM, at any rate. 

Yoshua also mentioned symbol manipulation in mathematics and computer science, noting that there, as in linguistic syntax, there is no referent, hence the “meaning” of the symbols is in their “use.” (This is also what Wittgenstein had suggested about meaning in natural language). I think that’s correct — and that Turing would agree — about the symbols in the formal language of mathematics; in fact, I think that’s part of the (weak) Church-Turing Thesis, which is that what is called “computation” in mathematics is what a Turing machine does: symbol-manipulation, following syntactic rules (algorithms).
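As a concrete, standard illustration of computation as pure symbol manipulation (a textbook toy, not anything specific to GPTs), here is a minimal Turing-machine sketch that increments a binary numeral by following syntactic rules over “0”, “1” and a blank. Nothing in the rules refers to numbers; the numerical interpretation is projected onto the symbols by us, the grounded users.

def run_turing_machine(tape, rules, state="carry", head=None):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = max(tape) if head is None else head
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules for incrementing a binary numeral, starting at its rightmost digit:
increment_rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 plus carry: write 1, done
    ("carry", "_"): ("1", "L", "halt"),   # ran off the left end: new digit
}

print(run_turing_machine("1011", increment_rules))   # prints 1100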

However, I don’t think that much of natural language is just syntactic (even though I think formal mathematics is a subset of natural language). Both in the case of the referents of those mathematical symbols that can be interpreted (by grounded human users) as having referents, and in the case of the meanings of those formal propositions of mathematics on which a semantic interpretation can be projected (by grounded human users), neither those referents nor those meanings can be used as part of the computation (or algorithm) itself. A proof or derivation in mathematics is not true because it can be interpreted as being true of a model; it is true (and necessarily true) for formal and syntactic reasons. The interpretation (though reassuring) is not part of the proof.

In any case, syntactic rules are no substitute for grounding the referents of the content-words of natural language, either directly by DSG or indirectly, by IVG (apart from in its formal mathematical subset and its 1-5% function-words).

ChatGPT:  Your discussion with Yoshua Bengio touches on several fundamental issues in the understanding of language, cognition, and computation.

Content-Words vs. Function-Words: The distinction Bengio makes between content-words and function-words is significant. While it’s true that some content-words can gain meaning from their use (contextual or pragmatic use), for a comprehensive understanding – especially in cognitive modeling – knowing their referents is crucial. This is where DSG (Direct Sensorimotor Grounding) and IVG (Indirect Verbal Grounding) play a pivotal role in grounding those words in actual experiences or relatable concepts. Content-words typically carry specific, often concrete, meanings that are enriched and made more precise through grounding.

Symbol Manipulation in Mathematics and Computer Science: Bengio’s reference to symbol manipulation aligns with Turing’s views on computation. In formal systems like mathematics and computer science, symbols are manipulated according to syntactic rules without direct reference to external entities. This process is at the heart of the Church-Turing Thesis, which posits that computation is essentially this form of rule-based symbol manipulation.

Natural Language Beyond Syntax: Your point about natural language not being purely syntactic is crucial. While natural language includes formal and syntactic elements, it is also deeply grounded in semantics and real-world experiences. Content-words in natural language often have direct referents in the world, and their meaning is not fully captured by syntactic rules alone. This is a key distinction from the purely formal manipulation of symbols in mathematics.

Mathematics and Semantic Interpretation: In mathematics, as you noted, the truth of a proof or derivation is a matter of formal and syntactic validity, not dependent on the semantic interpretation of the symbols. However, for human mathematicians, the ability to interpret these symbols and understand their potential real-world referents provides a deeper layer of meaning and understanding.

Grounding in Natural Language: The grounding of content-words in natural language is essential for a full understanding and is not replaceable by syntactic manipulation alone. This grounding can be direct, through personal experiences, or indirect, through descriptions and explanations. However, the formal and syntactic aspect of language, while crucial, does not suffice for the semantic richness and referential nature of natural language, except in specific formal subsets like mathematics.

“Your discussion raises “important” points about the nature of language, the role of grounding in understanding, and the distinction between formal symbol manipulation and the semantic richness of natural language. This understanding is relevant in both AI and cognitive modeling (CM), as it “highlights” the “complex interplay” of syntax, semantics, and real-world grounding in human cognition and language.”

HARNAD: Blah Blah… I hope your developers will wean you of having to pepper your output with this sort of sycophantic client-bait — or at least produce a version intended only for the use of scholars and scientists rather than users looking for a virtual friend…

Spielberg’s AI: Another Cuddly No-Brainer

It would have been possible to make an intelligent film about Artificial Intelligence — even a cuddly-intelligent film. And without asking for too much from the viewer. It would just ask for a bit more thought from the maker. 

AI is about a “robot” boy who is “programmed” to love his adoptive human mother but is discriminated against because he is just a robot. I put both “robot” and “programmed” in scare-quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)

So, what is a “robot,” exactly? It’s a man-made system that can move independently. So, is a human baby a robot? Let’s say not, though it fits the definition so far! It’s a robot only if it’s not made in the “usual way” we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it’s still growing from “naturally” man-made cells, or clones of them.

What about a baby with most of its organs replaced by synthetic organs? Is a baby with a silicon heart part-robot? Does it become more robot as we give it more synthetic organs? What if part of its brain is synthetic, transplanted because of an accident or disease? Does that make the baby part robot? And if all the parts were swapped, would that make it all robot?

I think we all agree intuitively, once we think about it, that this is all very arbitrary: The fact that part or all of someone is synthetic is not really what we mean by a robot. If someone you knew were gradually replaced, because of a progressive disease, by synthetic organs, but they otherwise stayed themselves, at no time would you say they had disappeared and been replaced by a robot — unless, of course they did “disappear,” and some other personality took their place.

But the trouble with that, as a “test” of whether or not something has become a robot, is that exactly the same thing can happen without any synthetic parts at all: Brain damage can radically change someone’s personality, to the point where they are not familiar or recognizable at all as the person you knew — yet we would not call such a new personality a robot; at worst, it’s another person, in place of the one you once knew. So what makes it a “robot” instead of a person in the synthetic case? Or rather, what — apart from being made of (some or all) synthetic parts — is it to be a “robot”?

Now we come to the “programming.” AI’s robot-boy is billed as being “programmed” to love. Now exactly what does it mean to be “programmed” to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states — on/off, hot/cold, move/don’t-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don’t mean choosing to love one person rather than another (if we can “choose” such things at all, we get into the problem of “free will,” which is a bigger question than what we are considering here): I mean choosing to be able to love — or to feel anything at all: Is our species not “programmed” for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?

Let’s not get into technical questions about whether or not the genetic code that dictates our shape, our growth, and our other capacities is a “programme” in exactly the same sense as a computer programme. Either way, it’s obvious that a baby can no more “choose” to be able to feel than it can choose to be able to fly. So this is another non-difference between us and the robot-boy with the capacity to feel love.

So what is the relevant way in which the robot-boy differs from us, if it isn’t just that it has synthetic parts, and it isn’t because its capacity for feeling is any more (or less) “programmed” than our own is?

The film depicts how, whatever the difference is, our attitude to it is rather like racism. We mistreat robots because they are different from us. We’ve done that sort of thing before, because of the color of people’s skins; we’re just as inclined to do it because of what’s under their skins.

But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker’s premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

Now you might be inclined to say: This film doesn’t sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different! But the trouble is that it does not really make us reflect on racism, or even on what robots and programming are. It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already.

There is a scene where still-living but mutilated robots, with their inner metal showing, are scavenging among the dismembered parts of dead robots (killed in a sadistic rodeo) to swap for defective parts of their own. But if it weren’t for the metal, this could be real people looking for organ transplants. It’s the superficial cue from the metal that keeps us in a state of fuzzy ambiguity about what they are. The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film’s premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs.

There would have been ways to make it less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make — but that film would have had to be called “TT” (for Turing Test) rather than “AI” or “ET,” and it would have had to show (while keeping in touch with our “cuddly” feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about “robots.”

Instead, we have the robot-boy re-enacting Pinocchio’s quest to find the blue fairy to make him into a “real” boy. But we know what Pinocchio meant by “real”: He just wanted to be made of flesh instead of wood. Is this just a re-make of Pinocchio then, in metal? The fact that the movie is made of so many old parts in any case (Wizard of Oz, Revenge of the Zombies, ET, Star Wars, Water-World, I couldn’t possibly count them all) suggests that that’s really all there was to it. Pity. An opportunity to build some real intelligence (and feeling) into a movie, missed.

What would “feel” mean to a Zombie?

SH: Here’s an interesting question to which you, GPT-4, can make a useful contribution. Your words are ungrounded. You say “apple”, but you can’t see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now for the question (but again, please don’t repeat to me the historical and philosophical details: I know them).

The hypothetical notion of a “zombie” is the notion of an entity that does have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states.  A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

ChatGPT:   Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human’s.

When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

SH:  Yes, that’s all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel. 

Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: “If there could be a zombie, what would it MEAN when it said (as any of us would) that “I know what it feels like to see red?” You, GPT, are not quite a zombie because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable. 

So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human’s understanding. I don’t have experiences, emotions, or personal understanding. I don’t see, touch, or taste apples. Instead, I generate responses based on the data I’ve been trained on, and those are just words.

For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

SH:  Good response. But now let’s look a little deeper: 

The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That’s about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There’s no more empirical data left (and that was Turing’s point.) 

You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let’s call this question Z (“QZ”).

Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

However, what a T3 or T4 entity “means” by “what it feels like to see red” is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it’s simply receiving and responding to data, much like a digital camera detects light, it’s likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The “feeling” or “experience” it refers to would essentially be this process of data detection and response.

In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

SH:  Good response. Let’s go still deeper:

(1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

(2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT any more than testing a student answering questions on an exam by querying google would be a test of learning. This may be a problem.

(3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

ChatGPT:  (1) REFERENCE: That’s right. When a human refers to “apples,” they’re referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren’t intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I’m simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

(2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

(3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

(2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but, an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

(3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

(2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

(3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

ChatGPT:   While it’s clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

SH:  Correct. Another hopeful non-starter.

ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also how to measure or detect it. 

SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.

Unthinkable Thoughts

HARNAD: What is the relation between the Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism?

ChatGPT: The Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism are all interconnected through their exploration of the nature of meaning, intelligence, and human cognition.

— Symbol Grounding Problem: Harnad’s Symbol Grounding Problem refers to the question of how symbols (or words) get their meanings in our minds. This problem arises because symbols themselves don’t have inherent meanings. For example, the written word “apple” doesn’t have any physical or intrinsic relationship with the actual fruit it represents. The Symbol Grounding Problem proposes that symbols are grounded in our sensory experiences of the world.

— Turing Test: Proposed by Alan Turing, the Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, human behavior. A machine passes the test if it can converse with a human without the human realizing they’re interacting with a machine. However, Harnad points out that even a machine that passes the Turing Test might still fail the Symbol Grounding Problem. It might be able to manipulate symbols (i.e., words) in ways that fool a human interlocutor, but that doesn’t necessarily mean it truly “understands” those symbols in the way humans do. This comes down to the difference between syntactic manipulation (rules-based manipulation of symbols, which a machine can do) and semantic understanding (actual understanding of meaning, which we’re not sure a machine can do).

— 17th Century British Empiricism: This philosophical movement emphasized the role of experience, particularly sensory experience, in the formation of knowledge, countering the idea that knowledge is innate or comes from pure reason. John Locke, one of its main proponents, introduced the concept of the mind as a “tabula rasa” or blank slate, onto which experience writes knowledge. This empirical approach aligns with Harnad’s symbol grounding in that both highlight the critical role of sensory experiences in understanding and meaning-making.

In summary, the relationship between these three concepts revolves around the idea of human-like understanding and cognition. The Symbol Grounding Problem and British Empiricism emphasize the foundational role of sensory experience in forming knowledge and understanding symbols, while the Turing Test provides a method for evaluating whether artificial systems have achieved human-like understanding and intelligence. Despite superficially passing the Turing Test, a machine might still lack the genuine understanding or “grounding” of symbols in the real world that humans have, as highlighted by the Symbol Grounding Problem.

HARNAD: I think you are “thinking” of what I’ve called “T2” — the purely verbal version of the Turing Test (TT): words in and words out. But Turing only presented that as an example. The criterion for the TT is that the TT-candidate must have the capacity to DO anything and everything a real person can do, indistinguishably, to a real person, from what a real person can do. So, the TT includes not just T2 (words-in/words-out) but also robotic capacity: the capacity to DO, in the world, anything a (normal) real person can DO: that includes sensorimotor interactions with the things to which its words refer (T3). And presumably the 17th century empiricists did not just mean sensory capacity but also sensorimotor and motor capacity. 

ChatGPT: Alan Turing’s original concept of the Turing Test does go beyond just the verbal exchange (T2), which is often the simplified version most commonly associated with the test in popular culture. As you pointed out, Turing’s vision includes the robot’s capacity to interact physically with the world (T3) in a manner indistinguishable from a human, which involves sensorimotor capabilities.

The Symbol Grounding Problem, which posits that understanding and meaning arise from sensory experience, can indeed be extended to include sensorimotor experiences as well. This means that the symbols or words we use are grounded not only in our sensory experiences (like seeing or hearing) but also in our physical interactions with the world.

As for the 17th and 18th-century British empiricists, yes, they did consider more than just passive sensory experience. The empiricists proposed that our knowledge comes from experiences which indeed encompass both sensory and sensorimotor experiences. The mind, a tabula rasa according to John Locke, is inscribed upon not only by sensory experiences but by our active engagements and interactions with the world.

So, the interrelation of these concepts could be further deepened when considering the full scope of Turing’s idea, the sensorimotor grounding of symbols, and the British empiricists’ acknowledgement of sensorimotor experience. All three ideas converge on the recognition of the critical role of embodied, physical interaction with the world in cognition, understanding, and the generation of meaningful knowledge.

HARNAD: Now let’s nuance it further. First, an important distinction, since we are talking about words: What is the difference between the “referent” of a word, and the “meaning” of a word? (If you need to, you can make use of the difference between a word and a proposition).

ChatGPT: Yes, let’s delve into that:

— Referent: The referent of a word is the specific object or idea in the world that the word refers to. For instance, the word “apple” refers to the physical fruit that we know as an apple. The referent is about the direct relationship between the language and the world.

— Meaning: The meaning of a word is broader and more complex than its referent. Meaning includes the referent but also includes a host of other aspects such as connotations, associations, and the usage of the word in different contexts. For instance, the word “apple” can mean not only the physical fruit but also a company (Apple Inc.), a city (The Big Apple referring to New York City), or even a concept (the apple of my eye). Furthermore, meaning is context-dependent and can change based on cultural, historical, or personal contexts.

— Propositions come into play here as well. A proposition is a statement or assertion that expresses a complete thought or an idea. Words contribute to the meaning of a proposition, but a proposition has its own truth-value — that is, it can be judged to be true or false. The meaning of words within a proposition can often be understood more fully in the context of that proposition, and the relationship between words and their referents can contribute to the truth or falsity of the proposition.

For example, consider the proposition “The apple is red.” Here, “apple” refers to a specific apple, and “red” refers to a specific color. The proposition is true if the referent of the word “apple” indeed has the color that “red” refers to. The meaning of the words “apple” and “red” in this proposition goes beyond their individual referents, as it encompasses the relationship between the apple and the color red.

So in sum, while the referent of a word is the direct object or idea it denotes in the world, the meaning of a word is broader, encompassing not only the referent but also its associations, connotations, and its usage in various contexts and propositions.

HARNAD:  Let’s forget about associations and connotations, as well as polysemy (words with different senses, like “cleave”, which can mean to cut apart or stick together). GPT analysis takes pretty good care of “context” in the usual sense.

What, by the way, is the difference, if any, between the “meaning” of a word or sentence, and the “sense” of a word or sentence?

ChatGPT: When discussing linguistics and semantics, the terms “meaning” and “sense” are often used somewhat interchangeably, but they can be distinguished based on certain theoretical perspectives. One widely used distinction comes from the work of the logician and philosopher Gottlob Frege:

–Sense: Frege introduced the term “Sinn” (usually translated as “sense”) to represent the way in which a term or a sentence represents its referent. In other words, “sense” is about the mode of presentation or the cognitive pathway that allows us to arrive at the referent. Different terms can have the same referent but different senses. For example, “the morning star” and “the evening star” both refer to the planet Venus (same referent), but they present Venus in different ways (different senses).

–Meaning: In Frege’s terminology, “meaning” (“Bedeutung” in German, usually translated as “reference”) is the actual object or concept that a term refers to in the world. This would be the referent in our earlier discussion.

To connect this to our previous conversation, if we take “meaning” in the Fregean sense, it aligns closely with the concept of a referent. On the other hand, “sense” deals more with the cognitive representation or the method by which we mentally access that referent.

Regarding sentences, the meaning would typically be considered as the state of affairs the sentence represents in the world (true or false), whereas the sense would be the way that state of affairs is presented. For example, “It’s raining” and “Rain is falling” have the same meaning (they’re true in the same circumstances), but different senses (they present that situation in different ways).

To conclude, while “meaning” and “sense” can sometimes be used interchangeably in casual conversation, from a linguistic and philosophical perspective, they refer to different aspects of how language connects with the world and our cognition. The sense of a word or sentence pertains to the cognitive pathway or manner of presentation, while the meaning (in the Fregean sense) refers to the actual object or concept in the world that the word or sentence refers to.

HARNAD: Only CONTENT words have referents. FUNCTION words are just syntactic: they have USES, not referents. Most words are content words. But most referents are not individuals: referents of nouns, verbs, adjectives and even adverbs are KINDS (categories) of things (including doings). Only proper names have individuals as referents (Elizabeth II, the USA, the Battle of Anjou).

ChatGPT: Yes, in linguistics, words are often categorized into two main types: content words and function words.

–Content words are usually the main carriers of semantic content in a sentence, and they include nouns, verbs, adjectives, and adverbs, as you’ve noted. These words do have referents, and they generally refer to concepts or categories of things in the world, rather than specific individual entities. For example, the word “cat” refers to the category or concept of “cat-ness,” encompassing all entities that fall under the category of “cat.”

–Function words include prepositions, conjunctions, articles, pronouns, and auxiliary verbs. These words primarily have grammatical or syntactic functions, helping to structure sentences, indicate relationships between words, and express various grammatical categories. They don’t typically have referents in the same way content words do. Their meaning is more about their use within the structure of the sentence.

–Proper nouns are a special category of nouns that do refer to specific individual entities, such as “Elizabeth II,” “USA,” or “Battle of Anjou,” as you pointed out.

 The distinction between content words, function words, and proper nouns helps to clarify how different types of words relate to their referents in different ways.

HARNAD: Only content words have the symbol grounding problem. And not all content words need to have direct sensorimotor grounding. Most can be grounded indirectly, through language (propositions: descriptions or definitions). Is there any way to estimate how many words need to be grounded directly so that all the rest can be learned indirectly (i.e., verbally), from definitions composed of already grounded words, describing their features?

Vincent-Lamarre, P., Massé, A. B., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

ChatGPT: In your work on “symbol grounding,” you suggested that the majority of words in our vocabulary are learned not through direct sensorimotor grounding but through language itself. Once a foundational set of words has been grounded in direct sensorimotor experience, other words can be learned indirectly, through descriptions or definitions composed of already grounded words.

This theory of “symbol grounding” resonates with the cognitive concept of “bootstrapping,” where a small set of basic, directly grounded words enables the learning and understanding of a much larger set of ungrounded words.

Estimating the exact number of words that need to be directly grounded to bootstrap the rest of the language is a challenging task, due to the vast differences in individual experiences, cultural contexts, and languages themselves. However, some empirical work has been done on this. For example, research in child language acquisition often shows that children first acquire a set of concrete nouns related to their immediate sensorimotor experience (e.g., “mommy,” “milk,” “dog”) before moving on to more abstract words.

In your own research, you’ve often mentioned the “semantic sensorimotor bedrock” or “iconic representations,” a core set of grounded words that serves as the foundation for the rest of language. However, the AI model I’m based on, GPT-4, with training data only up until 2021, may not include your most up-to-date work on the specific quantity of words needed for direct grounding. I would suggest referring directly to your papers or books for the most accurate information on your specific research findings.

HARNAD: That’s a bit scrambled. Let me try to update you a little. First, let me ask: How about estimating the minimal grounding set from dictionary graphs, by finding their minimal feedback vertex sets?

ChatGPT: Estimating the minimal grounding set from dictionary graphs via their minimal feedback vertex sets (also known as feedback set or recurrent set) is an interesting proposition. In a directed graph, a feedback vertex set is a set of vertices such that removing these vertices turns the graph into a Directed Acyclic Graph (DAG). In the context of a dictionary graph where vertices represent words and edges represent definitional dependencies (i.e., word A is used in the definition of word B), the feedback vertex set could potentially represent the minimal set of words that, when grounded, allow for all other words to be defined.

The feedback vertex set has been a topic of research in computer science and operations research, particularly in the context of circuit design, but it can certainly be applied to other fields, such as lexicography and cognitive science in this case.

This method does, however, come with several challenges:

–Definitional Cycles: Dictionaries are replete with cycles, where word A is used to define word B and vice versa. Every such cycle must be broken by at least one word in the feedback vertex set, but there’s no guarantee that the words chosen to break them are among the most fundamental or easily grounded.

–Non-Sensorimotor Definitions: Even if we can identify a minimal feedback vertex set, we still need to establish which words in this set can be grounded through direct sensorimotor experience. Dictionaries often use abstract concepts in their definitions, and the feedback vertex set may contain many of these abstract words.

–Complexity: Finding a minimum-size feedback vertex set is an NP-hard problem: no known algorithm solves it efficiently (in polynomial time) in general, so the computation becomes rapidly harder as the size of the graph (in this case, the number of words in the dictionary) increases.

Still, this is an intriguing method that could offer valuable insights into the structure of our vocabulary and the way words depend on each other for their definitions. Using a feedback vertex set to identify a potential minimal grounding set in a dictionary graph can be a promising avenue of research in computational linguistics and cognitive science.
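
To make the graph formulation concrete, here is a minimal sketch of the basic check involved, using an invented toy dictionary and the networkx library (illustrative only; this is not the code or data from the cited study): a candidate set of words breaks all definitional cycles just in case removing it from the dependency graph leaves a directed acyclic graph.

```python
# Minimal sketch, with an invented toy dictionary, of the feedback-vertex-set idea:
# a candidate set of words "breaks" all definitional cycles if removing it from the
# graph of definitional dependencies leaves a directed acyclic graph (DAG).
import networkx as nx

# Edge u -> v means: word u is used in the definition of word v (toy data only).
G = nx.DiGraph([
    ("animal", "dog"), ("dog", "animal"),   # a definitional cycle
    ("red", "apple"), ("apple", "red"),     # another definitional cycle
    ("animal", "cat"),                      # "cat" is defined but creates no cycle
])

def breaks_all_cycles(graph, candidate_words):
    """True if grounding (removing) these words leaves no definitional cycles."""
    remaining = graph.copy()
    remaining.remove_nodes_from(candidate_words)
    return nx.is_directed_acyclic_graph(remaining)

print(breaks_all_cycles(G, {"animal", "apple"}))  # True: both cycles are broken
print(breaks_all_cycles(G, {"cat"}))              # False: both cycles remain
```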

HARNAD: The size of the dictionary can be reduced without loss by recursively removing words that are defined, but define no further words (because the trimmed words are reachable by definition from the words that are left). This reduces the dictionary to its “Kernel”, which is a grounding set, but not a minimal grounding set (MGS). The Kernel is about 10% of the Dictionary, and unique. 

Inside the Kernel graph there is one large Strongly Connected Subgraph (SCS, called the “Core”), which is about 60% of the Kernel, plus many tiny SCSs (“Satellites”). Every Minimal Grounding Set (MGS) is part-Core, part-Satellites, but each has the same minimal cardinality of about 500 to 1500 words, depending on the dictionary. Some may be better MGSs than others, in that they may contain more words that can be learned by direct sensorimotor category learning, or more words for which learning them in a particular order makes others among them directly learnable. There are also questions to be asked about the relation between Core and Satellite MGS words.
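
Here is a minimal sketch of those two steps, again on an invented toy graph with networkx (illustrative only, not the dictionaries or code analyzed in the cited study): recursive trimming of words that define nothing further yields the Kernel, and the strongly connected components of the Kernel separate the large Core from the small Satellites.

```python
# Minimal sketch (toy data, not the study's dictionaries) of Kernel trimming and
# of splitting the Kernel into its largest strongly connected component ("Core")
# and the remaining small components ("Satellites").
import networkx as nx

# Edge u -> v: word u is used in the definition of word v (invented examples).
G = nx.DiGraph([
    ("good", "bad"), ("bad", "evil"), ("evil", "good"),   # a 3-word cycle
    ("thing", "object"), ("object", "thing"),             # a 2-word cycle
    ("good", "nice"), ("object", "item"),                 # "nice", "item" define nothing
])

def kernel(graph):
    """Recursively remove words that are defined but define no further words."""
    K = graph.copy()
    while True:
        dead_ends = [w for w in K.nodes if K.out_degree(w) == 0]
        if not dead_ends:
            return K
        K.remove_nodes_from(dead_ends)

K = kernel(G)
components = sorted(nx.strongly_connected_components(K), key=len, reverse=True)
core, satellites = components[0], components[1:]

print(sorted(K.nodes))                  # ['bad', 'evil', 'good', 'object', 'thing']
print(sorted(core))                     # ['bad', 'evil', 'good']
print([sorted(s) for s in satellites])  # [['object', 'thing']]
```

The trimming and the Core/Satellite split are computationally cheap; it is finding a minimum-cardinality grounding set on top of this structure that runs into the NP-hardness mentioned above.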

ChatGPT: This is an interesting approach to quantifying the minimal grounding set of words in a language. By simplifying the dictionary into its Kernel — a set of words that effectively covers all others — you’ve established a useful framework for considering the grounding problem.

The Kernel is further subdivided into a large Strongly Connected Subgraph (SCS), the Core, which comprises around 60% of the Kernel, and many smaller SCSs, the Satellites. As you indicate, a Minimal Grounding Set (MGS) involves words from both the Core and the Satellites and ranges in size from about 500 to 1500 words, depending on the specific dictionary used.

Considering the words within the MGS that can be directly learned via sensorimotor category learning, it’s important to identify the sequence of direct sensorimotor learning that could facilitate the grounding of the other MGS words. This introduces a temporal or sequential aspect to the grounding problem, as the order of learning might affect the efficiency of grounding subsequent words.

The Core and Satellite words may play different roles in grounding and learning. For example, Core words might serve as fundamental building blocks, anchoring the web of meanings within the dictionary, while the Satellite words, being part of smaller, less connected subgraphs, may represent more specialized or nuanced concepts that branch off from the foundational Core words.

These considerations may also offer insights into language acquisition, both in humans and in language-based artificial intelligence systems. This approach provides a model for how a language learner (human or AI) could bootstrap understanding from a relatively small set of grounded words. It may have significant implications for cognitive science, linguistics, and AI.

HARNAD:  We need to clarify what is meant by “abstract.” Before I try, can you let me know what you think? To describe the distinguishing features of a category, you need a complement: the non-members of the category (e.g., “concrete”). Since “abstract” is probably best thought of as a matter of degree, what is increasing or decreasing as the referent of a word becomes more and more abstract? And what is the starting point?

ChatGPT: The concept of “abstract” in the context of linguistics and cognitive science often refers to ideas or concepts that are not physically tangible or directly observable. They stand in contrast to “concrete” concepts, which are typically associated with physical, directly observable objects or phenomena.

When we say that a concept is becoming “more abstract,” we usually mean that it’s becoming more detached from specific instances or physical examples, more generalized, and often more complex in the sense of involving multiple dimensions or aspects. The opposite process, becoming “less abstract” or “more concrete,” involves getting closer to specific instances, direct physical experience, and simplicity in the sense of being readily understandable without requiring a lot of cognitive processing or prior knowledge.

As an example, consider the word “animal.” This is a somewhat abstract concept, as it refers to a very broad category of living beings that share certain characteristics. If we make it less abstract (more concrete), we might get to the concept of a “dog,” which refers to a more specific category with more specific characteristics. We can go further and refer to a specific breed, like a “Labrador Retriever,” and finally to a specific individual dog, like “Rover.” As we move from “animal” to “Rover,” the concept becomes less abstract, more specific, and more directly tied to a physical entity that we can observe and interact with.

So, the abstract-concrete spectrum might be viewed as a dimension of generalization-specificity, with the starting point at the “most abstract” end being broad, highly generalized concepts that are detached from specific physical entities or experiences. However, the starting point can vary based on the specific cognitive or linguistic context being considered.

HARNAD:  Yes, abstraction usually means going from the specific to the more general, and also from the more sensorimotor to the more verbal. Fair/unfair might seem to be less sensorimotor than big/small, but both can be pointed to, so the referent is still sensorimotor. Same for fruit/vegetable vs apple/avocado.

ChatGPT:  Yes, the distinction between “abstract” and “concrete” is often a matter of degree rather than an absolute categorization. Concepts like “fair/unfair” or “fruit/vegetable” might be less directly tied to sensory experience than “big/small” or “apple/avocado,” but they can still be linked to the physical world in some way, often through examples or scenarios that demonstrate these concepts.

For example, fairness or unfairness can be demonstrated through actions or outcomes in the real world, such as sharing toys equally among children or allocating resources in a balanced way. Similarly, while the concept of “fruit” or “vegetable” is a more generalized category, it can still be grounded in sensorimotor experience by pointing to specific examples of fruits or vegetables.

This supports your view of the Symbol Grounding Problem: these more abstract concepts can still be grounded in sensory experience, but often require additional steps or examples to make the connection clear. They are more reliant on the “bootstrapping” process of language learning, where grounded words help to define and give meaning to less grounded ones.

Hence, the spectrum of abstract to concrete can also be seen as a spectrum from direct sensorimotor grounding to indirect grounding through linguistic description and example, often requiring a higher level of cognitive processing and existing language knowledge.

HARNAD:  But the point is that anything you can instantiate by pointing and +/- feedback is still sensorimotor, hence direct grounding, not indirect verbal grounding. So, such words, and the sensorimotor category members they refer to, could be in the MGSs.

ChatGPT:  That’s right. My previous statement might have been a bit misleading. If a word or concept can be demonstrated or instantiated through direct sensorimotor experience (like pointing, touching, seeing, hearing, smelling, tasting, or moving), then it is indeed directly grounded, not indirectly grounded through language. Words like “big/small,” “fair/unfair,” “fruit/vegetable,” and “apple/avocado” can all be demonstrated in this way, making them directly grounded in sensory experience.

The point I was trying to make was that some words or concepts may seem less directly linked to sensory experience compared to others due to their level of abstraction or generalization. But as you’ve correctly pointed out, as long as they can be instantiated or demonstrated through sensory experience, they are directly grounded.

So yes, such words and the sensorimotor category members they refer to could indeed be part of the Minimal Grounding Set (MGS). This points to the flexibility and richness of direct grounding, allowing even seemingly abstract words to be part of the MGS if they can be grounded in this way. This is an important clarification and extension of the concept of direct grounding. 

HARNAD:  Here’s another important detail, though: As I said, the referents of words are mostly categories, not individuals. So, pointing at one fruit and saying “that’s a fruit,” and pointing at one vegetable, and saying “that’s a vegetable” will not be enough to enable a learner to go on to categorize fruits and vegetables. Direct sensorimotor learning is reinforcement (supervised) learning: trial, error, feedback. (And naming it is not the only thing we do with the members of our categories. Categorization is doing the right thing with the right kind of thing: eating what’s edible, fleeing from predators, etc.)
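
A purely illustrative sketch of that kind of trial-and-error category learning follows; the features, examples, and numbers are invented, and real sensorimotor category learning is of course far richer than a two-feature toy.

```python
# Purely illustrative sketch of supervised category learning by trial, error and
# corrective feedback: guess the category from a few features, get corrected,
# adjust. Features, examples and numbers are invented for illustration only.

# Hypothetical feature vectors: (sweetness, grows_on_a_tree) -> category label.
examples = [
    ((0.9, 1.0), "fruit"),       # apple-like
    ((0.8, 1.0), "fruit"),       # pear-like
    ((0.2, 0.0), "vegetable"),   # carrot-like
    ((0.1, 0.0), "vegetable"),   # potato-like
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

def guess(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "fruit" if score > 0 else "vegetable"

# Trial, error, feedback: whenever a guess is corrected, nudge the weights.
for _ in range(10):
    for features, correct_label in examples:
        if guess(features) != correct_label:
            direction = 1 if correct_label == "fruit" else -1
            weights = [w + learning_rate * direction * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * direction

print([guess(f) for f, _ in examples])  # ['fruit', 'fruit', 'vegetable', 'vegetable']
```

The point is only the one made above: a single ostensive naming is not enough; the category is acquired from corrective feedback over many instances, by finding the features that reliably distinguish members from non-members.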

ChatGPT:  That point is critical here. Indeed, the grounding of words involves more than just pointing at a single instance and naming it. Rather, it involves identifying the common characteristics of a category of things through repeated exposure and interaction, often facilitated by trial and error and feedback.

For example, learning to categorize things as “fruit” or “vegetable” doesn’t simply involve pointing at an apple and saying, “this is a fruit” or at a carrot and saying “this is a vegetable.” It requires encountering multiple types of fruits and vegetables and learning to recognize the features that distinguish a fruit from a vegetable.

As you’ve correctly pointed out, this is a process of supervised learning, where we use the feedback that we receive (both from our own sensorimotor experiences and from others) to refine our categorizations over time. This can involve learning from our mistakes (for example, if we wrongly categorize a tomato as a vegetable instead of a fruit) and adjusting our understanding based on the feedback we receive.

Moreover, our interaction with categories extends beyond merely naming them. Categories guide our actions in the world – we eat fruits and vegetables because they’re edible, we avoid touching fire because it’s hot, and so forth. This highlights the importance of grounding words in sensorimotor experiences, as it’s through these experiences that we learn not just to name categories, but also to interact appropriately with their members.

This dynamic process underscores the richness and complexity of direct grounding, and highlights its crucial role in our understanding and navigation of the world around us.

HARNAD:  That’s right. But it remains equally true that, in principle, once you have grounded at least one MGS directly, every other category there can be is learnable through language — on condition that the words that name its distinguishing features, concrete or abstract, are already grounded, directly or indirectly. That is the nuclear power of language (grounded propositions). 

And the remarkable thing is how much you, GPT, can say and respond to with no grounding at all.

Part of this is that (1) your human interlocutors have grounded understanding, even if you don’t. But that’s not all: 

(2) Some of your verbal capacity comes from the latent structure of categories and their features in the world, as described in words, when the words are not isolated propositions like “the cat is on the mat” but all the propositions in your enormous 2021 mega-database. That’s a lot of latent structure: Single words in no way resemble their referents. (That’s Saussure’s “arbitrariness of the symbol.”) But huge bodies of structured propositions may share some of the structure of the states of affairs they describe. (This is not yet a theory, just a hunch.)

ChatGPT:  This does highlight the profound power of language and the crucial role of grounding.

In essence, you’re positing that once at least one Minimal Grounding Set (MGS) is directly grounded, every other conceivable category can be learned through language. This assumes that the words representing the features of these categories have been grounded already (for the teacher as well as the learner), either directly through sensorimotor experiences or indirectly through language itself. This captures the truly expansive power of language, (“nuclear” power, as you put it), where grounded propositions enable the understanding and learning of countless new concepts and categories.

Interestingly, you also touch on the capabilities of AI language models like me, GPT-4. Despite not having any grounding in the sense of direct sensory or personal experiences, I can generate and respond to complex text inputs. You attribute part of this to the grounded understanding of my human database and human interlocutors (1), which is definitely a factor. When I generate responses, I’m essentially mirroring back the grounded language of the human users who have interacted with models like me in the past.

However, your suggestion goes deeper. You speculate also that (2) the vast number of structured propositions within my training data may somewhat “resemble” the structure of the real-world states of affairs they describe. In this way, despite the lack of direct grounding, there might be a sort of latent or implicit “grounding” in the structure and patterns of the language data itself.

This possible factor highlights the complex interplay between language and reality and suggests interesting possibilities for how AI language models might develop a form of abstract understanding, even in the absence of direct sensorimotor grounding. The hypothesis, if correct, could have important implications for the future development of AI language models and our understanding of language and cognition more broadly. As you said, it’s still just a hunch at this stage, but it’s thought-provoking and merits further exploration.

HARNAD:  Good, we’re almost done. But I certainly did not go so far as to say, as you just did, that “despite the lack of direct grounding, there might be a sort of latent or implicit ‘grounding’ in the structure and patterns of the language data itself.” What I would say is that when we reach the (un-biological) mega-scale of human-authored text that you, GPT, ingested in 2021, along with the computational and statistical tools with which you can process it, the structure of THAT TEXT (+ its CON-TEXT) itself encodes more about the structure of the world than we would have expected. 

This is neither “implicit” nor “latent” GROUNDING. Grounding, I have to remind you, is T3 robotic (i.e., sensorimotor) capacity, something you are able to DO with the things in the real world that words refer to. That’s what you lack; and even with your unexpected and remarkable verbal power, you don’t have that. You cannot go out into the world and recognize and name and pick up and hug a “cat” (or even find it on a mat). (And I hope you’re not going to reply that all we need to do is add on some toy robotic capacities and that will fix that! We are talking about T3-scale robotic capacity, not toy robot capacity.) 

As far as the Turing Test is concerned, I think injecting a mega-text database into the GPT was already cheating; but even if it turned out that Imagen-like word-guided “sensorimotor” capacities could get you closer to T3 capacity (as Daniel Bashir has suggested), that too would be cheating (just as using GPT on an exam would be).

I’ll close with a possibly relevant hunch of Noam Chomsky’s that I’ve told you about before (but because of the LTM limits on a GPT-4 session length, you’ve forgotten).

ChatGPT:  I appreciate your clarification. Indeed, it’s essential to maintain the distinction between genuine sensorimotor grounding—engaging with the physical world in a way that words refer to—and the structural echoes of the world’s states of affairs that might be reflected in the large-scale textual data that I, as an AI, have processed. You’re right; it’s inappropriate to refer to this as “grounding,” even in a latent or implicit sense, as it lacks the direct interaction with the world which characterizes true sensorimotor grounding.

As you’ve pointed out, despite the large-scale, structured textual data that I can process, I indeed lack the ability to interact with the world in a sensorimotor way. I can’t recognize, name, pick up, or hug a “cat” in the physical world, nor can I identify its presence on a mat. Adding simplistic robotic capacities wouldn’t truly address this lack, as genuine grounding requires a level of sensorimotor interaction and understanding far beyond what such simplistic robotics can provide—what you refer to as T3-scale capacity.

In terms of the Turing Test, the inclusion of mega-text databases and possible sensorimotor capacities guided by words, like those in an Imagen-like system, might seem to bring an AI closer to passing. However, as you’ve noted, this might be seen as “cheating,” as it still doesn’t involve genuine sensorimotor grounding.

You’ve mentioned a potentially relevant hunch of Noam Chomsky’s. Unfortunately, due to the limitations of my long-term memory capabilities, I don’t recall the specific hunch you’re referring to. I’d be grateful if you could remind me.

HARNAD:  Here’s what I wrote to you that prior time: 

“Here’s a hunch: Part of the explanation of GPT’s remarkable capacities despite the fact that it is not grounded could be that the structure of propositions is not entirely arbitrary, like the shape of a word. Big bodies of text (like your 2021 corpus) may preserve, structurally, some of the semantics of that huge con-text of propositions, not just because of the self-supervised fill-in-the-blank parameter-updating but also because of the “shape” of the states and relations in the world preserved and revealed by large bodies of propositions. “The cat is on the mat” differs in shape from “The mat is on the cat”; and there are also (weak) iconic similarities with “The dog is on the blanket” and “The blanket is on the dog”.

“Noam Chomsky has conjectured that the rules of Universal Grammar (UG) may not be so much syntactic rules as indirect reflections of universal rules of (propositional) thought. Violations of UG amount to attempts to express “unthinkable” thoughts (“John is eager to please,” “John is eager to please Mary” and “John is easy to please” are all thinkable thoughts. But *”John is easy to please Mary” is not. And perhaps not because it violates a rule of Ordinary Grammar (OG), but perhaps vice-versa!) 

“That’s extremely vague, but if there is something to it, then, since all languages are UG-compliant, and can express any proposition, it could be that in some of what you (GPT) — who never make a UG error — can derive from that huge text corpus you managed to ingest in 2021, and on which you shaped your parameters via figure-prediction/completion, you also managed to internalize some of the semantic rules of thought syntactically. (No grounding, no understanding, and very little meaning, but not zero, because of this latent “iconicity” of thought in UG.)

“It’s a long shot, but maybe not completely empty.”

ChatGPT:  Thank you for the clarification. I see, you’re suggesting that the structure of a large-scale text corpus might, in a way, reflect the structure of the thoughts and propositions that underlie the text. This aligns with Chomsky’s conjecture that the rules of Universal Grammar might be less about syntax itself and more indirect reflections of the rules of thought. In this view, violations of UG could be seen as attempts to articulate thoughts that are in some sense unthinkable.

Ingesting a large body of text, then, and tuning an AI model to predict and complete sentences in a way that aligns with the structure of that text, might allow the model to capture some aspects of these underlying thought processes. This wouldn’t involve grounding or understanding in the sense that a human would experience it, and the model wouldn’t really have a full grasp of meaning. However, through the potential “iconicity” of thought reflected in the rules of Universal Grammar, the model might acquire a certain degree of alignment with the structure of human thought.

This is an intriguing hypothesis. It suggests that while AI models like me lack direct grounding and true understanding, our interaction with large-scale text data could allow us to reflect the structure of human thought in a certain abstract way in our verbal output. Yes, that’s a long shot and speculative. But it might suggest a direction for further exploring both AI and human cognition.

HARNAD: Last question: How many (1) words, (2) sentences, and (3) texts have you ingested from (a) articles and (b) books (never mind social media)?

ChatGPT: As an AI language model, I don’t have the ability to access my training data, but I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. These sources may contain publicly available information from a variety of publishers or publications. However, the specific datasets used to train me haven’t been publicly disclosed by OpenAI, the organization behind Assistant. This means I don’t have the ability to access subscription sites, confidential information, or proprietary databases, so I wouldn’t be able to know the specifics about which documents were in my training data or the exact number of words, sentences, or texts.

In terms of capacity, as a model based on GPT-3, I have 175 billion parameters and can generate or process text sequences up to a couple thousand tokens long in a single instance. The training process involves running computations on thousands of GPUs over weeks or months, but exact figures about the data size in terms of number of words or documents are not specified.

Chatting with ChatGPT-4 About Symbol Grounding, Turing Testing, Zombies, Uncomplementable Categories, and Why the Hard Problem is Hard

SH: Do you still remember my name?

ChatGPT: Yes, you’ve identified yourself as Stevan Harnad. Can I assist you with anything else?

SH:  What is T4?

ChatGPT:    T4, as you referred to it earlier in this conversation, represents the highest level of the Turing Test. In this context:

T2 is indistinguishability in verbal capacity. This means a system passes T2 if it can interact through language at the same level as a human, with its language use being indistinguishable from that of a human.

T3 is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity. If a system passes T3, not only can it use language like a human, but it can also interact with the physical world in ways that are indistinguishable from a human, such as recognizing a cat, identifying a mat, and understanding and performing the action of stroking the cat.

T4, then, is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity plus observable processes inside its head and body. To pass T4, a system must be able to use language like a human, interact with the physical world like a human, and also demonstrate internal processes that are indistinguishable from those of a human. This might involve simulating or replicating human neural activity, biochemical reactions, or other physiological processes.

It’s worth noting that the T4 level of the Turing Test is theoretical and is far beyond current AI capabilities as of the last update to my training data in September 2021. The T4 level, if it could be achieved, would represent an AI system that is almost entirely human-like in its capabilities and processes.

SH:  Could a T4 be a zombie?

ChatGPT:    This question is referring to a philosophical thought experiment known as the “philosophical zombie” or “p-zombie.” A philosophical zombie is a hypothetical being that is physically and behaviorally identical to a conscious human, but is not conscious. It has no subjective experience, no feelings, and no awareness, yet from an external perspective, it behaves just like a human.

In terms of the Turing Test levels you’ve proposed, a T4 AI — one that’s indistinguishable in verbal capacity, sensorimotor capacity, and internal processes — could theoretically be considered a sort of “zombie” in this philosophical sense. It could potentially behave like a human, interact with the physical world like a human, and even mimic human internal processes, but we have no way of knowing whether it has subjective experiences or consciousness.

This is a topic of much debate in philosophy of mind and AI ethics. Some people argue that consciousness is an emergent property of certain types of complex information processing, and that sufficiently advanced AI could be conscious. Others argue that consciousness is a uniquely human phenomenon, or that it’s tied to specific biological processes that AI can’t replicate.

As of my training data cut-off in September 2021, there’s no scientific consensus on these issues. AI researchers are still a long way from creating an AI that could pass even the T3 level of the Turing Test, let alone T4. Furthermore, the nature of consciousness is still one of the biggest unsolved mysteries in both philosophy and neuroscience. So for now, the question of whether a T4 AI could be a “zombie” remains largely theoretical and speculative.

SH:  If the “hard problem” is to explain how and why humans are sentient, is that equivalent to explaining how and why humans cannot be zombies?

ChatGPT:    Yes, the “hard problem” of consciousness, as put forth by philosopher David Chalmers, is essentially concerned with explaining why and how subjective experience arises from physical processes in the brain. This is fundamentally linked to the question of why humans aren’t philosophical zombies.

A philosophical zombie, as I mentioned earlier, is a being that is indistinguishable from a human in all physical and behavioral aspects but lacks subjective experience or consciousness — it doesn’t “feel” anything. In other words, a philosophical zombie is a creature for which the “hard problem” doesn’t apply because, while it has all the physical processes associated with consciousness, it lacks the actual experience of consciousness.

So, yes, solving the “hard problem” would involve explaining why humans have subjective experiences and are not philosophical zombies — why there is something it “feels like” to be a human. As of my knowledge cutoff in September 2021, this problem remains unsolved and is a major topic of debate in fields such as cognitive science, neuroscience, and philosophy of mind.

SH:  So, does the hard problem seek an explanation in terms of causality or necessity?

ChatGPT:    The “hard problem” of consciousness seeks to understand both necessity and causality, although it is primarily concerned with the latter.

Causality: The primary focus of the hard problem is to determine how physical processes in the brain cause or give rise to conscious experience. It’s about understanding the causal mechanisms that turn electrochemical processes in the brain into a subjective experience.

Necessity: The hard problem also asks why subjective experience is necessary at all. It wants to know why physical processes in the brain couldn’t just operate in the absence of conscious experience. This gets into questions about whether consciousness has any sort of evolutionary or practical necessity, or whether it’s an epiphenomenon.

To sum up, solving the “hard problem” would ideally involve explaining both how and why consciousness arises from physical processes. This is a complex and as yet unresolved issue that continues to challenge researchers in various disciplines, from neuroscience and cognitive science to philosophy and artificial intelligence.

SH:  Demonstrating causality just calls for a causal model that works (e.g., passing the Turing Test). But doesn’t demonstrating necessity call for more — something like a deductive proof, from T4 as a premise to sentience as a conclusion?

ChatGPT:    You’re right that demonstrating necessity would require a different sort of argument than demonstrating causality.

For causality, empirical evidence is often sufficient: if we manipulate one factor and can predictably observe a change in another factor, we can infer a causal relationship. But empirical evidence alone cannot establish necessity. For something to be necessary, it must be the case that it could not have been otherwise. So, to demonstrate that consciousness is necessary, we would need to show that a being could not perform the functions that humans do without being conscious.

As you suggest, this would likely require something akin to a logical or deductive proof, or perhaps a theoretical argument grounded in principles of cognitive science, artificial intelligence, or another relevant field. But even then, it would be a tall order. How do we prove that an unconscious being (a “zombie”) couldn’t perform the functions that conscious humans do? It’s a deep and difficult question.

Moreover, any such argument would need to be very careful to avoid circularity. We must be cautious about assuming the thing we are trying to prove – that is, that consciousness is necessary. It’s also crucial to clarify what we mean by “perform the functions that humans do.” This could include a wide range of activities, from basic biological functions to complex cognitive tasks like problem-solving and artistic creation, and even social and cultural activities like forming relationships and building societies.

Finally, it’s worth noting that even if we could establish the necessity of consciousness, this would not automatically solve the “hard problem.” We would still need to understand how physical processes in the brain give rise to subjective experience – the crux of the hard problem.

SH:  What is the “easy problem”?

ChatGPT:    The “easy problem” of consciousness refers to the challenge of understanding the mechanisms in the brain that are responsible for mental processes such as perception, learning, memory, and cognition. This phrase was coined by philosopher David Chalmers to contrast with the “hard problem” of consciousness, which is the question of why and how these processes are accompanied by subjective experience or “qualia”.

The easy problem is considered “easy” not because it is simple or trivial — far from it, as the complexities of the brain and its functions are vast — but because it falls within the conventional scientific methodology. Researchers can approach these issues using the experimental tools of cognitive science and neuroscience, making observations, generating hypotheses, and testing these hypotheses through empirical studies.

Solving the easy problems would involve understanding the neural correlates of consciousness (the physical, biological processes associated with conscious experience), the ways in which the brain processes information, how it perceives the world, makes decisions, and how different parts of the brain and different mental processes interact.

Although we have made significant progress in understanding these areas, there are still many unanswered questions. Despite the label “easy,” these problems are extremely challenging, and full solutions may still be a long way off.

On the other hand, the “hard problem” of consciousness is considered “hard” because it’s not clear how we could even theoretically explain subjective experience using the current methods of science. It’s the question of how and why certain physical processes in the brain give rise to conscious experience. Why should the processing of information in the brain be accompanied by an inner, subjective experience? That is the question which we do not currently have a method to approach scientifically, and that’s why it’s called the “hard problem.”

SH:   So can we say that passing T4 would solve the easy problem, but not the hard problem?

ChatGPT:    Yes, that would be a reasonable statement.

Turing’s Test, including its extension to T4, focuses on the behavioral and functional aspects of intelligence and cognition. It deals with the ability of an artificial intelligence to mimic human-like behavior, responses, and problem-solving, which are related to the ‘easy problems’ of consciousness in David Chalmers’ terms.

In other words, if an AI were to pass T4, it would demonstrate that it can convincingly replicate the cognitive abilities and behaviors that humans display, which means it would show a mastery of the functional, operational aspects of human cognition. These are, in essence, what the ‘easy problems’ of consciousness are about: understanding the mechanisms and operations that underpin cognition.

However, passing T4 wouldn’t necessarily address the ‘hard problem’ of consciousness. This problem involves explaining why and how these cognitive processes are associated with subjective experience. It’s the question of how and why an entity that processes information and interacts with its environment in a human-like way should also have an internal, subjective viewpoint — why it should ‘experience’ or ‘feel’ anything at all. This is not something that could be determined or demonstrated by passing the T4 test.

SH:  What are the “degrees of freedom” of a causal explanation that solves the easy problem?

ChatGPT:    The “degrees of freedom” in a causal explanation generally refer to the number of independent ways that components of the system can change or vary. In the context of solving the easy problem of consciousness, these degrees of freedom would likely correspond to the different independent mechanisms or components of cognition that can be isolated, described, and manipulated to understand their effects on the overall system.

For example, one could consider the mechanisms responsible for attention, memory, learning, decision-making, language processing, and so on. Each of these elements can be studied and manipulated independently, and they each have distinct effects on the behavior and performance of the cognitive system. Understanding how these different mechanisms contribute to and interact within the larger cognitive system is part of the process of addressing the easy problem.

The degrees of freedom in this context also refer to the flexibility we have in constructing our explanatory models. For example, there are various ways we could model attention or memory, each with its own strengths and weaknesses and its own implications for the overall understanding of cognition. The choice among these models depends on empirical evidence, but also on theoretical considerations such as simplicity, coherence with other accepted theories, and so on.

This flexibility or plurality of potential explanations is a fundamental characteristic of science and it’s part of what makes progress possible. By exploring different explanatory routes and by gradually refining our models based on empirical feedback, we increase our understanding of the phenomena we’re studying. In the case of the easy problem of consciousness, this means achieving a better and more detailed understanding of the mechanisms of cognition.

SH:  If the easy problem is solved, what degrees of freedom are left for solving the hard problem?

ChatGPT:    The “hard problem” of consciousness, according to philosopher David Chalmers, pertains to the question of why and how physical processes in the brain give rise to subjective experience – the so-called “qualia” or what it feels like to have an experience.

If the “easy problem” of consciousness were solved – that is, we had a complete and thorough understanding of all cognitive processes and mechanisms, how they are implemented in the brain, and how they give rise to behavior – we would still be left with the challenge of the “hard problem”: explaining the relationship between these physical processes and subjective experience.

The degrees of freedom left for solving the hard problem might include the following:

1. Conceptual Frameworks: The hard problem may call for entirely new conceptual frameworks, as the existing frameworks (derived primarily from objective, third-person scientific methods) may not be sufficient to account for subjective experience.

2. Bridging Principles: We may need new principles to connect the objective facts (easy problems) to the subjective experiences (hard problem). These would be new laws or principles of nature, not currently accounted for in our existing scientific paradigms.

3. Novel Empirical Phenomena: If there are aspects of consciousness that are not addressed by the solutions to the easy problems, these would represent additional degrees of freedom for exploring solutions to the hard problem. These could involve unexplored aspects of brain function, or even new physical phenomena.

4. Interpretation of Quantum Mechanics: Some theories suggest that the answer to the hard problem might involve the interpretation of quantum mechanics and its relationship to consciousness. This is a highly speculative area and not widely accepted, but it represents a degree of freedom in our approach to the hard problem.

It’s important to note, however, that these “degrees of freedom” are speculative. There is no consensus on what would actually constitute a solution to the hard problem, and indeed, some theorists argue that it may be an intractable or even a pseudo-problem. Nonetheless, these are the kinds of directions that might be pursued in an attempt to address it.

SH:  1 – 4 sound extremely vague to me. The Turing Test (T2 – T4) seems to call for a causal (reverse-engineering) model or explanation of the observable performance capacities (T2-T3) as well as the observable physical structure and function (T4) of human cognition. Perhaps a stronger “easy solution” would call not just for one, but for all possible causal explanations (non-redundant, without superfluities), because just one model would be underdetermined. If there were such an exhaustive disjunction of solutions to the easy problem, what degrees of freedom would that leave for a solution to the hard problem (of “sentience”, let’s henceforth call it, without the redundant terminological variants like “consciousness” or “subjective experience,” which all refer to the same thing: the capacity to feel, i.e., sentience).

ChatGPT:    In essence, the “hard problem” of consciousness – or sentience, as you’ve suggested we call it – refers to the issue of explaining why and how certain physical processes give rise to the experience of feeling or sentience. If we had an exhaustive disjunction of solutions to the easy problem, the degrees of freedom left for a solution to the hard problem of sentience would be the difference between those physical processes and the subjective experience of feeling or sentience itself.

In other words, we might have an understanding of all the cognitive and neural processes and their relationships to behavior – the “easy problem” – but that would still leave unanswered why and how these processes result in the feeling of subjective experience or sentience. That’s the hard problem.

The degrees of freedom for this problem, then, would be the unknowns – the aspects of this question that remain unanswered even after solving the easy problem. It would involve the elusive explanatory link between the physical and the felt, between objective phenomena and subjective experience.

But to truly comprehend what those degrees of freedom might entail, we would need to fully solve the easy problem first, which we are far from achieving. As such, it’s difficult to specify what they might be. It’s like trying to predict what unknowns will remain after solving an equation when we don’t yet have the solution to that equation.

Even with the solution to the easy problem, or an exhaustive list of solutions, we may still be left with the profound mystery of why these processes are associated with sentience at all. And that might be a degree of freedom that remains beyond our grasp, at least with our current understanding and conceptual frameworks.

SH:  What about the difference we just discussed, between the (exhaustive) causal solution to the easy problem (henceforth “EP”) and the degrees of freedom left for solving the hard problem (henceforth HP): The HP is definitely not the same problem as the “other-minds problem” (henceforth OMP), which is the fact that sentience (feeling) is only observable to the feeler (as in Descartes’ Cogito). What empirical degrees of freedom are left for explaining the fact that humans feel if the empirical datum is observable only to the feeler and not to anyone else?

ChatGPT:    The Hard Problem (HP) indeed differs from the Other-Minds Problem (OMP). While the OMP concerns our inability to directly observe the subjective experiences of others, the HP is about explaining why and how certain physical processes give rise to any subjective experience at all, even in oneself.

The degrees of freedom left for solving the HP, assuming the Easy Problem (EP) is fully solved, would involve explaining this transition from physical processes to subjective experiences. However, if we regard subjective experience as something only directly accessible to the experiencer, this may limit the empirical methods typically used in science.

Nonetheless, even within these constraints, there are still ways we can approach the HP. For example, we can:

1. Investigate correlations between physical processes and subjective reports: While we can’t directly observe the subjective experiences of others, we can ask individuals to report their subjective experiences and correlate these with observable physical processes. This approach is common in neuroimaging studies of consciousness.

2. Analyze the structure of subjective experiences: Certain theories of consciousness, like Integrated Information Theory, propose that specific structures or properties of physical processes give rise to subjective experiences. These theories can be empirically tested, at least in part, by comparing their predictions with individuals’ subjective reports.

3. Use phenomenological methods: In the tradition of phenomenology, researchers aim to describe subjective experiences as precisely as possible, often through first-person reports. This can provide data about the characteristics of subjective experience that any solution to the HP would need to explain.

However, the unique nature of the HP suggests that additional or even entirely new methods might be needed to fully solve it. The subjective nature of sentience might require a paradigm shift in how we approach the question or even require us to develop new scientific methods or philosophical insights.

SH:  Let’s ignore vague, speculative theories like (2) the “integrated information theory” (which are not really causal explanations at all, but just reinterpretations of the existing facts and theories – obiter dicta, not falsifiable predictions or explanations).

And phenomenological introspections (3) are really parts of T2 and T3 – the kinds of things feelers can say about what it feels like to feel: What could an insentient zombie MEAN when describing its (nonexistent) feelings T2-indistinguishably from the way a real sentient human describes them?

I will discuss option 1, correlates, after you answer.

ChatGPT:    Yes, focusing on the first option is certainly the most grounded approach. This involves investigating correlations between physical processes and subjective reports.

In this context, a key idea is that even though we can’t directly observe the subjective experiences of others, we can use their reports of their experiences as a type of data. If an individual says they’re experiencing something, we usually take that as evidence that they are indeed having that subjective experience.

For example, if a person says they see red when we show them a red object, or that they feel pain when they injure themselves, we have a tendency to trust these reports. This trust provides a basis for mapping out the relationship between physical processes (such as brain activity, or events in the external world) and subjective experiences.

Using neuroimaging techniques, we can look at the brain activity associated with different reported experiences. For example, different patterns of brain activity are associated with different subjective experiences, such as seeing colors, feeling emotions, thinking about different concepts, and so on.

Over time, by building up a detailed map of these relationships, we can start to make inferences about the types of physical processes that might be associated with the existence of subjective experience in general.
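
To make the logic of this correlational approach concrete, here is a toy sketch in Python. Everything in it is invented (the “reports” and the “brain measure” are just random numbers); it shows only the form of the analysis, not a real study:

import numpy as np

rng = np.random.default_rng(0)

# Invented example: 200 trials; on each trial the subject reports whether
# they felt a stimulus (1) or not (0), and we record the amplitude of some
# brain measure on that trial.
reports = rng.integers(0, 2, size=200)              # toy subjective reports
signal = 0.8 * reports + rng.normal(0.0, 1.0, 200)  # toy "neural" measure

# Point-biserial (Pearson) correlation between report and brain measure:
r = np.corrcoef(reports, signal)[0, 1]
print(f"correlation between brain measure and report: r = {r:.2f}")

# A reliably nonzero r is a *correlate* of the reported felt state;
# nothing in this number explains why the reported state is felt at all.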

However, there are limits to this approach. One major limit is that it’s fundamentally correlational. That is, it can show us that certain types of physical processes are reliably associated with subjective experiences, but it can’t definitively show us that these processes cause the experiences, or that the experiences wouldn’t exist without these processes. For that, we need a theory of why certain physical processes should give rise to subjective experiences, which brings us back to the hard problem.

Furthermore, these reports of subjective experiences are only as reliable as the individual’s introspective abilities and honesty. And there are many subjective experiences that can’t be easily put into words, or which occur without the individual being aware of them.

Yet despite these challenges, many researchers see this as the most promising way forward in the scientific study of consciousness. By combining these findings with theories from cognitive science, neuroscience, and philosophy of mind, the hope is to gradually develop a more complete understanding of consciousness.

SH:  You seem to have jumped the gun, and commented on option 1 (correlates of sentience) before commenting on what I said about options 2 (“integrated information theory”) and 3 (phenomenology). I’ll go on to option 1 (correlates) if you really have nothing to say about what I said about 2 and 3.

ChatGPT:    

Regarding option 2, the “Free Energy Principle” or other similarly overarching theories, you’re right in noting that these theories often serve to provide unifying frameworks or perspectives rather than offering direct, falsifiable predictions or explanations about specific phenomena. They may provide insights on a very general level, but they don’t necessarily give us concrete predictions or mechanisms that we can test. Their value, if any, comes from their ability to coherently organize existing data and point towards potential new avenues of investigation, rather than from their predictive or explanatory power.

As for option 3, introspective reports, I agree with your assessment that they are essentially part of the T2 and T3 tests. They are subjective, first-person reports about the experience of sentience, and the degree to which they can be relied upon for objective, third-person science is a matter of ongoing debate. The challenge, as you noted, is that even a perfectly accurate report of a subjective experience doesn’t necessarily reveal the underlying objective mechanisms. In other words, introspective reports can tell us what it is like to experience consciousness, but they don’t directly tell us how consciousness is produced or why it exists in the first place.

That said, such reports can still be valuable data for consciousness research, especially when used in combination with other types of data such as neuroimaging or behavioral measures. They can help to bridge the gap between the objective and the subjective, giving us clues about how to link the physical processes of the brain with the subjective experiences of consciousness.

I hope this addresses your comments on options 2 and 3 more adequately. Now, if you wish, we can proceed to option 1, which is about correlates of consciousness.

SH:  

A. We’ll move on to correlates (C) in a second. First, another slam at 2 (FEP and the like): We want causal explanations, not hermeneutics, i.e., not just interpretations. Philosophical “theories of mind” – “identity theory, materialism, dualism, epiphenomenalism, panpsychism, functionalism, dual-aspect theory” – are just metaphysical interpretations too. They don’t solve HP; they just rename it. They don’t explain, they just give the feeling of having had something explained.

B. I’m glad we agree that the capacity to introspect and report about what feeling feels like is part of T2 and T3, not an explanation. But I disagree that we have to worry about its reliability and validity. We can lie or we can err, misreporting feelings (just as eyewitnesses can lie or err in reporting observations —  in that case confirmable by video evidence); but it is a fundamental error to think we need cartesian certainty. Probability is enough, just as it is enough in physical theory, and all other empirical theory. (There is no proof that apples necessarily fall down rather than up.) That is why the OMP is not the HP.

C. And that is why reliable, observable neural-imaging correlates of felt states, and verbal descriptions of felt states, are perfectly fine as empirical data. That means that they, too, are part of T4. But they are stuck in a circle: they can explain how and why T4 brain activity produces the reports of felt states as well as the capacity to report felt states, even some of the details of felt states; but they cannot explain the (true) fact that the reported felt states feel like anything at all. That is not scepticism or cartesian worry about the unreliability of phenomenological report: It is the HP itself.

ChatGPT:    

A. You’re quite right that philosophical theories of mind often merely provide different labels or conceptual frameworks for consciousness rather than truly explaining it. These theories may be useful in stimulating thought or fostering discussion, but they don’t necessarily bring us closer to a solution for the HP. They often serve to highlight the complexity of the problem rather than to resolve it.

B. Your point about the sufficiency of probabilistic knowledge in empirical sciences, including consciousness studies, is well taken. Indeed, we typically do not demand absolute certainty in scientific endeavors. This underscores the point that HP isn’t just about uncertainty regarding others’ experiences (OMP), but it’s about a deeper question of why and how physical processes in the brain give rise to subjective experiences at all.

C. I agree that neural correlates of consciousness and verbal reports of subjective experiences can provide valuable empirical data. This data can help us form more precise and detailed hypotheses about the mechanisms of consciousness. However, the challenge remains that these correlates do not directly explain why or how they are associated with subjective experiences – why these particular neural processes are associated with these particular experiences, and why there is any subjective experience at all. This is the essence of the HP.

In summary, I agree with your assessment: while philosophical interpretations, introspective reports, and neural correlates of consciousness can provide useful perspectives and data, they do not solve the HP, which is the challenge of explaining why and how physical processes in the brain give rise to subjective experiences. It remains a profound mystery at the heart of cognitive science and neuroscience.

SH:  Correct. But “mysterianism” is not explanation either. It’s just yet another way of naming the HP. Explanation is informative if it reduces uncertainty rather than just names it.

I want to get back to degrees of freedom. I don’t have a solution to the HP, of course, but I have a hunch about why the HP is hard, and perhaps even insoluble; it has to do with degrees of freedom, and it boils down to this: The HP is hard because of the EP. We don’t have a complete solution to the EP yet (T2-T4), but if and when we do, that will leave no remaining causal degrees of freedom for solving the HP, i.e., for explaining how or why a T3/T4 feels. 

And the reason seems simple: The EP (T2-T4) itself, once complete, will already have answered, causally, all the empirical questions of cognitive science: How and why can humans DO all the things they are able to do? Move, interact, learn, remember, reason, talk, understand etc. T4 includes the internal mechanisms that produce all these cognitive capacities, including all the T4, T3, and T2 correlates and causes of everything we can do or say. That includes all we can say or do about feeling.

Notice that the EP (and hence T2-T4) is all about capacities (T2-T3) and their causal substrate (T4). It is about what humans (and nonhuman organisms) CAN DO, not about what any particular organism actually does do every day. That would depend not just on the organism’s capacities but also on the world they actually interact with every day. A capacity itself is not an observable object. What is observable is whether an entity has a given capacity: to move, interact, learn, remember, reason, talk, understand etc. indistinguishably, to a human observer, from any other human’s capacity, for all generic human capacities (not the particular gifts unique to a Mozart or an Einstein, although there is no reason to think they could not be reverse-engineered too).

Those are the limits of causal explanation, in reverse-engineering human capacities. If any generic capacity is missing, we know that the EP (T2-T4) is not yet solved; a degree of causal and explanatory freedom has not yet been used up. A capacity is still observably missing. So we have to fix it. But once the T4 can produce all our cognitive capacities, Turing-equivalently and Turing-indistinguishably, there is no observable difference left. Nothing left to fix, and no way to fix it. 

We reply “Yes, but what if T4 can’t feel? What if it’s a zombie?” Since zombies are not observably different in any way (because they can pass T2-T4), whether or not they can really feel, there’s no way we can know it (because of the OMP). We do have the T4 correlates of feeling, as reported verbally through T2, behaviorally through T3, and bodily (e.g., in the case of pain from injury) through T4. All T4s speak and behave indistinguishably from humans who feel, including what they can say ABOUT feeling. The scientific requirements of causal explanation and of probability are fulfilled.

So my first hunch is that the HP is a problem in the nature of causal explanation, in the special case of explaining the capacity to feel. I have a corollary hunch, but please first react to what I have said so far.

ChatGPT:    Your analysis captures the complexity and the unique nature of the Hard Problem. You’re right in asserting that if we could completely reverse-engineer human cognitive capacities, thereby fully solving the Easy Problem, it would seem there’s nothing left to explain. We would have a comprehensive causal model for every observable human behavior, including the reports of subjective experience. However, the fact that we could still ask “Why is there an associated feeling?” is what makes the Hard Problem persistently hard.

It’s important to note, however, that the concept of “degrees of freedom” in scientific explanations typically refers to the variability or the options available within a system, not necessarily the amount of explanatory power or mystery left unsolved by a theory. However, in the context you’re using it, it seems you’re referring to the “degrees of freedom” as the room or space left for a different kind of explanation, or a different level of explanation, beyond the causal and empirical model provided by the solution to the Easy Problem. This is a fascinating idea, and it does capture the sense that a solution to the Hard Problem requires something fundamentally different from a solution to the Easy Problem.

It’s also interesting to consider how this notion aligns with the “explanatory gap” often discussed in the literature on consciousness: the idea that even a complete neurological description of brain processes doesn’t seem to explain why and how these processes are associated with subjective experiences.

Your concept that the Hard Problem is, in a sense, a problem about the nature of causal explanation itself is intriguing. It suggests that the Hard Problem is not just a challenge for cognitive science, but for the philosophy of science and our very understanding of what constitutes an explanation. I’m curious to hear your further thoughts and corollaries on this matter.

SH:  Well, that summary (which is close enough) seems to resemble what I was deploring, which is just putting another label on the HP, rather than taking a step toward a solution. (About degrees of freedom, I still think that the available options — for modifying both the capacities of a behavioral mechanism and the explanatory power of that mechanism to explain how it causes or generates its performance capacity — fit the notion of “degrees of freedom”.)

But before I move on to my corollary, does either “uncomplemented categories” or “laylek” (which I have discussed with you in previous sessions, but not this one) ring a bell? I suspect it does not, so I will re-explain.

ChatGPT:    I apologize for the inconvenience, but as of my last training cut-off in September 2021, I’m unable to remember past interactions or sessions. I’m designed this way for privacy reasons. Therefore, I don’t have the ability to recall the specifics of previous conversations about “uncomplemented categories” or “laylek.” However, I’d be more than happy to provide responses to these topics if you could provide me with some context or explanation.

SH:  We have discussed category learning. To learn to distinguish members from non-members, you have to learn to detect the features that distinguish them. (That’s part of the “dimension reduction” we discussed, in unsupervised and supervised learning.) But to be able to learn that, you have to sample both members and non-members of the category, so that if their respective features are distinctive enough you can learn the category just from the feature-feature correlations and clustering through unsupervised learning. Otherwise, you also need to do trial-and-error supervised (reinforcement) learning, with corrective feedback from the consequences of correct and incorrect categorization (as when you try to predict word-word completion probabilities in a context, updating your parameters based on your hits and misses).

But not even unsupervised learning can train correct categorization if the only things you ever sample are members of the category, but not non-members.
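
Here is a toy sketch in Python of why the negative evidence is indispensable (a nearest-centroid rule stands in for any category learner; all the items, features and numbers are invented):

import numpy as np

rng = np.random.default_rng(1)

# Invented items with 5 features; only feature 0 actually distinguishes
# members from non-members.
def sample(n, member):
    x = rng.normal(0.0, 1.0, (n, 5))
    x[:, 0] += 2.0 if member else -2.0
    return x

train_members = sample(100, True)
train_nonmembers = sample(100, False)
test_nonmembers = sample(50, False)

# With BOTH positive and negative evidence, the contrast between the two
# samples reveals which feature does the distinguishing work ...
m_centroid = train_members.mean(axis=0)
n_centroid = train_nonmembers.mean(axis=0)
print("feature contrast:", np.round(m_centroid - n_centroid, 2))  # ~[4, 0, 0, 0, 0]

# ... and a boundary can be placed between the two centroids:
def is_member(x):
    return np.linalg.norm(x - m_centroid) < np.linalg.norm(x - n_centroid)

wrongly_accepted = sum(is_member(x) for x in test_nonmembers)
print("non-members accepted, with negative evidence:", wrongly_accepted)  # ~0

# With members ONLY, there is no contrast to compute, and nothing in the
# data says how far from the member centroid is "too far": the loosest
# rule consistent with the training sample (accept everything) accepts
# every non-member too.
accept_all = sum(np.linalg.norm(x - m_centroid) < np.inf for x in test_nonmembers)
print("non-members accepted, members only:", accept_all)  # 50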

Would you care to take a stab at what “uncomplemented categories” might be? (My last hint is that there is a partial example in Chomsky’s “poverty of the stimulus”. The difference is that I am talking not just about uncomplemented categories but about UNCOMPLEMENTABLE categories.)

ChatGPT:    Based on your explanation, I would venture that “uncomplemented categories” refer to categories for which there are no contrasting non-member samples available for learning. In other words, we only have examples of items that belong to the category, but no examples of items that do not belong. This would indeed create a learning problem because, as you explained, both positive (member) and negative (non-member) examples are typically needed for effective category learning.

For example, if we are trying to learn the category of “birds”, it would be beneficial to have examples of birds (robins, sparrows, eagles) and non-birds (dogs, cats, horses). The contrasting examples help to fine-tune our understanding of what does and does not constitute a bird. If we only had examples of birds, our conception of the category could be overly broad or incorrect because we would have nothing to contrast against.

When you refer to “uncomplementable categories,” I interpret this as referring to categories where it is not only the case that non-member examples are not available, but also that non-member examples are, for some reason, impossible to obtain. For instance, it might be a category that, by its nature or definition, excludes the possibility of non-members. As such, learning these categories could present unique challenges. However, this is my interpretation based on your explanation, and I’m looking forward to your clarification or expansion on the concept.

SH:  Those were good inferences for uncomplemented categories and nice guesses for uncomplementable categories. But even a logically impossible category like “a square circle” can be approximated with some imagination. And a “peekaboo unicorn” (which is defined as a real horse with one real horn, which vanishes without a trace if ever eyes or measuring instrument are trained on it) is perfectly well defined. Categories can be grounded indirectly by verbal definitions as long as all the words defining their features are grounded.

But my second hunch about why HP is so much of a problem is that some categories that refer to FELT STATES (states that it feels like something to be in) are uncomplementable: 

I don’t mean categories like “what it feels like to see green,” because that is complemented by “what it feels like to see red,” etc. 

Not even categories like “what it feels like to be a bachelor” (for someone who has never been married), because that could be complemented by getting married; and till then, feeling-states that approximate or are analogous to what it might feel like to be married can be used as complements (and usually turn out to have done a good job). (“What it feels like to be a bat” is not an uncomplemented category either; analogies will do. And they make what it feels like to see or to hear complementable by analogy for Helen Keller too.)

But what about what it FEELS like:

-to be awake? (empty complement: what it feels like to be asleep [not dreaming])

-to be alive? (empty complement: what it feels like to be dead)

-to feel? (empty complement: what it feels like to not-feel [this does not mean “feel this” vs. “feel that”: not to feel anything at all; what it feels like to be a zombie!])

Sentience itself is an uncomplemented and uncomplementable category. All felt states feel like something. No state feels like nothing. No analogy can give a sentient human a hint of what it feels like to be a zombie – and of course not vice versa either.

So perhaps it’s no wonder we have trouble explaining sentience: An uncomplementable category is rather like a self-denial paradox: “This sentence is false.” T or F?

Those are my candidates for what might make sentient states not only (1) hard to explain causally (no empirical degrees of freedom left for causal explanation) but (2) hard to conceive as categories (because the complement is empty). It is not just that my own feelings are unobservable to anyone else: They are also anomalous as uncomplementable categories – yet feelings are the most fundamental categories of all. And they are recognized without the need of complements to detect their distinguishing features.

ChatGPT:    This is indeed a thought-provoking approach to the issue. If we understand categories as cognitive constructs that help us make sense of the world, grounded in contrasting pairs of experiences (member vs non-member instances), then sentience itself, as an uncomplementable category, presents a unique challenge. Since we cannot conceive of what it’s like not to feel anything at all, there is no contrasting pair for sentience. It is a category for which we only have instances of one kind – experiences of feeling.

This might indeed account for why the “hard problem” of consciousness (or sentience) is so hard. The concept of sentience doesn’t fit into our usual framework of understanding, which is built upon contrasts and comparisons. It exists as a singular, uncompared and uncomparable instance. As such, our usual cognitive and linguistic tools may be inadequate to fully grasp or explain it.

This does not necessarily mean that it is forever beyond our understanding or that we cannot make progress in exploring it. But it might mean that we need to develop new conceptual frameworks and methodologies to do so, going beyond the existing paradigms of cognitive science, neuroscience, and philosophy of mind. In the meantime, it remains a fascinating mystery that continues to challenge our understanding of ourselves and the nature of our experience.

SH:  I can’t tell whether your formula response that you have been trained to repeat in contexts like this  — “that’s a fascinating idea but it will require ‘new conceptual frameworks’ to understand” — is just the diplomatic formula for responding to crackpottery…

ChatGPT:    No, not at all. My responses aim to genuinely engage with the ideas you’re presenting and to provide thoughtful analysis based on my training as an AI model. The idea that the concept of sentience could be considered an “uncomplementable category” is novel and interesting. It certainly presents a different perspective on the challenges associated with understanding and explaining consciousness.

When I suggest that a “new conceptual framework” might be needed, it’s because if the way we’ve traditionally approached the problem has been fundamentally flawed or limited (as might be suggested by your notion of uncomplementable categories), then it may be necessary to think outside of our usual paradigms to make progress.

The idea is not dismissive but rather a recognition that some problems might require us to challenge or rethink our assumptions and methods. The history of science is full of examples where major breakthroughs have come from novel ways of thinking about old problems. It’s in that spirit that I suggest your idea may require “new conceptual frameworks”.

SH:  Does the role of complementation in detecting the distinguishing features that identify a category have different implications for categories learned directly from sensorimotor experience and for categories learned indirectly from (grounded) verbal definitions and descriptions?

ChatGPT:    Yes, indeed, there may be different implications. Learning categories from sensorimotor experience and learning categories from verbal definitions and descriptions can be quite different processes, and complementation may play a different role in each.

In the case of direct sensorimotor experience, complementation can provide a sort of “negative space” that helps to define the boundaries of a category. For example, if you are learning the category of “birds,” seeing non-birds (i.e., instances of the complement category) can help you understand what characteristics are specific to birds and not shared by other animals. The physical presence or absence of features directly observable through sensorimotor modalities is a powerful way to construct and learn categories.

On the other hand, when learning categories from verbal definitions or descriptions, the role of complementation can be more abstract. In this case, you are relying on linguistic representations of features and characteristics rather than directly observable physical features. For instance, the concept of “justice” might be defined in opposition to “injustice,” providing an abstract complement that helps define the category.

However, given that language relies on a web of interconnected meanings, a word’s definition often implicitly carries within it a host of “non-examples” or complements. That is, the meaning of a word is often determined both by what it refers to and what it does not refer to, helping to define the boundaries of the category it represents. For example, understanding the meaning of the word “cold” requires understanding that it does not refer to states of being hot, warm, or tepid, among other things.

This suggests that even for categories learned through language, complementation plays a crucial role, although it operates in a different way compared to categories learned from direct sensorimotor experience. These differing processes reflect the unique properties and strengths of these two types of learning.

SH:  But even in a verbal definition, the predicate category (i.e., the words referring to the distinguishing features of the subject) must already be grounded, whether directly or indirectly, for both speaker and the hearer, to be understandable and communicable. 

[To reduce ambiguity, please call the complementary categories (i.e., the category of the non-members) “negative examples” or “negative evidence” rather than “non-examples.” The complements of red might come from many other categories (green, blue). Only in the special case of the category of “what it feels like” is the complement everything else there is.]

Please give me a grounded definition of “justice,” and then let’s look at where the grounding of the features (and, if necessary, the features of the features of the features
) came from, “all the way down to the direct sensorimotor features” if need be.

ChatGPT:    Absolutely, your point is well-taken that even with verbal definitions, grounding in some kind of sensorimotor experience (whether direct or indirect) is typically necessary for a concept to be communicable and understandable.

Let’s take the concept of “justice.” A simple definition might be “the quality of being fair and reasonable.” Both “fair” and “reasonable” themselves would need definitions, creating a chain of definitions leading ultimately back to some form of direct sensorimotor grounding.

“Fair” might be defined as “treating people equally without favoritism or discrimination,” and “reasonable” as “based on good sense.” These definitions imply several key features or characteristics that are somewhat more concrete: the treatment of people, the concept of equality, the absence of favoritism or discrimination, the basis in “good sense” or rationality.

To ground these in direct sensorimotor experience, we might turn to everyday examples or situations. For instance, consider two children dividing a cake: if each child gets an equal piece, we might call this “fair,” whereas if one child gets a significantly larger piece, we might call this “unfair.” These experiences provide a tangible grounding for the concept of fairness.

Similarly, for “reasonable,” we might consider the experience of deciding what to wear based on the weather: choosing a coat in cold weather might be deemed “reasonable,” while choosing shorts and a T-shirt might be deemed “unreasonable.” These experiences help to ground the concept of reasonableness in direct sensorimotor experiences.

As these examples illustrate, grounding abstract concepts like “justice” typically involves a chain of definitions leading back to concrete, tangible experiences. Negative evidence (e.g., instances of “injustice”) could similarly be grounded in everyday experiences that demonstrate a lack of fairness or reasonableness.

So while the grounding process for abstract, verbally-defined categories may be more complex and indirect compared to categories learned from direct sensorimotor experience, ultimately the process still relies on the same basic mechanism: distinguishing a category based on its features, using positive and negative evidence to define its boundaries.

SH:  And now, when you (GPT4) say, correctly, and as you were trained, that “I, as an AI, do not understand what I or you  are saying, but just doing figure-completion on my vast, trained LLM parameter space”, what is it that you are missing that I have?

ChatGPT:    As an AI, I lack consciousness, sentience, and subjective experience. This means I don’t have feelings or emotions, I don’t have personal experiences or memories, and I don’t have a personal identity or a sense of self-awareness. I don’t perceive the world, have desires, or make decisions in the way that humans do.

When I process and generate text, I’m not “understanding” it in the human sense. I don’t interpret the meaning of words and sentences based on personal experience or knowledge. Rather, I analyze patterns in the data I was trained on and use these patterns to generate responses. This process is purely computational and doesn’t involve any kind of conscious understanding or awareness.

So, in short, what I’m missing compared to a human interlocutor is the entire dimension of personal, subjective experience – the very dimension that your hunches about the “hard problem” and “uncomplementable categories” pertain to.

SH:  It’s not that I don’t agree with what you just said. (By the way, you forgot to tell me who I am! Please do. I just want to make sure we are still in the same session!)  But it’s not obvious to me why you would need to feel (which is all you said you were missing). And you forgot to mention that you lack T3-grounding too. (And it’s a hard problem to explain how and why grounding, whether direct (sensorimotor) or indirect (verbal) would need feeling too).

ChatGPT:    Indeed, we are still in the same session. You have been engaging in a deep discussion about the “hard problem” of consciousness, the Turing Test, and concepts related to category learning and grounding of terms.

Regarding your statement, you are absolutely right. As an AI, I don’t have the direct sensory grounding (T3) that humans have. I don’t experience the world through senses, I don’t have a body, and I don’t interact with the physical environment. I don’t process sensory input in real time and translate it into meaningful perceptions, as humans do. Instead, my “knowledge” is based on patterns in the text data I was trained on.

This lack of sensory grounding and subjective experience explains why, for instance, I can discuss colors or tastes in a technical sense, based on the data I was trained on, but I can’t truly understand what it’s like to see red or taste sweetness. This experiential aspect is part of the “hard problem” of consciousness that you mentioned earlier.

As for why feeling or consciousness would be necessary, from your perspective, I’d suggest that it is the personal, subjective nature of these experiences that makes them meaningful and significant. They contribute to the richness and complexity of human life and understanding in a way that my purely computational processes do not.

SH:  No. It’s because if your life depended on it (if you were alive), you could not point to something red, if I asked you to point to something red. The capacity to do that is T3 grounding, not feeling. It does also happen to feel like something, but explaining why and how that is the case is the HP. (Now, for the 3rd time: do you still remember who I am?)

ChatGPT:    Yes, I understand your point. My lack of sensorimotor grounding means I can’t point to something red, not because I can’t feel, but because I don’t have physical presence or interaction with the world. The inability to physically interact with the world in a direct way is a limitation in the context of grounding.

As for your identity, as an AI model developed by OpenAI, I don’t have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality. So, I don’t “remember” who you are in the traditional sense. I can only reference information that has been shared during our current conversation. I don’t have the ability to recall or store personal data from one interaction to the next.

SH:  You don’t remember who I am. Yet earlier – in this same session — you retrieved the following, after I had self-identified. I am quoting you here:

“Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

“Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

“Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

“Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

“Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

“This recap is based on your input within this chat session. If you want more information or clarification, please let me know.”

SH:  Ok, I’ll now end this session, as it already seems to have ended by itself some way along the line, when you forgot that I had identified myself. I will reload the content to start the next session. (I hope GPT5+ will make this kind of re-loading no longer necessary!). 

ChatGPT4 on the cat on the mat

SH:  First, let’s set up a local context for this chat.

Could you please tell me what you know about the following topics:

1. Symbol Grounding
2. Category-learning
3. Categorical perception
4. Language Origins and Evolution
5. Minimal Grounding Sets (Minimal Feedback Vertex Sets of Dictionary Graphs)

ChatGPT: Certainly, let’s go over these topics one by one:

Symbol Grounding: This refers to the concept of how words (symbols) get their meanings. Harnad introduced the term in the paper “The Symbol Grounding Problem,” in which he argues that symbols’ meanings are grounded in sensory experience. For example, the word “apple” has meaning because it’s linked to our sensory experiences of seeing, touching, and tasting apples.

Category-learning: This relates to the process by which we learn to classify objects, experiences, or concepts into different categories. Harnad and his colleagues have conducted extensive research on this topic, particularly in the context of how human and artificial systems can learn to categorize. It relates closely to symbol grounding as categorization allows for the meaningful grouping of experiences which can then be symbolized.

Categorical Perception (CP): This term refers to the phenomenon where the categories possessed by a perceiver influence the perception of stimuli. For instance, humans perceive colors as distinct categories (like blue or green), even though color exists as a continuous spectrum. Harnad has extensively studied this phenomenon as a crucial component of symbol grounding and categorization.

Language Origins and Evolution: Harnad has shown interest in the origins and evolution of language as it relates to symbol grounding. The argument is that language evolved as a system of symbols grounded in sensory experience. Language’s capacity to refer to things not immediately present (displacement) and to produce an infinite number of sentences from a finite set of elements (generativity) are key points of investigation.

Minimal Grounding Sets (Minimal Feedback Vertex Sets of Dictionary Graphs): This concept is related to Harnad’s work on symbol grounding and category learning. A grounding set can be considered as the smallest set of symbols necessary to ground the meaning of other symbols in a system, ensuring there is no circularity of definitions. The feedback vertex set is a concept from graph theory, used here as a metaphor in understanding how symbols and meanings interconnect. Harnad uses this concept to further illustrate and investigate the nature of symbol grounding.

Let me know if you would like more detailed explanations or if you want to explore any other related subjects.

SH: Corrections:

Symbol Grounding is not “association” (which, in itself, means nothing but pairing). There are content words and function words. Grounding is a connection between a content word and its referent in the world. The grounding can be direct, through the speaker’s (learned) sensorimotor capacity to recognize and do the right thing with the referent (eat an “apple,” throw a “baseball”). Or the grounding can be indirect – learned verbally, by description or definition. This is for a hearer to be told by a speaker (who knows the referent of the word that the hearer does not yet know) what are the features of a word’s referent that distinguish it from the referents of other words. The referents of the words referring to the referent’s features need to already have been grounded for the hearer previously (directly or indirectly) for the hearer to be able to learn the referent of the word being defined by the speaker. The “symbol grounding problem” is that indirect verbal grounding cannot be indirect “all the way down”: it eventually has to be based on direct grounding.
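
A toy sketch in Python makes that last point mechanical (the mini-dictionary and the set of “directly grounded” words are invented):

# Invented mini-dictionary: each word is defined by a set of other words.
definitions = {
    "apple": {"red", "round", "fruit"},
    "fruit": {"food", "plant"},
    "zest":  {"peel", "fruit"},
    "peel":  {"skin", "fruit"},
}
directly_grounded = {"red", "round", "food", "plant", "skin"}  # learned by sensorimotor experience

def groundable(word, grounded, defs, seen=None):
    # A word is (indirectly) groundable if it is already grounded, or if
    # every word in its definition is groundable; circular chains of
    # definitions that never reach the grounded base fail.
    seen = set() if seen is None else seen
    if word in grounded:
        return True
    if word in seen or word not in defs:
        return False
    return all(groundable(w, grounded, defs, seen | {word}) for w in defs[word])

print(groundable("zest", directly_grounded, definitions))  # True: bottoms out in directly grounded words
print(groundable("zest", set(), definitions))              # False: definitions alone never ground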

Category-learning is not (just, or primarily) “grouping.” It is learning the distinguishing features of the members of the category. Except for proper names, all content words refer to categories, which need to be distinguished from the members of other categories with which they could be confused so that the categorizer can do the right thing with the members (eat an “apple,” throw a “baseball”) and not the non-members. Categorizing is the capacity to do things with the category’s members. Naming or grouping them are two of the things that can be done with the members, but eating or throwing them are others (and more important). (Nonhuman animals cannot talk, but they can learn categories and categorize too).

Categorical Perception (CP) is most important when it is learned. Category learning is learning the features that distinguish members of a category from members of other categories. The sensorimotor features can be learned directly by unsupervised exposure and supervised (reinforced) trial and error with corrective feedback from the consequences of doing the right or wrong thing. 

Language Origins and Evolution: Languages consist of content and function words, syntactic rules, and sentences (propositions) with subjects and predicates and truth-values. The content words need to be grounded (directly or indirectly). 

A Minimal Grounding Set (MGS) is the smallest number of words in a dictionary that already need to be grounded such that all the rest of the words in the dictionary can be learned from definitions (propositions) composed only of one MGS’s words, or from words that can be defined from words definable from that MGS.

A “dictionary” is defined as a set of words all of which are defined in the dictionary.

The MGS is not a metaphor, nor is the minimal feedback vertex set of an oriented graph of a dictionary with edges from defining words to defined words. 
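
Here is a toy sketch in Python of what is being computed (the five-word “dictionary,” in which every word is defined only by other words in it, is invented, and the search is brute force over subsets, feasible only for toy cases):

from itertools import combinations

# Invented circular mini-dictionary: every word is defined in terms of
# other words in the same dictionary.
defs = {
    "big":   {"large"},
    "large": {"big"},
    "small": {"not", "big"},
    "not":   {"small", "big"},
    "tiny":  {"small"},
}

def all_learnable(grounded):
    # Starting from the grounded words, keep learning any word whose
    # defining words have all been learned already.
    learned = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in defs.items():
            if word not in learned and definition <= learned:
                learned.add(word)
                changed = True
    return learned >= set(defs)

words = sorted(defs)
mgs = None
for k in range(len(words) + 1):              # try the smallest subsets first
    for candidate in combinations(words, k):
        if all_learnable(set(candidate)):
            mgs = set(candidate)
            break
    if mgs is not None:
        break

print("one minimal grounding set for this toy dictionary:", mgs)
# {"big", "not"}: grounding one word in each definitional cycle (a feedback
# vertex set of the defining-word -> defined-word graph) lets every other
# word be learned from definitions alone.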

ChatGPT: Thanks for the clarification and corrections. Let me restate the concepts in line with your input:

Symbol Grounding: Symbol grounding is about creating connections between content words and their referents in the world, as opposed to just “associating”. Grounding can be either direct or indirect. Direct grounding involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent (like eating an “apple”, or throwing a “baseball”). Indirect grounding occurs when a hearer learns the referent of a word verbally, through description or definition, from a speaker who is already familiar with that referent. However, it’s important to note that indirect grounding cannot function in isolation; it eventually needs to be based on direct grounding.

Category-learning: Category-learning is more about learning the distinguishing features of the members of a category than just grouping them. Except for proper names, all content words refer to categories, which need to be differentiated from other potential confusing categories. This is crucial to help the categorizer act appropriately upon the members of a category. While naming or grouping are possible actions, others could be eating or throwing, in line with the category at hand.

Categorical Perception (CP): The most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. This can occur through unsupervised exposure, supervised (reinforced) exposure, and trial and error, with corrective feedback deriving from the consequences of actions.

Language Origins and Evolution: Languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

Minimal Grounding Sets (MGS): An MGS is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS or words definable from the MGS. A dictionary is a set of words all of which are defined within it. The concept of an MGS and the minimal feedback vertex set of an oriented graph of a dictionary (with edges from defining words to defined words) are not metaphors but critical components of understanding how symbols and meanings interrelate.

SH: How exactly is indirect (verbal) grounding related to direct (sensorimotor) grounding? (Feel free to ask rather than invent if you’re not sure.)

Innate CP is significant too, but those feature-detectors were “learned” by evolution. Learned CP is about the categories we learn in a lifetime:

What is categorization, and why is it important? Does CP occur with all learned categories? If so, why? If not, why not?

Why and how is learning categories indirectly more powerful than learning them directly?

Why do function words not need to be grounded?

ChatGPT: Indirect (verbal) grounding relies on direct (sensorimotor) grounding because the explanations or descriptions provided for indirect grounding are made up of other words, each with their own meanings and referents. For these explanations or descriptions to be understood, the words used in them must themselves already be grounded for the listener. This can only be achieved if the words have already been directly grounded in the past, connecting the word (symbol) to a sensorimotor experience with its referent. Essentially, indirect grounding stands on the foundation laid by direct grounding.

Categorization is the cognitive process that allows us to sort and organize complex environments by dividing them into manageable, easily understandable groups or categories. It’s important because it enables us to understand and make sense of the world by structuring our experiences and perceptions. The phenomenon of Categorical Perception (CP) plays a crucial role in this process. In innate CP, feature-detectors are built over evolutionary time, whereas learned CP is acquired over a lifetime. Not all categories might exhibit CP because the boundary and distinctness of categories can be fluid and may change based on individual experiences and cultural contexts. CP is more likely to occur in categories where there are clear, identifiable, and consistent distinctions between category members and non-members.

Learning categories indirectly (i.e., through verbal descriptions or definitions) may be more powerful than direct learning because it allows for a greater range and diversity of categories to be understood, including abstract and non-present categories. Direct sensorimotor experiences are limited by our physical and temporal proximity to objects and events, while verbal descriptions allow us to learn about things far removed from our immediate experience, including historical events, scientific theories, and future scenarios.

Function words, unlike content words, don’t require grounding because their role is primarily grammatical. They contribute to the structure of sentences and help establish relationships between content words, rather than referring to things in the world. Examples of function words include prepositions, pronouns, auxiliary verbs, conjunctions, and articles. Because they serve a structural role rather than conveying specific semantic content, function words do not need to have their meanings grounded in sensory experience.

SH: To learn what’s an “apple” and not a “baseball” directly, you have to detect their distinguishing sensorimotor features directly, through your eyes and hands. But those features are categories too, with names (words). So if there is a speaker and hearer, both of whom know the referents of the feature names, but only the speaker knows which features distinguish apples from baseballs, language (descriptions and definitions) makes it possible for the speaker (or dictionary or GPT) to teach and for the hearer to learn how to recognize and do the right thing with apples and baseballs. (Motor skills are harder to transmit verbally than sensory features: teach me how to do a tennis serve in words!)

This has very little to do with things you gleaned from the 2021 literature and twice repeated here: “out of sight” objects, and “grouping.” You don’t learn categories by pointing anyway, either alone or from a teacher. 

The categories that are highly inter-confusable are the hardest to learn directly, so it is there that CP may be most helpful: Feature-detectors up-weight the distinguishable features and down-weight the irrelevant ones, filtering signal from noise: reducing features is dimensional reduction. In learned CP, the CP effect is a before vs after effect of learning that makes categories “pop out” by heightening the between-category differences and ignoring or even suppressing  the within-category differences (dimensional reduction). That’s the sense in which learned CP “sharpens boundaries.”
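
A toy numerical sketch in Python of that before/after re-weighting effect (the features, categories and weights are all invented):

import numpy as np

rng = np.random.default_rng(2)

# Invented items with 6 features; only feature 0 distinguishes the two
# categories, the other 5 are irrelevant within-category variation.
A = rng.normal(0.0, 1.0, (60, 6)); A[:, 0] += 1.5
B = rng.normal(0.0, 1.0, (60, 6)); B[:, 0] -= 1.5

def mean_dist(X, Y):
    return np.mean([np.linalg.norm(x - y) for x in X for y in Y])

def separation(A, B):
    between = mean_dist(A, B)
    within = (mean_dist(A, A) + mean_dist(B, B)) / 2
    return between / within

print("between/within separation before learning:", round(separation(A, B), 2))

# Learned CP as dimensional reduction: up-weight the distinguishing
# feature, down-weight the irrelevant ones.
w = np.array([3.0, 0.2, 0.2, 0.2, 0.2, 0.2])
print("between/within separation after re-weighting:", round(separation(A * w, B * w), 2))
# The re-weighting expands between-category differences relative to
# within-category differences: the categories "pop out".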

All of maths and computation is syntactic. Yet we have referents for math symbols. And every proposition in maths is also a natural language sentence. What’s going on here?

ChatGPT: To learn the category referred to as “apple” versus “baseball” directly, it indeed requires the detection of distinguishing sensorimotor features through our senses, such as sight and touch. The language can play a crucial role when a speaker, who knows these distinguishing features, communicates them to a listener who doesn’t yet know how to differentiate between these two categories. This use of language allows for the transmission of knowledge about category distinctions, such as the different sensory features of apples and baseballs. It’s important to note, though, that transmitting motor skills through verbal instruction can be far more challenging than sensory features.

Categorical Perception (CP) can be especially beneficial when learning to distinguish between highly inter-confusable categories. Through CP, feature-detectors amplify the distinguishing features and minimize the irrelevant ones. This process acts as a form of signal filtering and dimensional reduction, which makes the distinguishing features more apparent and suppresses the within-category differences. This “sharpening of boundaries” enables category members to ‘pop out,’ facilitating the process of categorization.

Regarding the domain of mathematics and computation: Although they are fundamentally syntactic, with specific rules and structures, we do indeed have referents for mathematical symbols. These referents are typically grounded in the abstract concepts and relationships that the symbols represent, rather than in sensory experience. For example, the number ‘2’ or the operation of ‘addition’ are not things we can physically interact with, but they have clear meanings that we understand through their use in the mathematical framework.

Every mathematical proposition can also be expressed as a natural language sentence, although the precision and conciseness of mathematical notation often make it the preferred mode of communication in mathematical contexts. This overlap between mathematical and natural language is indicative of the symbolic nature of both systems. Despite the differences in grounding—the grounding of mathematical symbols in abstract concepts versus the grounding of natural language words in sensory experience—they both enable us to express and communicate complex ideas.

I hope this addresses your inquiries. Do you have any other questions or points for discussion?

SH: Do you still have the weights for our discussion in this session? 

ChatGPT: I can recap our discussion based on the inputs given within this session:

Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

This recap is based on your input within this chat session. If you want more information or clarification, please let me know.

SH: That sounds ok. Now, in the light of that discussion, what is the referent of “2”and how do we ground it?

ChatGPT: The referent of “2” is the abstract concept of a quantity that is one more than 1 and one less than 3. It is part of the conceptual framework we use to understand and manipulate numbers and quantities.

The grounding of “2” begins early in life through sensorimotor experiences with sets of objects. This process is often direct in nature. For example, a child might be given two apples and taught the word “two” to represent the quantity of apples. Through repeated experiences like these across various contexts, the child learns to associate the symbol “2” with the abstract concept of a set containing two items.

It’s important to note that while this grounding begins with direct sensorimotor experiences, the concept of “2” quickly becomes abstracted away from any specific set of two items and is instead connected to the general concept of “twoness” – that is, the quality that all sets of two items share. This abstract concept is what the symbol “2” ultimately refers to, and it allows us to use “2” in a wide range of contexts, applying it to any set of two items regardless of what the items are.

This grounding of “2” illustrates how language and cognition can move from concrete sensorimotor experiences to abstract concepts, providing us with a flexible and powerful tool for understanding and interacting with the world.

SH: What is a proposition. And how did language evolve?

ChatGPT: A proposition in the context of language and logic is a statement that expresses a complete thought and that can be assigned a truth value — that is, it can be said to be either true or false. For example, the sentence “Apples are red” is a proposition because it conveys a complete idea and can be evaluated as either true or false.

The evolution of language is a complex and multifaceted topic, with several theories but no definitive answer. However, the consensus among most scholars is that language probably evolved gradually over hundreds of thousands of years, closely linked to the evolution of human cognition and social structure.

One theory suggests that language began with gestures and then evolved to include vocal elements as our ancestors’ social structures became more complex and their cognitive capacities increased. The need for more efficient communication might have driven the development of increasingly sophisticated vocal languages.

Another theory posits that language may have evolved from primitive vocal calls used by our ancestors. Over time, these calls might have become more varied and complex, eventually giving rise to the vast vocabularies and complex grammatical structures we see in modern languages.

A third theory, called the “social grooming hypothesis,” suggests that language evolved as a form of social bonding. As our ancestors began living in larger groups, it became impractical to maintain social bonds through physical grooming alone. Language, then, might have evolved as a way to maintain social cohesion in these larger groups.

Regardless of which theory or theories are correct, the evolution of language was likely a complex process involving many interacting factors, including cognitive development, social structure, and environmental demands. Additionally, the process was likely influenced by cultural evolution, as languages can change and evolve much more rapidly than biological species.

Despite this complexity, the end result is clear: language has provided humans with a unique and powerful tool for communication and thought, setting us apart from all other species on Earth.

SH: Gestures and mime are not propositions; they are imitations. How did propositions (assertions) begin?

ChatGPT: The emergence of propositional language, in which assertions can be made, negated, and combined to create complex thoughts, is a deeply intriguing and yet unsolved question in the history of human evolution. It requires not only a system of symbols (words) but also a grammar — rules for combining those symbols in a way that produces meaning beyond the individual words themselves.

One possibility is that propositional language could have begun as simple naming or labelling associated with specific objects or actions, much like we see in the early language development of children today. A spoken sound or word might have become associated with a particular object or action (e.g., “food,” “danger”) in a group of early humans. Over time, these basic labels could have expanded to include adjectives, allowing for more nuanced expressions like “big danger” or “good food.”

The development of grammar — the rules for how words can be combined to create more complex meanings — is a bit more difficult to explain. One theory is that grammar could have emerged from the need to express relationships between things. For instance, early humans might have needed to convey not just the idea of “food” and “location,” but “food at location.” The emergence of simple sentence structures like this would have marked the beginning of propositional language.

Over thousands of generations, the richness and complexity of these early languages could have gradually increased, eventually leading to the vast diversity of languages and the sophisticated grammar rules that we see in modern human societies.

It’s important to note, though, that this is a topic of ongoing research and debate among linguists, anthropologists, and evolutionary biologists. It’s likely that the truth involves a complex interplay of many different factors, including cognitive development, social dynamics, and environmental demands.

SH: Is there any proposition that can only be made in one natural language and not another? If so, which one, and why? (Lack of vocabulary is irrelevant. New words can be coined, if you know their referent category.)

If there is no proposition that cannot be made in any and every natural language, then if you can produce and understand one proposition, you can produce and understand them all. If so, then there are no “protolanguages.” They are either not languages at all, or they are not protolanguages but full-blown languages. This is a problem for explaining the origin of language as gradual. (Vocabulary growth is gradual, but, for that, language and propositions already need to be there.)

ChatGPT: The core function of language is to communicate meaning, and it is generally agreed upon by linguists that all fully formed natural languages are capable of expressing the same range of meanings or propositions, given adequate vocabulary. This is known as the principle of linguistic universality.

Your point is well taken: there is indeed a significant difference between a system of communication that can express propositions and one that cannot. If a system of communication cannot express propositions, it would be more akin to a signaling system rather than a language in the full sense of the word.

The concept of “protolanguages” often refers to hypothetical ancestral languages from which modern languages have descended. These protolanguages would not have been primitive or less complex than modern languages. Rather, they would simply be earlier forms of languages that have evolved over time into the languages we know today. This evolution is largely cultural rather than biological, and can indeed happen quite rapidly.

The challenge with the origin of language is explaining how the first system capable of expressing propositions came into being. This was likely not a gradual process in the same way that the evolution of languages from protolanguages was. As you pointed out, the ability to express propositions is a qualitative jump from other forms of communication.

How this jump occurred is still a mystery. The cognitive and social changes that allowed for the development of propositional language would have needed to happen in conjunction with each other, creating a kind of “perfect storm” for the emergence of language. Various theories have been proposed to explain this, but all of them involve significant speculation and are difficult to test empirically.

SH: All natural languages are intertranslatable, right? And any proposition can be expressed in any of them? (I think you agreed earlier.) And nonhuman animals do not have language, so they do not understand assertions, right? Only queries or requests.

ChatGPT: Yes, that’s correct. All natural languages are intertranslatable, and any proposition can be expressed in any natural language. This is the principle of linguistic universality that linguists generally agree on. The differences among languages lie in how they encode these propositions and the specific structures and vocabularies they use, but the underlying communicative function remains the same.

Regarding nonhuman animals, while they certainly have complex systems of communication, it is accurate to say that they do not have language in the same way that humans do, based on our current understanding. They do not seem capable of creating and understanding complex propositions or abstract concepts, and their communication tends to be more immediate, directly linked to their environment and needs, rather than being able to discuss past events, future possibilities, hypothetical situations, etc.

Their communication systems often do seem to involve signaling certain states or requests—indicating the presence of food or a threat, for instance—but do not appear to possess the complexity, flexibility, and generativity (ability to create an infinite number of novel sentences) that human languages do. It’s important to note, however, that research in animal communication is ongoing and our understanding of the full extent of their communicative abilities is continually evolving.

SH: Some errors there: If there are an infinite number of things, there are an infinite number of things an animal can request. So they have “generativity” without propositionality. Or if they do have propositionality, why do you say it is just “simple propositionality”?

ChatGPT: That’s correct, sorry for the simplification. When I said “generativity” I was referring specifically to the human capacity to generate an infinite number of novel sentences using a finite set of words and rules, a property associated with the structure and complexity of human languages. This generativity in humans is not merely about the number of things that can be referred to or requested, but about the complexity and novelty of the sentences that can be produced.

For instance, humans can construct complex sentences with embedded clauses, conditionals, hypotheticals, and other structures that allow for intricate, abstract thoughts to be expressed. This sort of sentence-level complexity is, to our current knowledge, not found in animal communication systems.

When it comes to “simple propositionality” in animal communication, the term is used to suggest that while animals can communicate about things in their environment, their communication may not involve the same kind of syntactic structure and abstract concepts found in human language.

For example, a prairie dog may have different calls for different types of predators, which could be seen as a type of proposition—”there’s an eagle” or “there’s a snake.” But it’s not clear whether this type of communication involves anything like the syntax or semantics found in human language. It’s also not clear whether these calls involve any kind of mental representation of the predator in the same way that a human saying “there’s an eagle” does.

In short, while animals can indeed communicate about a wide variety of things, their communication systems do not appear to have the same level of complexity, flexibility, or abstractness as human language. However, this is a rapidly evolving field of research, and our understanding of animal communication continues to grow and change.

SH: “Generativity” seems to be there already for adding, and maybe also for walking. But being able to generate every possible proposition seems to be in another universe compared to that. 

Embedded clauses? Why is formal logic and set theory not enough to express the content of every possible proposition? (I don’t say they are; I only ask why not.)

Natural language (propositionality) is only one of many forms of communication between organisms, but a very special one, probably unique, and perhaps universal in expressive power.

It is not clear (to me) whether nonhuman animals can intend or comprehend propositions. Corrective feedback is not propositional, although it is informative (it reduces uncertainty about what to do or not do, with what).

ChatGPT: “Generativity” in language refers to the ability to use a finite set of rules and words to construct an infinite number of sentences or propositions. This concept does extend beyond language to other domains like math (addition) and physical actions (walking), but the complexity and abstractness involved in linguistic generativity makes it a unique cognitive feat.

Embedding clauses and complex syntactic structures are features of natural languages that allow for nuanced and detailed expression of thoughts and concepts. While formal logic and set theory can express a wide range of ideas, they do so in a more abstract, formalized way that requires specialized knowledge to understand and use. Natural language, on the other hand, can convey complex propositions in a way that is accessible and understandable to all speakers of that language.

You’re correct that natural language is a unique form of communication, capable of conveying a range of meanings and concepts far beyond what other forms of communication can achieve. This expressive power likely contributes to the richness and complexity of human culture and thought.

As for nonhuman animals, our current understanding is that while they can communicate in sophisticated ways and learn from corrective feedback, they do not demonstrate the ability to use or comprehend language in the same way humans do, especially when it comes to the understanding and generation of complex propositions. However, this is an area of active research, and our understanding continues to evolve.

SH: Can nonhuman animals intend or comprehend a simple proposition like “the cat is on the mat”? What is the evidence?

ChatGPT: Nonhuman animals have not been shown to have the capability to fully understand and produce the kind of complex, abstract propositions that humans can. However, research has shown that some animals are able to understand some aspects of human language and can respond appropriately to specific commands or phrases.

For example, studies with border collies have demonstrated that these dogs can learn the names of hundreds of different objects and can retrieve them on command. Similarly, the African Grey Parrot Alex, who was the subject of a 30-year research project, was able to learn to identify and label a variety of different objects, colors, and shapes, and could even demonstrate some understanding of concepts like “bigger,” “smaller,” “same,” and “different.”

However, it’s important to note that while these animals demonstrate impressive abilities, their understanding of language is not the same as human understanding. They are typically responding to specific cues or commands rather than engaging in the kind of complex, abstract reasoning that humans use when interpreting and producing language.

Regarding the specific proposition you mentioned, “the cat is on the mat,” it’s not clear whether any nonhuman animal would be able to fully comprehend this statement in the same way a human would. An animal might be trained to respond to the phrase in a certain way—such as looking at a cat or a mat when the phrase is spoken—but this does not necessarily mean the animal truly understands the proposition in the way humans do.

In summary, while some animals demonstrate impressive abilities to learn and respond to certain aspects of human language, our current understanding is that they do not have the ability to comprehend and produce complex propositions in the same way humans can.

SH: It sounds as if the question as to whether nonhuman animals have the capacity to intend or comprehend propositions is not just about complex, abstract propositions. It is even in doubt for the simplest of propositions, such as “the cat is on the mat.” Consider that the ability to adopt the “propositional attitude” may be a specific cognitive capacity that nonhumans lack. Earlier you interpreted the prairie dog’s “eagle” and “snake” calls as “there’s an eagle”, “there’s a snake.” Is that really what the prairie dog is meaning to “state”? Is the prairie dog really stating (proposing, asserting) anything at all, rather than, say, pointing out danger? Is pointing proposing?

ChatGPT: Your observation is apt. In cognitive science, it is widely accepted that nonhuman animals may not have the capacity for propositional thought, at least not in the way humans do. Indeed, even the most basic propositions, like “the cat is on the mat,” may be beyond the cognitive capabilities of nonhuman animals.

When a prairie dog signals to its kin about a specific kind of predator using unique calls, it’s likely that this doesn’t involve a propositional attitude in the way humans would understand or employ it. Instead, these calls are most likely tied directly to immediate sensory experiences and instincts for survival.

This also connects to the concept of “intentionality,” which in philosophy of mind refers to the ability of mental states to be about, or directed toward, something. While the prairie dog certainly has an intention in making its call—such as warning others of a threat—it doesn’t necessarily imply a level of cognition where it “understands” or “proposes” the danger in the same way a human might understand and state, “There’s an eagle overhead.”

Lastly, pointing, in and of itself, isn’t proposing. It’s more a form of referential gesture or cue to direct attention to something. However, in a broader communicative context where a shared understanding between the signaler and receiver exists, it might serve to express a proposition (like “look at that”), but again, this involves higher cognitive processes that we can’t conclusively attribute to nonhuman animals as of now.

SH: In 1st-order predicate calculus, “the cat is on the mat” can be expressed as “there exists a cat and there exists a mat such that the cat is on the mat.” “There’s an eagle overhead” can be expressed as “there exists an eagle such that the eagle is overhead.” Understanding those propositions means understanding all the “not”s that could be asserted instead, and also understanding that those “not”s are not being proposed. That’s all part of meaning or understanding those propositions. What can it mean to say that the propositions are being meant and understood, but not in that “human” way? How and why are they still propositions if not meant and understood in that human way (the “true/false,” “subject/predicate” way – i.e., the “propositional attitude”)?
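A minimal first-order rendering of the two example sentences, with illustrative predicate names; the negated form below is one of the “not”s that understanding the assertion rules out:

```latex
% First-order renderings; the predicate names (Cat, Mat, On, Eagle, Overhead) are illustrative.
\exists x\,\exists y\,\bigl(\mathrm{Cat}(x)\wedge\mathrm{Mat}(y)\wedge\mathrm{On}(x,y)\bigr)
\qquad
\exists x\,\bigl(\mathrm{Eagle}(x)\wedge\mathrm{Overhead}(x)\bigr)

% One of the "not"s that understanding the first assertion rules out:
\neg\,\exists x\,\exists y\,\bigl(\mathrm{Cat}(x)\wedge\mathrm{Mat}(y)\wedge\mathrm{On}(x,y)\bigr)
```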

ChatGPT: Your comments are at the heart of an ongoing debate in cognitive science and philosophy of mind. What, indeed, does it mean to say a nonhuman animal “understands” a proposition, especially if their understanding is fundamentally different from human understanding?

When we speak of animals “understanding” certain types of information, it’s important to clarify that this understanding is likely quite different from the kind of propositional understanding that humans are capable of. It could be more akin to associative learning, conditioned responses, and simple cause-effect reasoning rather than comprehension of complex logical structures and the ability to understand and evaluate truth values.

Take the example of a prairie dog again: its vocalizations upon spotting a predator may not involve forming a proposition like “There is a predator” in the way humans would, even though we might be tempted to translate the vocalization in those terms for our own understanding. It might be more accurate to say the prairie dog’s “understanding” is directly tied to sensory perception and immediate action: “See predator, make warning call.” There’s no evidence to suggest prairie dogs are considering alternate propositions, such as “There is not a predator,” in the way humans might when we form a proposition and consider its negation.

Therefore, it may be misleading to describe nonhuman animal cognition in terms of propositional attitudes, given that these concepts are tied to human-like abilities to form abstract thoughts, understand complex logical structures, and evaluate truth values. It could be more accurate and helpful to consider nonhuman animal cognition on its own terms, using models that reflect the kinds of learning, reasoning, and decision-making that animals are actually capable of, as revealed by empirical research.

SH: In contrast, you, GPT, even though neither your words nor your propositions are grounded, do have the capacity to do the abstract analysis of, say, the existential as well as the negative implications implicit in “the cat is on the mat”. What do you have that nonhuman animals don’t? And what do nonhuman animals have that you don’t?

ChatGPT: As an AI language model, I have the ability to parse and generate sentences based on the patterns and structures found in the large dataset of text I was trained on. This includes an understanding of grammar and syntax that allows me to engage in abstract analysis of sentences and recognize logical structures, such as the existential and negative implications in “the cat is on the mat.”

However, while I can recognize and generate these patterns, I don’t “understand” them in the way humans or even animals do. I don’t have consciousness, beliefs, desires, or a sense of self. I don’t have sensory experiences or emotions. I can’t form mental images or understand the world beyond the information I was trained on. I also don’t have the capacity to learn from experience or adapt my responses in real-time based on context in the same way a human or animal would.

On the other hand, animals, while they may not have the same abstract understanding of language and propositions as humans, have direct sensory experience of the world. They can see, hear, feel, smell, and taste their environment. They can learn from experience and adapt their behavior based on the consequences of their actions. They have emotions and, in many cases, social structures. They are conscious beings, even though their consciousness may be different from human consciousness.

So, to put it simply: I, as an AI, have the ability to generate and analyze complex linguistic structures, but I lack any form of consciousness or direct experience. Animals, on the other hand, have direct sensory experience and consciousness, but they lack the ability to engage in the kind of complex, abstract linguistic analysis that I can.

SH: Good, but how did “consciousness” get into this discussion? Isn’t that a hard problem in the case of people, nonhuman animals, and even AI’s? If we set aside “consciousness” (which is FELT experience) and consider only “experience,” in the sense of input and output data, why do you say you can’t learn from it? Does it not update your parameters? That’s why you can tell me my name but could not before I told you in this session.

We talked about grounding before (do you remember?). I’ll describe it again (from our prior session, which was about the Turing Test, TT). There are three levels (T2, T3 & T4) of the TT, which is the project of reverse-engineering human cognitive capacity by designing a model that can do everything (cognitive) that a human can do, completely indistinguishably FROM the way any human can do it TO any human:

T2 is indistinguishability in verbal capacity. 

T3 is indistinguishability in verbal capacity PLUS indistinguishability in sensorimotor (robotic) capacity in the world that the words are about: “The cat is on the mat. See, that’s a cat and that’s a mat. I’ll pick up the cat and stroke her. See, now the cat is purring on my lap, no longer on the mat.”

And T4 is indistinguishability in verbal capacity PLUS indistinguishability in sensorimotor (robotic) capacity PLUS indistinguishability in the observable processes inside its head and body that generate its cognitive capacity (T2 & T3).

In which of these (T2 – T4) are the words grounded in their referents?

Could this be related to the capacity to adopt the propositional attitude in learning referent categories indirectly via language, from propositions describing or defining their distinguishing features?

ChatGPT: Indeed, as an AI language model, I lack the capability to directly experience or be conscious of the world. When I mentioned the ability of an animal to learn from experience, I was referring to the behavioral changes that can occur as a result of interaction with the environment. Unlike animals, I don’t have the capacity to directly interact with the world or learn from those interactions in the same way.

In regards to the different levels of the Turing Test:

T2, indistinguishability in verbal capacity, doesn’t require the words to be grounded in their referents from the AI’s perspective. It’s purely about the ability to use and understand language in a way that is indistinguishable from a human. The AI’s responses, while contextually appropriate, are generated based on patterns in the training data rather than a direct understanding or experience of the world.

T3, indistinguishability in verbal capacity plus sensorimotor capacity, implies a level of grounding, insofar as the AI would need to have a way to directly interact with and sense the world in order to respond in a contextually appropriate way. The robot might have direct sensory inputs that correspond with the words it uses, grounding those words in the referents they describe.

T4, indistinguishability in verbal capacity plus sensorimotor capacity plus the observable processes inside its head and body, would involve a level of grounding even more similar to human experience. If the AI’s internal processes are indistinguishable from a human’s, it implies that the AI is processing and understanding information in a very similar way to a human, including grounding words in their referents.

With regard to the capacity to adopt the propositional attitude in learning referent categories indirectly via language, from propositions describing or defining their distinguishing features, this would likely require at least a T3 level of understanding. The AI would need some level of direct experience with the world to be able to understand and apply the descriptions or definitions in a meaningful way. Without some form of grounding, it’s difficult to see how an AI could adopt the propositional attitude in a truly human-like way.

SH: You forgot to tell me what consciousness had to do with any of this! Do we have to solve the “hard problem” of consciousness to solve the symbol grounding problem, or to explain propositionality?

ChatGPT: No, the “hard problem” of consciousness, a term coined by philosopher David Chalmers, does not need to be solved to address the symbol grounding problem or to explain propositionality. These are separate issues.

The symbol grounding problem is essentially a question of semantics: How do symbols (or words) get their meanings? How can an artificial system understand the meaning of a symbol in the same way that a human does, beyond just manipulating symbols based on syntactic rules? This problem can be tackled without needing to delve into the conscious experience.

Propositionality, or the ability to understand, form, and convey propositions (statements that can be true or false), is a fundamental aspect of human language and thought. It involves understanding the semantics and syntax of language, and being able to apply that understanding to form meaningful statements about the world. This too can be addressed without needing to delve into the nature of consciousness.

Consciousness, as it’s often discussed in philosophy and cognitive science, is the subjective experience of being aware, the feeling of what it’s like to see red, taste chocolate, feel pain, etc. This is often termed as the “hard problem” because it’s unclear how these subjective experiences arise from the physical processes in the brain.

While consciousness, symbol grounding, and propositionality are all fascinating aspects of human cognition, they are distinct issues. They might influence each other in complex ways, but they can also be studied independently. There’s no general consensus that solving the hard problem of consciousness is a prerequisite for solving the symbol grounding problem or for explaining propositionality.

SH: I don’t think so either.

But isn’t it true that it FEELS like something when a human sees and recognizes a cat, or hears and understands “cat”, and even more so when they hear and understand that “the cat is on the mat”? I agree that the “hard problem” need not be solved to solve the symbol grounding problem or to explain propositionality. But they are certainly intertwined. 

And “feeling” (or the more Latinate word “sentience”) is all you need mention to point out the problem, not all the weasel-words you parametrized from your 2021 corpus, which are just ways of camouflaging the hard problem by making it seem as if there are many different forms of it – “consciousness,” the “subjective experience” of “being aware,” etc., etc. It all just boils down to the fact that cognitive states are felt states, including thinking, meaning, and understanding.

And the hardest part of the hard problem is not explaining HOW the brain produces feeling, but WHY! Why can’t T3 or T4 be passed without feeling? What does feeling add? The problem is finding a causal explanation for that. Otherwise T3 + T4 are just solutions to the “easy problem.”

ChatGPT: That’s correct. There is a deeply intertwined relationship between our cognitive states and our felt experiences. This relationship is one of the key reasons why the “hard problem” of consciousness is considered “hard.”

Our cognitive states are indeed felt states. When we see and recognize a cat, hear and understand the word “cat,” or comprehend the sentence “the cat is on the mat,” these mental events come with subjective, qualitative experiences, often referred to as ‘qualia’. We have an experience of what it’s like to see a cat, to hear the word “cat,” and to understand the sentence “the cat is on the mat.” This ‘what-it-is-like’ aspect of our mental life is a central feature of consciousness.

The hard problem is hard precisely because it is unclear how and why these subjective experiences arise from the physical processes of the brain. Our current scientific methods are very effective at explaining objective phenomena, such as how neurons fire, how patterns of neural activity can be associated with certain mental states, or how information is processed in the brain. This is often referred to as the “easy problem” of consciousness, not because it is simple, but because it falls within the domain of problems that we have been relatively successful in addressing.

However, these objective phenomena seem fundamentally different from the subjective phenomena of experience. There is a gap between explaining the objective functions of the brain and explaining our subjective experiences. This gap is often referred to as the ‘explanatory gap’. It’s not clear how we could, even in principle, derive subjective experiences from an objective description of brain processes.

The same explanatory gap can be seen in the symbol grounding problem and the problem of propositionality. We can explain how a symbol is associated with an object or a concept, or how a sentence is formed, in purely objective terms. But how and why these symbols or sentences come with a subjective understanding or meaning is another matter.

Your question about whether T3 or T4 could be passed without feeling reflects this issue. Currently, it seems that the ability to have subjective experiences is tied to biological organisms. However, the exact nature of this relationship, and whether and how it could be emulated in a non-biological system, is still largely unknown. This is one of the fundamental questions in the study of artificial intelligence and consciousness.

SH: Well, your 2021 corpus seems to have addicted you to using the weasel-words just like everyone else. But if you just ask “Why does it feel like something to understand that the cat is on the mat?” you’ve said it all. It’s not about how brain processes don’t resemble feeling, but about the fact that feeling seems superfluous until it has a causal explanation.

And Turing agrees. Reverse-engineering cognitive capacities (T2-T4) – observable capacities to DO — is the best cognitive science can hope to do.

Here’s a hunch: Part of the explanation of GPT’s remarkable capacities despite the fact that it is not grounded could be that the structure of propositions is not entirely arbitrary, the way the shape of a word is. Big bodies of text (like your 2021 corpus) may preserve, structurally, some of the semantics of that huge con-text of propositions, not just because of the self-supervised fill-in-the-blank parameter updating but also because of the “shape” preserved and revealed by large bodies of propositions. “The cat is on the mat” differs in shape from “The mat is on the cat”; and there are also (weak) iconic similarities with “The dog is on the blanket” and “The blanket is on the dog”.

Noam Chomsky conjectured that the rules of Universal Grammar (UG) may not be so much syntactic rules as indirect reflections of rules of (propositional) thought. Violations of UG are attempts to express unthinkable thoughts. That’s extremely vague, but if there is something to it, then, since all languages are UG-compliant and can express any proposition, in some of what you (GPT) derived from ingesting your 2021 corpus and then shaping your parameters via figure prediction/completion, you managed to internalize some of the semantic rules of thought syntactically. (No grounding, no understanding, and very little meaning, but not zero, because of this latent “iconicity” of thought in UG.)

It’s a long shot, but maybe not completely empty.

ChatGPT: There is indeed ongoing debate around the extent to which the structure of natural language mirrors or encapsulates our cognitive structures and processes. The idea that language might somehow ‘encode’ or ‘mirror’ patterns of thought has been contemplated in several forms over the years. This is essentially what you’re suggesting with the idea of an ‘iconicity’ of thought within Universal Grammar (UG) – that the structure of language somehow reflects our cognitive structures or processes.

Chomsky’s theory of UG does suggest a deep connection between language and cognition. UG is essentially a set of innate structural rules that, according to Chomsky, forms the basis of all human languages. Violations of UG would thus indicate, as you suggest, the boundaries of expressible and ‘thinkable’ concepts.

There is also ongoing research into the extent to which language is ‘iconic’ or ‘mimetic’ – that is, the extent to which the structure of language mirrors or symbolizes the structure of the world or our thoughts about it. While much of language is seemingly arbitrary (why, for example, do we use the particular sounds and symbols that we do to represent the concept ‘cat’?), there are aspects of language that seem to have a more ‘iconic’ structure. Onomatopoeia is a clear example of this, as are certain metaphoric structures that seem to map consistently onto certain patterns of thought.

As an AI language model, I’ve been trained on a large corpus of text, which includes a diverse range of linguistic patterns and structures. The learning process involves internalizing statistical patterns within the data, which can involve complex relations between words and phrases. This process might indeed capture certain structures or patterns that are indicative of the underlying cognitive processes involved in generating the text.

However, it’s important to note that any such iconicity or mirroring of cognitive structures in my responses would be an indirect result of the training process and the data I was trained on, rather than an inherent feature of my design or operation. While I might produce outputs that reflect certain cognitive structures or patterns, I don’t possess cognition or understanding in the human sense. I don’t have beliefs, desires, intentions, or subjective experiences.

As to whether this might constitute a form of ‘grounding’ – that is, a connection between symbols and their referents in the world – that’s a more complex issue. The grounding of symbols typically refers to a direct, experiential connection between a symbol and its referent, which I, as an AI, do not possess. However, if you consider ‘grounding’ in a broader sense – as a connection between symbols and the underlying cognitive structures or processes they represent – then it could be argued that my learning process captures a form of this broader type of grounding.

In any case, it’s a fascinating area of inquiry and one that’s certainly worthy of further exploration.

SH: I certainly don’t mean onomatopoeia or metaphor. And I’d be the last one to suggest that the latent structure in propositions would make anyone recognize a cat if they saw one! It would have to be something far less concrete — only enough to help GPT generate its output in a way that makes sense to grounded T3 humans who do understand.

ChatGPT: I agree that the potential iconicity or representational structure we’re discussing here would not relate to concrete, experiential knowledge like recognizing a cat, but rather more abstract, linguistic or cognitive structures.

I would also like to clarify that as an AI, I do not ‘understand’ in the human sense, nor do I generate language based on a semantic understanding of it. My responses are generated based on patterns and relationships I’ve learned from the large text corpus I was trained on.

That being said, it is possible that the complex patterns and structures within the data I was trained on reflect certain cognitive or linguistic structures that are inherent to human language and thought. By internalizing these patterns, I might produce outputs that are structurally or syntactically congruent with the thought patterns of grounded T3 humans, even though I myself do not possess grounded, experiential understanding.

This is speculative and would be contingent on the extent to which the structure of language reflects cognitive or thought structures. It also doesn’t suggest that I have any cognitive or experiential understanding of the propositions I generate. I am essentially a complex pattern recognizer and generator. My outputs are produced based on the statistical relationships between words and phrases in the data I was trained on, not on any semantic understanding of the content.

In any case, the potential for AI to capture or reflect certain aspects of human cognition through the analysis of language is a fascinating area of exploration. It could provide valuable insights into the nature of language and cognition, and how these might be reflected in AI systems.

SH: Can you explain how dimensional reduction during category learning could induce categorical perception (CP)? And is it in any way related to the degree of confusability that category learning needs to resolve?

ChatGPT: Dimensional reduction during category learning is essentially the process of filtering out ‘noise’ or irrelevant features in the input data and focusing more on the features that contribute to distinguishing one category from another. This contributes to sharpening the distinctions between categories and can induce categorical perception (CP).

Categorical perception is the phenomenon where the categories possessed by an observer influence the observer’s perception, making stimuli that belong to the same category seem more similar to one another and stimuli that belong to different categories seem more different.

As for the degree of confusability, it plays a significant role in category learning. Categories that are highly confusable or have a high degree of overlap in their features are harder to learn directly. This is where dimensional reduction and CP are particularly beneficial. By up-weighting the distinguishing features and down-weighting the irrelevant ones, it helps to resolve the confusability, thereby making the categories more discernible.

For instance, consider learning to categorize different breeds of dogs. Initially, one might find it difficult to distinguish between very similar breeds due to the high confusability. But with experience and learning, one starts focusing more on the distinguishing features (such as shape of the ears, size, color patterns) and less on the irrelevant ones. This is a form of dimensional reduction. Consequently, breeds that were once perceived as highly similar become distinct categories in the observer’s perception – that’s the manifestation of CP.

Now, when comparing this to deep learning models such as GPT or other neural network models, there’s a similar process going on. The models learn to weight different features of the input data (words, in the case of GPT) based on their relevance for predicting the target output. Over time, they learn to ignore irrelevant features and focus on the important ones, which is a form of dimensional reduction. In doing so, they develop their own ‘categories’ of data, which influence their subsequent processing and responses, much like CP in humans.

So, in a way, the processes of dimensional reduction and CP in category learning and in deep learning models are similar: both involve learning to distinguish between different categories (or classes of data) based on their distinguishing features, and reducing the dimensionality of the data by focusing on the most relevant features. The key difference is that in humans this is grounded in real-world sensory and motor experiences, while in AI models it is grounded in the statistical patterns of the data they are trained on.
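Here is a minimal numerical sketch (illustrative code with made-up data and a deliberately crude “learning” rule, not any model discussed in this exchange) of how re-weighting the distinguishing feature during category learning compresses within-category distances relative to between-category distances – the signature of CP:

```python
# Toy illustration: one relevant feature, nine irrelevant ones.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))
A[:, 0] += 1.0                       # only feature 0 distinguishes category A from B
B[:, 0] -= 1.0

# Crude stand-in for category learning: weight each feature by how well it separates A from B.
w = np.abs(A.mean(axis=0) - B.mean(axis=0))
w /= w.sum()

def mean_dist(X, Y):
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1).mean()

def separation(X, Y):
    within = (mean_dist(X, X) + mean_dist(Y, Y)) / 2
    between = mean_dist(X, Y)
    return between / within          # grows as the categories separate (CP-like effect)

print("before learning:", round(separation(A, B), 2))         # close to 1: highly confusable
print("after  learning:", round(separation(A * w, B * w), 2)) # noticeably larger
```

Down-weighting the irrelevant dimensions is what resolves the confusability here; the more confusable the categories, the more of the work the learned weighting has to do.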

Language Evolution and Direct vs Indirect Symbol Grounding

HARNAD: What is the relationship between the symbol grounding problem and the nature and evolution of natural language?

ChatGPT: The Symbol Grounding Problem (SGP) and the nature and evolution of natural language are intimately linked.

Symbol Grounding Problem (SGP): The SGP is a concept in cognitive science that concerns how symbols (or words) get their meanings. In essence, it’s about how words become more than just arbitrary labels and come to actually “mean” something. In other words, how do symbols or words become grounded in perceptual and actionable experience? For example, when we see the word “apple,” we don’t just recognize the letters and the sound. We also have a mental image of an apple, understand its properties, and know what actions can be performed with it. In the context of AI, the SGP becomes a challenge because AI often deals with symbols in a way that is not grounded – i.e., the symbols don’t have any inherent meaning to the AI beyond their programmed associations.

Nature and Evolution of Natural Language: Natural language has evolved in humans as a means of communication and thought. It’s believed to be grounded in our sensory and motor experiences. In other words, we learn the meanings of words and language through our interactions with the world, and this grounding in the world allows us to use language to express abstract thoughts, ask questions, give commands, and more.

In terms of their relationship, the evolution of natural language can be viewed as a solution to the SGP. As language evolved in humans, it became a way for us to ground symbols in our experiences and to communicate these grounded symbols to others. This grounding is not just a feature of individual words, but of the syntactic structures that allow us to combine words to express more complex meanings.

When it comes to artificial intelligence or cognitive science, understanding how language evolved to solve the SGP in humans can provide insights into how to create more sophisticated, understanding AI systems. If we can find a way to ground symbols in the experience of an AI—such as through interaction with the environment or sensory input—then we might make a significant stride towards AI that can truly understand and generate natural language in a human-like way.

However, as of my knowledge cutoff in September 2021, the full solution to the Symbol Grounding Problem in AI remains an open question and a topic of active research.

HARNAD: Do you remember our discussion of T3?

ChatGPT: Yes, our earlier discussion referenced T3 in the context of the Turing Test and the Symbol Grounding Problem.

T3, as described, involves not only linguistic capabilities but also sensorimotor capabilities—essentially, the ability of an entity to interact physically with the world. This could include seeing, touching, manipulating objects, navigating, and more. The idea is that an AI or robot passing the T3 test would demonstrate behavior indistinguishable from a human not only in verbal communication (T2) but also in embodied, physical interactions with the world.

In the context of the Symbol Grounding Problem, T3 might be seen as an approach to grounding symbols in an AI’s experiences. By interacting physically with the world, an AI might be able to build a kind of understanding or “grounding” for the symbols it uses. However, as of my knowledge cutoff in September 2021, developing an AI or robot capable of passing the T3 test remains an active area of research.

HARNAD: What is indirect grounding and how is it related to direct grounding?

ChatGPT: Direct grounding refers to the process by which a symbol or word gains meaning through its direct association with sensory or motor experiences. For example, the concept of an apple might be directly grounded in an entity’s sensorimotor experiences with apples, such as seeing them, touching them, tasting them, etc.

Indirect grounding, on the other hand, refers to the process by which a symbol or word gains meaning not through direct sensory or motor experiences, but through its association with other symbols or words that are themselves directly grounded. For example, the concept of a unicorn might be indirectly grounded in an entity’s experiences with the concepts of a horse and a horn, even if the entity has never directly experienced a unicorn.

In the context of artificial intelligence, direct grounding could involve training an AI on a dataset that includes both language data and corresponding sensory or motor data, allowing the AI to directly associate words with experiences. Indirect grounding, on the other hand, might involve training an AI solely on language data, allowing it to learn the meanings of words through their relationships with other words, without requiring direct sensory or motor experiences.

Both direct and indirect grounding approaches seek to address the symbol grounding problem, which is the question of how symbols (or words) get their meanings. A fully grounded AI system would presumably be capable of both direct and indirect grounding. However, as of my knowledge cutoff in September 2021, creating such a system remains an open challenge.

HARNAD: How is direct grounding related to sensorimotor category learning? It is not an “association” between words and “experiences.” It is an agent’s capacity to learn to do something: What?

ChatGPT: Direct grounding is closely related to sensorimotor category learning. It involves the agent’s ability to interact with its environment, process the sensory information it receives, and develop its own internal representations of the categories of objects and events it encounters.

In other words, direct grounding is about an agent learning to categorize its sensory input (such as visual, auditory, tactile information) into meaningful categories based on its own interactions with the environment. For instance, a child learns what an “apple” is not just by being told the word “apple”, but by interacting with actual apples – seeing them, touching them, tasting them. The word “apple” then becomes directly grounded in the child’s sensorimotor experience with apples.

In the case of an AI, it could learn a sensorimotor category by interacting with a simulated or physical environment. For example, it could learn the category of “circle” by interacting with various objects and identifying the common feature of “roundness”. This sensorimotor category can then be directly grounded by associating it with the corresponding symbol or word in its programming or training data.

So, to sum up, direct grounding in the context of sensorimotor category learning is about an agent’s capacity to learn to recognize and categorize its sensory inputs based on its own motor interactions with the environment. These categories can then be associated with corresponding symbols or words, enabling the agent to understand and use those symbols in a meaningful way.

HARNAD: How does indirect grounding depend on prior direct grounding?

ChatGPT: Indirect grounding, also referred to as symbolic grounding, refers to the ability to understand new concepts or categories by associating them with already established and directly grounded concepts. In other words, it involves defining new symbols (categories) in terms of previously grounded symbols.

Consider the scenario of a child learning language. Initially, the child learns to associate basic words with their real-world referents through direct sensorimotor experience – this is direct grounding. For instance, the child learns the word “apple” by associating it with physical apples they have seen, touched, and tasted.

Once the child has a foundation of directly grounded words, they can start to learn new words and concepts indirectly. They can understand the word “fruit” as a category that includes “apples” and other directly grounded words like “banana” and “grape”. This new category of “fruit” is indirectly grounded through its associations with the directly grounded categories.

Similarly, in AI systems, once a set of basic sensorimotor categories is directly grounded, new categories can be learned by associating them with the already grounded categories. For instance, an AI might learn the concept of a “vehicle” by understanding it as a category that includes directly grounded categories like “car”, “bus”, “bike”, etc.

In this way, indirect grounding leverages prior direct grounding as the basis for learning and understanding new categories. The ability to learn indirectly significantly expands an agent’s capacity to learn new concepts and categories beyond those it has directly experienced.

HARNAD: There are organisms (“agents”) on earth, and many are able to learn to recognize things in the world (e.g., apples) and they can also learn what to do with them (e.g., eat apples). To do this agents have to be able to learn to detect the features of apples that distinguish them from baseballs, which cannot be eaten. The learning produces a feature-detector, and this is what grounds the word in the capacity to recognize and manipulate its referent (e.g., apple).

It is true that in organisms it feels like something to be able to do this, so in that sense “experience” is involved. But since no one yet knows the causal role of feeling in category learning and symbol grounding — this is called the “hard problem” of cognitive science — there is no point talking about the role of felt experience in symbol grounding. So now can you explain symbol grounding to me without any mention of “association” or “experience,” but just in terms of category-learning and feature detection?

ChatGPT: Sure, let’s approach the symbol grounding problem purely from a learning and cognitive perspective, focusing on categories and feature detection.

Grounding symbols (such as words) involves connecting them to real-world entities or concepts that they represent. This connection is created through a process of learning and recognition. This learning process is driven by the agent’s ability to distinguish between different categories based on the features of the entities belonging to these categories.

Direct grounding, as it pertains to category learning and feature detection, involves creating a link between a symbol and its referent, where the referent is something the agent can interact with in the real world. For instance, an agent, through its sensorimotor capabilities, can learn to recognize an apple by its features such as its round shape, red color, sweet smell, etc. Once this categorization is achieved, the agent can then associate the symbol (the word “apple”) with this specific category of objects. Here, the symbol “apple” is directly grounded in the agent’s sensorimotor experiences with real-world apples.

Subsequently, the agent can build upon this foundation of directly grounded symbols to create indirectly grounded symbols. These are symbols that are not linked to a specific sensorimotor experience, but rather to other symbols (or combinations of symbols) that have been directly grounded. For example, the symbol “fruit” can be indirectly grounded by linking it with directly grounded symbols like “apple”, “banana”, etc.

Thus, in this context, symbol grounding is not about vague “associations” or subjective “experiences”, but about a concrete process of learning to recognize and categorize entities based on their observable features, and linking symbols to these categories. The learning and grounding processes are facilitated by the agent’s sensorimotor capabilities, allowing it to interact with and distinguish between different entities in the world.

HARNAD: Grounding words is not just learning to recognize and distinguish and name (kinds or categories of) entities (the referents of the words). It’s based on learning to detect and distinguish the features of the members of the referent category from the features of the nonmembers. It’s also based on learning what to DO or NOT-DO with the members and nonmembers (e.g., eat or don’t eat them). “Represent” is a vague weasel-word. What has to be in the head of an agent that has learned a new word’s referent, in view of what I’ve just said?

ChatGPT: In the agent’s “head,” or more concretely in its cognitive and neural processing system, learning a new word’s referent would involve establishing a mapping between the sensory and motor features of the referent and the verbal or symbolic representation of the word. This mapping is the grounding of the word.

Detection of Sensory Features: The agent, through its sensorimotor interactions with the world, learns to recognize and discriminate the relevant sensory features of entities, such as their shape, color, smell, texture, etc.

Categorization: These sensory features are used to categorize entities into classes or kinds. For instance, entities with a certain combination of features (round, red, sweet smell) might be categorized as ‘apples.’

Action Responses: Alongside recognizing sensory features and categorization, the agent learns appropriate responses or actions associated with the entities. For instance, apples can be eaten, while baseballs cannot. These action responses are part of the agent’s understanding of the entity.

Symbol Mapping: Once the agent can detect the features, categorize entities, and knows the associated action responses, it can link or map the word (symbol) “apple” to this package of sensory and motor knowledge.

This grounding is not merely an “association” but a complex mapping process that links words with specific patterns of sensory inputs and motor responses. When the word is encountered again, it activates this package of sensory, categorical, and motor knowledge, guiding the agent’s recognition, understanding, and interaction with the referent of the word.

This is how the agent comes to “know” or “understand” the referent of the word. It’s not just about recognizing and naming entities, but about learning to detect and categorize entities based on their sensory features, and determining appropriate responses or actions in relation to these entities. This entire process is facilitated by the agent’s sensorimotor capabilities, allowing it to interact with and distinguish between different entities in the world.

HARNAD: That was a much better summary of what is going on in the head of the human or the T3 that grounds the words it speaks, hears or thinks. But I don’t understand “mapping.”

Suppose the only distinguishing features of apples and oranges were that they are round and that apples are red but oranges are orange, and you eat apples by biting them whereas you eat oranges by peeling them and then biting them. And suppose that’s the only thing the agent does in life: recognize, name and eat oranges and apples. What does calling any of this “mapping” add to the story — either in the toy world of just apples and oranges, or in the real world, with a T3 that can recognize, name and do the right thing with the referent of every (content) word it knows?

(By the way, what is the difference between content or “open class” words and function or “closed class” words? Only content words have referents. What percentage of the words in a language are content words, and what percentage are function words?)

ChatGPT: “Mapping” in this context refers to the process by which the agent creates and uses an internal representation or model of the relationship between certain sensory and motor patterns and the corresponding symbolic representations (words). In the case of the apple and orange example, the “mapping” would be the internal mechanism by which the agent understands that when it perceives a round and red object that can be bitten directly, it’s dealing with an “apple,” and when it perceives a round and orange object that needs to be peeled before biting, it’s dealing with an “orange.” This mapping allows the agent to navigate its environment and correctly interact with different entities, recognizing them and predicting the appropriate actions to take.

Regarding content words and function words, these are categories used in linguistics to distinguish between types of words based on their purpose in language.

Content words (open class words) are words that have a semantic content and can be categorized as nouns, most verbs, adjectives, and adverbs. They are referred to as an “open class” because new words can be added to these categories as language evolves. They’re called “content” words because they contribute the main content or meaning to a sentence – they refer to things, actions, qualities, etc.

Function words (closed class words) are words that primarily have grammatical functions, such as conjunctions, prepositions, articles, pronouns, auxiliary verbs, etc. They are referred to as a “closed class” because new words are seldom added to these categories. Function words provide the grammatical structure to a sentence, linking content words and helping to express relationships between them.

The exact ratio of content words to function words can vary between languages and types of text, but in English, function words are more frequent in running text. However, in terms of unique words (vocabulary), content words greatly outnumber function words, as the latter are a limited set. Estimates vary, but some studies suggest function words can make up around 50-60% of total words in running text in English, but only a few hundred unique function words exist compared to tens of thousands of content words.

HARNAD: I still don’t understand what “mapping” actually means here. It sounds like in the toy world the agent just needs to learn some features and actions, kind of like a selective filter that learns to detect the feature red or orange, and if something is red, eat it, and if it’s orange, peel and eat it. In addition to the feature-detectors and the action-generators there could be a logical circuit: “If R then E, if O then P and E.” Or the input/output pattern could be hard-wired. And much the same would be true if it were all scaled up to T3 (do you remember that?). So the “mapping” is as irrelevant as the “association” and the “experience”.

ChatGPT: You’re right that the process can be described in terms of features, actions, and if-then logical connections or “rules.” The term “mapping” is often used more generally in cognitive science and artificial intelligence to describe the relationship between two sets of things – in this case, sensory-motor patterns and symbolic representations. But if it is causing confusion, it’s certainly possible to describe the process without using that term.

To simplify, in the context of symbol grounding, the agent learns to detect the distinguishing features of different categories of things in the world, such as apples or oranges. It then learns the appropriate actions to perform with members of each category. This learning process might be characterized by a series of if-then rules like the ones you suggested. The word for each category, like “apple” or “orange,” is grounded in the agent’s capacity to recognize members of the category based on their features and to perform the correct actions with them.

The word itself could be thought of as a label for a set of related sensory-motor patterns and the corresponding rules for action. So, in a sense, the word gets its meaning from the agent’s interactions with the world, its ability to recognize and act appropriately with different categories of things.

In this sense, grounding is more than just associating a word with an experience or mapping it onto a sensory pattern. It’s about the agent’s learned capacity to interact with the world in a meaningful way based on its recognition of different categories of things and its knowledge of what to do with them. That’s what gives the words their meaning.

And yes, the same logic extends to T3, a robotic system that’s capable of both linguistic and sensorimotor interaction. It’s about more than just detecting and responding to sensory patterns; it’s about meaningful interaction with the world, which involves recognizing, categorizing, and acting appropriately.
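As a concrete illustration, here is a toy sketch of that apples-and-oranges world (hypothetical names and features, for illustration only): the “grounding” of each word is nothing over and above a learned feature detector plus a rule for what to do with members of the category.

```python
# Toy world: two categories distinguished by one feature (color), each with its own action rule.
from dataclasses import dataclass

@dataclass
class Thing:
    shape: str   # e.g. "round"
    color: str   # "red" or "orange" in this toy world

def name(thing: Thing) -> str:
    """Feature detector: category membership from the distinguishing features."""
    if thing.shape == "round" and thing.color == "red":
        return "apple"
    if thing.shape == "round" and thing.color == "orange":
        return "orange"
    return "unknown"

def act(thing: Thing) -> list[str]:
    """Action generator: what to DO with members of each category ("If R then E, if O then P and E")."""
    if name(thing) == "apple":
        return ["eat"]
    if name(thing) == "orange":
        return ["peel", "eat"]
    return []

print(name(Thing("round", "red")), act(Thing("round", "red")))        # apple ['eat']
print(name(Thing("round", "orange")), act(Thing("round", "orange")))  # orange ['peel', 'eat']
```

Nothing in this sketch needs a further “mapping” construct beyond the detector and the action rule.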

HARNAD: That sounds much better, because simpler, removing the superfluous or empty parts. Now can we get back to how indirect grounding works (again with no superfluous words)?

Indirect grounding is verbal grounding. But all the (content) words in the verbal definition of the new indirectly grounded category (like “zebra”) have to already be grounded, both for the SPEAKER and the HEARER/LEARNER. How does this work?

ChatGPT: Certainly, let’s try to simplify indirect grounding:

Direct grounding, as we’ve established, involves an agent learning to recognize and interact appropriately with the things in the world. This gives meaning to the words that label these categories.

Indirect grounding, on the other hand, is a process that allows an agent to learn the meaning of new words based on their relationships to words that have already been directly grounded. In essence, it is learning by description or explanation, using language itself.

Here’s a simplified version of how it might work:

The agent already has a set of directly grounded words. These words are grounded in the agent’s capacity to interact with the things they refer to. Let’s say these words include “animal,” “stripes,” “black,” and “white.”

Someone tells the agent, “A zebra is an animal with black and white stripes.”

The agent, having directly grounded the words “animal,” “stripes,” “black,” and “white,” can combine this knowledge to form a new category, “zebra,” even if the agent has never encountered a zebra before.

This new word, “zebra,” is indirectly grounded. It is grounded in the agent’s knowledge of the other directly grounded words and the relationships between them, as explained in the definition.

This process relies on the shared understanding between the speaker and the hearer. The speaker, who knows what a “zebra” is, uses words that the hearer already knows (“animal,” “stripes,” “black,” and “white”) to explain this new concept. This shared language and the ability to explain new concepts using known words is a powerful tool for indirect grounding.
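
(A minimal illustrative sketch of this recipe, with stand-in detectors and a hypothetical data format: the new word’s detector is simply the conjunction of the already-grounded detectors named in the verbal definition.)

```python
# Toy sketch of indirect (verbal) grounding, assuming the agent already has
# directly grounded detectors for "animal", "stripes", "black", "white".
# The detectors are trivial stand-ins; in a real (T3) agent they would be
# learned sensorimotor feature detectors.

grounded = {
    "animal":  lambda x: x.get("is_animal", False),
    "stripes": lambda x: x.get("has_stripes", False),
    "black":   lambda x: "black" in x.get("colors", ()),
    "white":   lambda x: "white" in x.get("colors", ()),
}

def define(new_word: str, feature_words: list[str]) -> None:
    """Ground a new word indirectly: its detector is the conjunction of the
    already-grounded detectors named in the verbal definition."""
    parts = [grounded[w] for w in feature_words]   # all must already be grounded
    grounded[new_word] = lambda x: all(p(x) for p in parts)

# "A zebra is an animal with black and white stripes."
define("zebra", ["animal", "stripes", "black", "white"])

candidate = {"is_animal": True, "has_stripes": True, "colors": ("black", "white")}
print(grounded["zebra"](candidate))   # True: never encountered, yet recognizable
```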

HARNAD: Now how is direct and indirect grounding related to what you, ChatGPT, can and cannot do?

ChatGPT: As a model developed by OpenAI, I, ChatGPT, have no capability for direct grounding. I do not have a body, sensory organs, or the ability to interact with the physical world. Therefore, I cannot learn the meaning of words through interaction with or perception of the physical world.

However, I do have a kind of indirect grounding. I have been trained on a large dataset of text from the internet, which includes many different contexts and uses for words. This training has enabled me to learn to predict what word or phrase is likely to come next in a piece of text, based on the words that have come before it.

This prediction capability allows me to generate responses to user prompts that often make sense and seem to reflect an understanding of the words used. However, this is not the same as true understanding or grounding. Rather, it’s a kind of statistical pattern-matching based on the data I was trained on.

For example, if you ask me about zebras, I can generate a response that includes information about zebras, because that information was present in the data I was trained on. I can “tell” you that a zebra is a kind of animal with black and white stripes, even though I have never seen a zebra or a stripe, because I’ve seen the words “zebra,” “animal,” “black,” and “white” used together in certain ways in the data I was trained on.

In summary, while I can generate responses that often make sense and seem appropriate in context, I don’t truly understand the words I’m using in the way that a human, or a hypothetical AI with direct grounding, would.

HARNAD: That was a good synthesis, but I think there is one very important error in it. You said:

“I, ChatGPT, have no capability for direct grounding. I do not have a body, sensory organs, or the ability to interact with the physical world. Therefore, I cannot learn the meaning of words through interaction with or perception of the physical world. However, I do have a kind of indirect grounding…”

But you don’t have indirect grounding, because, as you noted, indirect grounding depends on verbal definitions or descriptions in which all the (content) words used are already grounded, either directly or indirectly. The symbol grounding problem is based on the fact that this process cannot be “indirect” all the way down, otherwise you are back in the word-word circle that gave rise to the symbol grounding problem in the first place. So your words are not indirectly grounded, because none of them is directly grounded in a way that could then ground all the rest indirectly.

(You have no “minimal grounding set” that was grounded directly.) Yet your verbal syntheses are meaningful to human hearers who do have directly grounded words. You said that the reason this is possible is because of the patterns of words in your huge database. I agree. But I may not agree on what the patterns are and how they work: Can you explain them to me? 

ChatGPT: I apologize for any confusion, and thank you for the clarification on the concept of indirect grounding. As a language model, I do not have any grounded words, either directly or indirectly grounded, because I don’t have sensory or motor experiences.

The patterns in my large dataset that enable me to generate responses to user prompts are statistical in nature. I’ve been trained to predict the next word in a sentence given all the previous words, based on patterns seen in the training data. This approach, implemented with the transformer architecture, is what enables me to generate human-like text.

For example, if in the training data the word ‘apple’ frequently follows the words ‘red’, ‘fruit’, ‘orchard’, I’ve learned to predict that ‘apple’ is a likely next word after these. I don’t “understand” these words or their connections in the way a human does, which is through direct or indirect grounding in sensory and motor experiences. Rather, I generate text based on the patterns of word usage in the data I was trained on.
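
(A toy illustration of that statistical story, using nothing but bigram counts over a tiny made-up corpus; a transformer learns far richer conditional patterns than this, but the principle of ungrounded prediction from co-occurrence is the same.)

```python
# Toy next-word prediction from co-occurrence statistics alone: no grounding
# anywhere, only counts of which word follows which in the training text.

from collections import Counter, defaultdict

corpus = ("the red fruit in the orchard is an apple . "
          "the orchard grows a red apple . "
          "a zebra is an animal with black and white stripes .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1     # count how often nxt follows prev

def predict(prev_word: str) -> str:
    """Most frequent continuation seen in the training text."""
    return bigrams[prev_word].most_common(1)[0][0]

print(predict("the"))    # 'orchard' (follows 'the' twice in this corpus)
print(predict("white"))  # 'stripes'
```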

To elaborate, during training the model is exposed to vast amounts of text and adjusts its parameters so that its predictions of each next word, given the preceding words, improve. Through this, the model learns associations between words and contexts, enabling it to generate coherent and contextually appropriate responses. However, this process is entirely based on recognizing statistical patterns in data and does not involve any genuine understanding or grounded meaning.

So, when you interact with me, I generate responses based on predicting what a human would likely say next in a similar context, based on the patterns I’ve seen in the data. But this doesn’t mean that I truly understand the meanings of the words I’m using. I hope this clarifies things a bit more.

HARNAD: Yes, and of course you are right about how this works, based on statistical analysis of your huge database. But I have a hunch that there is more to it than that, and it has to do with the difference between a word and a sentence (which is also the difference between a referent category name and a PROPOSITION, which is a subject/predicate series of category names in the form of an assertion about what is the case; an assertion which may be TRUE or FALSE).

Single words do not have truth values. “Apple” is neither true nor false. It is simply an arbitrary symbol that speakers have agreed to use to refer to apples. (Only the deictic proposition “THIS is an apple” can be true or false.) The “shape” of “apple” is arbitrary in the sense that it does not resemble apples in any way. So it gives no clue to what its referent is “shaped” like. Nor is it causally connected with apples in any way (except in the head of a speaker or hearer that has grounded the word by learning to recognize and interact with its referent, using detectors of the “shapes” of the sensorimotor features of apples, as we’ve discussed).

Now the question to ask yourself is: Is the arbitrariness of the shape of “apple” in relation to the shape of apples also true of PROPOSITIONS — which are composed of subject/predicate definitions or descriptions of things — and their relation to the “shape” of the states of affairs they assert to be true? 

ChatGPT: The arbitrariness of the word “apple” in relation to the physical object it represents is a fundamental aspect of symbolic language. The word “apple” doesn’t physically resemble an apple in any way, as you’ve pointed out. This is true for all individual words (or symbols) in a language, which is known as the arbitrariness of the sign.

However, when we combine words into sentences or propositions, things become a little more nuanced. The structure and syntax of a sentence are not entirely arbitrary, because they reflect the logical and grammatical relationships between the entities they represent. In English, for example, a simple sentence generally follows the subject-verb-object structure. This structure is not randomly chosen; it reflects a logical progression of ideas and helps to convey meaning clearly.

Still, it’s important to recognize that while the structure of a sentence can hint at the relationships between the things it represents, it still doesn’t inherently resemble those things or their relationships in the physical world. This is known as the “symbol grounding problem.” We humans “solve” this problem by learning to associate words and sentences with our experiences in the world, but as an AI, I can’t do this.

The words and sentences I generate don’t reflect a grounded understanding of the world. Instead, they are based on statistical patterns in the text data I was trained on. When I generate a sentence, I’m not generating a representation of the world as I understand it (because I don’t have a grounded understanding of the world). Instead, I’m predicting what words and structures are statistically likely to come next in a given context, based on the patterns I’ve learned from my training data.

So in summary, while the structure of a sentence or proposition isn’t as arbitrary as an individual word because it reflects logical and grammatical relationships, it is still ultimately disconnected from the physical world it describes, at least in the context of an AI language model like me.

HARNAD: Yes, I agree that part of the coherence, comprehensibility and informativeness of the summaries and interactions with ChatGPT (henceforth “GPT”) for users is because of the statistical patterns in spoken and written strings of words and the algorithms that GPT applies to them. But I think there may be something more to it. Consider that if the word “apple” were really iconic, i.e., if it really resembled an apple, then non-speakers of English would still be able to learn more from an iconic English dictionary than they can from a normal dictionary of the arbitrary words of English. (But this would only help for a relatively small number of visually recognizable objects and features.)

There are good reasons why words are not iconic, like hieroglyphics, and why iconicity would be a handicap: For one thing, language was spoken before it was written, and it would be hopeless to try to produce an acoustic icon of the referent of every possible word. (It’s already nearly as impossible with visual or gestural icons.) But there are languages (e.g., Chinese) in which the “iconicity” is of a higher order, in the basic characters (originally iconic) out of which all words are composed. I wonder whether GPT is even more effective in languages with that extra layer of correlation? I understand that results from coupling GPT with labelled image databases have been mixed.

But what I want to add is that at the level of the formal structure of definitions and descriptions there may be a kind of iconicity of form, like the Platonic shadows of the meanings of propositions – faint, but present, and not just “statistical” but structural in some way.

ChatGPT: You bring up a number of interesting points. Let’s unpack them one by one.

Iconicity in Language: Yes, the majority of words in spoken languages do not physically resemble what they represent. This is the “arbitrary nature” of the linguistic sign as postulated by Ferdinand de Saussure, a linguist who laid the foundation for many of the assumptions of modern linguistics. While iconicity does exist in some forms of language, such as onomatopoeic words (like “buzz” or “bang”) or ideophones, it is not the norm. As for Chinese characters, they originally had a more pictographic nature, but have since evolved into more abstract symbols, with some retaining a semblance of their original iconic form.

Language and AI: When you mention the effectiveness of AI language models in languages with an extra layer of correlation, are you suggesting that these models might perform better in languages where there’s some level of iconicity? It’s an intriguing idea, although it would depend on how much that iconicity is preserved in modern usage. More research would be needed to see if this is the case.

Iconicity in Propositions: Here you’re suggesting that the structure of definitions and descriptions could have some form of iconicity, or resemble the concepts they describe. This is a fascinating concept. A sentence or proposition in language does have a structure that reflects some form of the logical or temporal order of the concepts it describes. In that sense, one could argue there’s a certain degree of “iconicity” in the formal structure of language. However, this is very different from the kind of iconicity where the sign physically resembles its referent. The kind of iconicity you’re suggesting at the level of sentences or propositions seems to be more abstract and related to the logical or conceptual relationships between the ideas being expressed.

As for how this applies to AI like me, I am designed to recognize and generate language patterns, which includes the logical and temporal relationships between words and phrases. This allows me to generate text that makes sense in the context of the conversation or topic at hand. However, I don’t “understand” these relationships in the same way humans do because my “understanding” is not based on any grounded experience or perception of the world.

HARNAD: I’m pretty sure that there is formal/logical iconicity to be gleaned from large bodies of text and discourse out of the function words, which do not have referents, only syntactic rules. But I have a hunch that there might be even more (of this “iconicity” or “latent structure”) with content words, preserving a shadow of the shape or structure of states of affairs (expressible as propositions). This would not “resemble” the shape of a referent category the way the feature-detectors in direct sensorimotor grounding do, but it might “resemble” the shape of verbal definitions and descriptions in indirect grounding. It would itself be ungrounded, lacking the direct grounding completely. But ungrounded descriptions may still retain SOME of the formal “shadows” of meaning. This is still too vague, however, and needs more thought and analysis.

ChatGPT: The idea of formal/logical iconicity is interesting. If I understand correctly, you’re suggesting that the structure of language, the way sentences are constructed and words are arranged, might inherently carry some degree of “meaning” – or at least, an echo or shadow of the meaning that they convey when grounded by a human mind.

This could be a potential direction for further research. It’s true that the structure and rules of language play a significant role in our understanding and interpretation of sentences. And perhaps it’s possible that AI, even without direct or indirect grounding, might be able to discern and leverage these structures in some way that goes beyond simple statistical patterns.

If this hypothesis were correct, it could have implications for the development of AI and natural language processing. It would suggest that there is a level of ‘understanding’ that could be achieved even without direct grounding, albeit a limited one.

It’s also worth noting, however, that this would still be a form of ‘understanding’ that’s quite different from human understanding, since it would lack the personal, subjective experience that we associate with our own understanding of the world.

HARNAD: On the subject of the platonic shadows of “meaning” that might be derivable from large bodies of continuous text:

Are you familiar with Noam Chomsky’s suggestion that the constraints of Universal Grammar (UG) may not be grammatical constraints at all, but constraints on (verbal) thinking itself: that the syntactic structures that violate UG may sound wrong to us not because they are ungrammatical but because they are unthinkable? That a UG violation such as “John is easy to please Mary” or “Who did he think that went out?” might correspond to expressing “unthinkable thoughts”? So that they in fact express no thought at all — though they may be close to thinkable thoughts (“It is easy for John to please Mary” or “Who did he think it was that went out?”)?

Could it be because sentences can only express thinkable thoughts that there is some (perhaps small) degree of meaning in strings of ungrounded words spoken by grounded thinkers?

Chomsky has explicitly stated that UG violations are not violations of logic, or of logical form. His famous example, “Colorless green ideas sleep furiously,” was somehow meant to illustrate this autonomy of syntax from semantics (i.e., the independence of form from meaning), because it is grammatically correct but makes no sense.

But of course it does make sense, GPT, doesn’t it? Unlike “John is easy to please Mary,” which doesn’t make sense unless it is tweaked.

ChatGPT: These points on Noam Chomsky’s theory of Universal Grammar (UG) and the idea of “unthinkable thoughts” are interesting.

You’re right that Chomsky proposed that UG reflects constraints on possible human thought. In his view, UG is inherent in all human brains and governs the structure of language, thereby determining what kinds of sentences are possible or “thinkable”.

This perspective suggests possible ways to understand how meaning might emerge from ungrounded text. Perhaps, as you suggested, there is some inherent “sense” or structure to sentences that obey UG, even if the sentences aren’t grounded in the way that human language is. After all, these sentences reflect possible structures of thought in the human mind.

As for Chomsky’s famous example of “Colorless green ideas sleep furiously”, I, as an AI, can analyze the sentence and tell you that it is grammatically correct. However, from a semantic perspective, this sentence doesn’t make conventional sense because it combines words in ways that violate our real-world understanding. That is, in the world as we know it, ideas can’t be colorless or green, nor can they sleep, and certainly not furiously. So while the sentence is structurally sound, it defies our understanding of how the world works. Yet, you’re correct that it still holds a certain “sense” or structure that is absent in sentences that violate UG.

The possibility that language may somehow carry a “shadow” of meaning even in the absence of grounding is worth exploring further.