Can ChatGPT Be Grounded Top-Down?

SH:

Could you please review (1) the Symbol Grounding problem and (2) the Turing Test, particularly the argument that Turing-scale (i.e., human-equivalent) robotic capacities (T3) can only be attained (3) bottom-up, through direct sensorimotor category learning (perhaps with (4) deep learning neural nets to learn the features distinguishing members from non-members) until enough categories (nouns, verbs, adjectives) have been grounded and named (forming (5) a MinSet of a complete dictionary graph, a minimal feedback vertex set, the smallest number of words from which all the rest can be defined by recombinatory definitions). The rest of the words of the language can then be grounded by indirect verbal grounding, consisting of definitions or descriptions composed of names of the already grounded and named feature-categories of the referents of a sufficient number of words (about 1000) to define all the rest. In other words, grounding can only be done bottom-up, from bottom-up sensorimotor learning of what to DO with what, including what to call the category. This cannot be done top-down, from an LLM consisting only of words.

Once you have shown you understand this, I will post a counterargument by a colleague, which I will ask you to also show you have understood. Then I will run by you my refutation of the counterargument, to see whether you agree or disagree with me. Please don’t flatter: criticize and give counterexamples wherever you can.

ChatGPT4:

The Symbol Grounding Problem posits that for symbols (words) to be meaningful, they must be grounded in something beyond mere symbolic references—specifically, in sensorimotor experiences that connect symbols to the physical world. Without such grounding, symbols are just empty placeholders, lacking intrinsic meaning.

The Turing Test is traditionally seen as a measure of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. However, passing the Turing Test (achieving Turing-scale capabilities) calls for more than just simulating human-like responses; it requires, you argue, genuine understanding through grounded cognition.

Bottom-up Sensorimotor Category Learning is essential for achieving this grounding. Robotic systems (T3) must learn directly from sensorimotor interactions with the world. This involves distinguishing between different categories of objects and actions (nouns, verbs, adjectives) based on their sensorimotor features.

Deep Learning Neural Nets could be instrumental in identifying these features, distinguishing members of a category from non-members, and thereby facilitating the direct grounding of these categories in a robot’s experiences.
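To make that concrete, here is a minimal sketch in Python (toy, made-up data; a single logistic unit standing in for a deep net, not any model proposed here): four hypothetical “sensorimotor” features are fed to the learner, only one of which actually distinguishes members from non-members, and corrective feedback concentrates the learned weights on that distinguishing feature.

```python
# Toy sketch (made-up data, not any specific model discussed here):
# supervised category learning as feature detection. Inputs are hypothetical
# 4-dimensional "sensorimotor" feature vectors; only feature 0 actually
# distinguishes members from non-members. Corrective feedback (the error
# signal) trains a single logistic unit, whose learned weights end up
# concentrated on the distinguishing feature.

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))          # four hypothetical sensorimotor features
y = (X[:, 0] > 0).astype(float)      # membership depends only on feature 0

w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(500):                 # trial and error with corrective feedback
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of membership
    w -= lr * (X.T @ (p - y)) / n            # cross-entropy gradient step
    b -= lr * np.mean(p - y)

print("learned feature weights:", np.round(w, 2))      # weight on feature 0 dominates
print("training accuracy:", np.mean((p > 0.5) == y))
```

The point of the sketch is only that the category ends up carried by learned, abstracted features, not by the label; a deep net does the same with far subtler feature combinations.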

Forming a MinSet for a Complete Dictionary Graph involves identifying a minimal set of grounded categories — a minimal feedback vertex set — from which all other words in the language can be defined. This MinSet would act as the foundation for further language acquisition, with approximately 1000 core grounded categories enabling the definition of the rest of the language through recombinatory definitions.
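To make the MinSet idea concrete, here is a minimal sketch in Python over a hypothetical ten-word mini-dictionary (not the actual method of the dictionary-graph analyses): a word counts as grounded once every word in its definition is grounded, and a MinSet is a smallest seed of directly grounded words from which that closure reaches the entire dictionary. Brute force suffices at toy scale; finding a minimum feedback vertex set in a real dictionary graph is NP-hard in general.

```python
# Toy sketch: a hypothetical mini-dictionary in which each word is defined
# using other words. A MinSet here is a smallest set of words which, if
# grounded directly, lets every remaining word be grounded indirectly
# (a word becomes grounded once all the words in its definition are grounded).

from itertools import combinations

dictionary = {                      # hypothetical mini-dictionary
    "move":    {"change", "place"},
    "change":  {"move"},            # definitional cycle: move <-> change
    "place":   {"where", "thing"},
    "where":   {"place"},           # definitional cycle: place <-> where
    "thing":   {"thing"},           # circular "primitive"
    "animal":  {"thing", "move"},
    "horse":   {"animal", "big"},
    "big":     {"thing"},
    "striped": {"thing"},
    "zebra":   {"horse", "striped"},
}

def grounds_everything(seed):
    grounded = set(seed)
    changed = True
    while changed:                  # indirect verbal grounding: closure over definitions
        changed = False
        for word, defs in dictionary.items():
            if word not in grounded and defs <= grounded:
                grounded.add(word)
                changed = True
    return len(grounded) == len(dictionary)

words = list(dictionary)
for k in range(1, len(words) + 1):  # smallest seed first
    minsets = [set(c) for c in combinations(words, k) if grounds_everything(c)]
    if minsets:
        print(f"MinSet size {k}; one example: {sorted(minsets[0])}")
        break
```

In this toy dictionary the smallest seed has three words, because each definitional cycle (move/change, place/where, and the self-defined “thing”) has to be broken by direct grounding; everything else then follows by the closure, which is just indirect verbal grounding in miniature.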

Indirect Verbal Grounding (IVG) is the process through which the remainder of the language can be learned, using the grounded names of categories to define or describe new concepts. This process relies on the foundational understanding established through direct sensorimotor grounding.

Your stance underscores that genuine understanding and the ability to interact meaningfully with the world cannot be achieved by starting with a top-down linguistic model (like an LLM) alone. Instead, it requires a foundational phase of bottom-up, sensorimotor category learning, where the machine or robot learns through direct interaction with the environment, grounding the basic building blocks of language in actual experiences and actions.

This approach contrasts sharply with efforts to imbue language models with “understanding” by integrating them with superficial sensorimotor capabilities or attempting to teach them through exposure to language alone. Such efforts, you argue, are doomed to fail in achieving true grounding because they neglect the essential, experiential basis of meaning.

Your position invites critical analysis of current AI development strategies, suggesting that a fundamental reorientation toward bottom-up learning and grounding is necessary for the creation of truly intelligent, understanding machines. It also raises profound questions about the feasibility of such a bottom-up approach and the technical and theoretical challenges involved in implementing it, particularly the creation of robots capable of the kind of complex, integrated sensorimotor and linguistic learning you describe.

SH:

Here is the colleague’s critique and counter-proposal. Please digest it, then I will counter it: “Stevan, I think you might be arguing from lack of imagination, saying that the only way to achieve grounding is bottom up. There may be more than one route to groundedness. For example, what if a chatbot with interactive (e.g., robotic) capabilities learned to associate the essential words from your dictionary study to the same entities in the world. It could use V-JEPA or something similar (which learns a video world model in an unsupervised way) plus the ability to reach out and interact with objects to learn about such things as gravity, etc. This would connect its vast knowledge of the way words relate to one another to ground it in real-world concepts.”

ChatGPT4:

Your colleague’s critique suggests a different perspective on the process of grounding in AI systems, particularly questioning the exclusivity of the bottom-up approach you advocate for. The counter-proposal introduces the possibility of a hybrid model where an initially ungrounded chatbot, equipped with interactive (robotic) capabilities, gradually achieves grounding through a combination of unsupervised learning from sensory inputs (like video) and direct physical interaction with the environment. Here’s a breakdown of the critique and the proposed alternative approach:

Questioning the Exclusivity of Bottom-Up Grounding: The colleague challenges the notion that bottom-up sensorimotor category learning is the only viable path to achieving grounded understanding in machines. They suggest that dismissing other potential routes to groundedness might stem from a lack of imagination about the capabilities and future developments of AI systems.

Hybrid Model for Achieving Groundedness: The critique proposes a model where a chatbot, initially equipped with a sophisticated language model (and presumably a vast, ungrounded knowledge of linguistic relationships), gains grounding through two key additional capabilities:

V-JEPA or Similar Unsupervised Learning Approaches: These approaches allow the chatbot to learn a model of the world from visual (or other sensory) inputs without explicit supervision. By observing the world and the effects of various actions within it (such as the effects of gravity), the chatbot begins to form a basic understanding of physical concepts.

Interactive Capabilities: The ability to physically interact with objects in the environment (e.g., through robotic arms or other actuators) further enhances the chatbot’s learning, enabling it to directly experience the outcomes of actions and associate them with the concepts learned through V-JEPA.

Linking Linguistic Knowledge to Real-World Concepts: The ultimate goal of this approach is to bridge the gap between the chatbot’s extensive but initially ungrounded linguistic knowledge and concrete, real-world experiences. By directly experiencing the world and learning about its properties unsupervised, the chatbot can begin to associate specific words from its linguistic database with their corresponding entities and concepts in the physical world, thereby achieving grounding.

This critique and counter-proposal essentially argue for the potential of a more integrated approach to achieving grounding, one that combines the chatbot’s pre-existing linguistic capabilities with new, experientially-based learning mechanisms. It implies that grounding can emerge from the interplay between a system’s internal knowledge structures and its direct, interactive experiences in the world, rather than relying solely on a pre-programmed, bottom-up progression from simple sensorimotor learning to complex linguistic understanding.

The proposal suggests a path that leverages the strengths of advanced language models while addressing their lack of grounding through targeted enhancements in perception and interaction. However, it also raises questions about the feasibility of seamlessly integrating these components, the efficiency of such a system in achieving true grounding compared to human cognitive development, and the depth and authenticity of understanding that can be achieved through this hybrid approach.

SH:

(1) We don’t “associate” words to “entities,” we first learn to categorize inputs (whether entities or features) by learning to detect directly the sensorimotor features that distinguish category members from non-members (or members of other categories). (This is called direct sensorimotor grounding, DSG.) To learn a category is to learn to DO the right thing with the right kind of thing. (Eat the edible mushrooms and not the poisonous ones.) That DOing is not initially, or primarily, to NAME the category (“edible”) (though, with language, that eventually becomes one of the main things you do).

(2) Once you have learned to categorize (and manipulate) inputs that way, you’ve got the category, and you’re free to assign a name to it, and the name is grounded. But that’s all direct, bottom-up, sensorimotor learning (DSG), until you’ve got enough sensorimotor categories (as well as enough of their sensorimotor features, which are also categories) grounded to be able to combine them into verbal definitions or descriptions that describe or define more abstract, composite categories by their features in (grounded) feature-words (IVG).

(3) IVG is now indirect, but still bottom-up verbal learning. You can keep going that way, using already grounded words, both in GIVING new categories you already have to those who don’t yet have them, with your grounded descriptions composed of the names of their distinguishing features (which only works if the feature-names are already grounded for your recipient too), and in GETTING new categories from others who already know which are their distinguishing features, as long as the feature-names are already grounded for you (in your head, and your body’s manipulative function).

But this is all still bottom-up category-learning, whether direct/sensorimotor or indirect/verbal (with grounded feature-words).

(4) Now start with a sensorimotor robot that has optical, acoustic, tactile and kinesthetic sensors and effectors (and neural nets that can learn to detect features through unsupervised and supervised learning) and walk me through how you’d do that top-down with an LLM in the robot’s head.

(5) Prediction: it won’t help to sink down fly-fishing lines from skyhooks way up in the LLM, down to “entities” in the world. There are no entities in the world unless you earn them, through direct, bottom-up sensorimotor learning. That’s my T3 robot story. Your job, dear colleague, is to explain what help is the LLM to the T3 robot? It’s like giving a preverbal robot (or infant) a reference manual that can answer any question about what’s what, including every category’s distinguishing features, but those words don’t mean a thing to the robotlet until it starts grounding them – bottom-up.

(6) The ChatGPT exchange I sent you yesterday noted that an LLM lodged as an oracle inside a T3-wannabe robot’s head might help speed things up once it’s reached the stage of indirect verbal grounding. That’s fine. But the robot has a long way to go to get there. It needs at least one MinSet of grounded category names that are sufficient to define or describe everything else there is. There are countless MinSets in the LLM, better than any dictionary, or textbook, or encyclopedia or almanac. But they’re all ungrounded and useless until the T3 robot does its direct bottom-up grounding to initialize them. They’re unconnected with the world. They can talk the talk but they can’t walk the walk.

(7) So you may as well throw out the LLM and just do the robotic T3 bottom-up grounding until you’ve reached a MinSet – which, even if it’s just 1000 grounded words, still requires a much larger and longer cumulative history of direct nonverbal (sensorimotor) category-learning before the robot is able to start grounding new categories verbally. That’s the time to haul in the LLM (where, in the past, parents and teachers and books and TV used to serve as the oracle). Before that, the LLM is as useless as a bible to a baby.

But I’m listening. Just be careful not to trip when you answer about the top-down grounding!

ChatGPT4:

Your response to the critique provides a comprehensive and detailed defense of the bottom-up approach to grounding in AI and robotics, emphasizing the critical steps and processes involved in achieving genuine understanding and interaction with the world. Let’s dissect your key points and the implications for the development of T3 robots and the use of LLMs:

Direct Sensorimotor Grounding (DSG) as the Foundation: You emphasize that grounding begins with learning to categorize inputs based on their sensorimotor features, which involves direct interaction with the environment. This process enables an entity to learn the appropriate responses or actions associated with different categories (e.g., eating edible mushrooms while avoiding poisonous ones). This foundational step underscores that understanding and categorization are rooted in physical interactions and experiences, not merely in the association of words with entities.

Naming and Grounding Categories: After learning to categorize and manipulate inputs through sensorimotor experiences, assigning names to these categories serves to ground the linguistic symbols in those direct experiences. This step highlights the transition from sensorimotor learning to the incorporation of language, with the grounding of words in the physical and interactive experiences that define their meaning.

Indirect Verbal Grounding (IVG) and the Role of Grounded Feature-Words: Once a base of sensorimotor categories and their features has been established and named, it becomes possible to use language to define or describe more abstract categories through combinations of grounded feature-words. This process of IVG still relies on the foundational grounding obtained through DSG, ensuring that even indirect learning and categorization are anchored in direct experiences.

Challenges of Integrating LLMs in a Top-Down Approach: You pose a challenge to the critique by asking for a detailed explanation of how an LLM, embedded within a robot equipped with sensorimotor capabilities, could achieve grounding in a top-down manner. Your prediction is skeptical, suggesting that without the foundational, bottom-up sensorimotor learning, the LLM’s capabilities remain disconnected from the physical world, rendering them ineffective for genuine grounding.

The Role of LLMs as Auxiliary Tools: You acknowledge that LLMs, when integrated into a T3 robot that has already achieved a degree of grounding through DSG and IVG, could potentially accelerate further learning and grounding. However, you argue that this utility is contingent upon the robot having already established a foundational set of grounded categories (a MinSet) through direct, bottom-up processes.

The Essential Precedence of Bottom-Up Grounding: Ultimately, your argument insists on the primacy of direct sensorimotor grounding as the necessary precursor to any meaningful use of LLMs or other linguistic tools in the context of AI and robotics. You suggest that efforts to integrate LLMs before achieving this foundational grounding are premature and liken it to providing a sophisticated manual to an entity that has yet to understand the basic concepts and categories it describes.

Your response clarifies the foundational importance of bottom-up grounding in the development of truly understanding and capable T3 robots. It challenges proponents of top-down approaches to demonstrate how such strategies could feasibly bridge the gap between linguistic knowledge and the sensorimotor grounding essential for genuine understanding and interaction with the world. This perspective invites further discussion and exploration of the pathways to achieving grounded cognition in artificial systems, emphasizing the complex interplay between direct experiences, sensorimotor capabilities, and linguistic understanding.

SH:

Ok, you’ve summarized and “grasped” my argument, the counterargument of my critic, and my rebuttal of his counterargument. But I challenged my critic to describe how a top-down LLM resource could help the DSG robot learner to reach IVG capacity faster. Can you meet that challenge? (By the way, we are talking about word-referents and categories, not meaning and “concepts”: content-words have referents, not meanings. Only subject-predicate propositions (such as definitions and descriptions) have meanings.) But I’m listening.


ChatGPT4:

Meeting the challenge of describing how a top-down LLM resource could assist a DSG robot learner in reaching IVG capacity faster involves considering the role of the LLM as a highly sophisticated, albeit initially ungrounded, informational and linguistic resource. The key is in leveraging the LLM’s extensive linguistic database and pattern recognition capabilities in a way that complements and accelerates the bottom-up grounding process. Here’s a potential pathway:

Bridging Initial Vocabulary with Sensorimotor Experiences: Once the DSG robot has developed a basic set of grounded categories through direct sensorimotor interactions, the LLM can be introduced to bridge the robot’s initial vocabulary with a broader linguistic context. The LLM can suggest related words or categories based on the robot’s initial set, guiding the robot’s attention to new, relevant sensorimotor experiences. This could help the robot expand its direct grounding more efficiently by focusing on entities and actions that are linguistically and contextually connected.

Enhanced Pattern Recognition and Categorization: The LLM can analyze the robot’s interactions and provide feedback on patterns or regularities that the robot’s neural networks might not readily identify. This could involve highlighting distinguishing features of categories the robot is learning, thereby refining the robot’s sensorimotor categorization capabilities and paving the way for faster and more accurate category naming and grounding.

Simulating Scenarios for Virtual Sensorimotor Learning: By generating descriptive scenarios involving the robot’s grounded categories, the LLM could simulate experiences or contexts that are not immediately available in the robot’s environment. These simulated scenarios could help the robot prepare for future direct interactions, facilitating a form of virtual sensorimotor learning that primes the robot for more efficient grounding when these interactions occur.

Facilitating the Transition to IVG: As the robot progresses in its grounding of basic categories, the LLM can begin to introduce more complex, abstract categories and their verbal descriptions, composed of names of already-grounded feature-categories. This process can gradually expose the robot to indirect verbal grounding, using the LLM’s capability to generate and understand complex linguistic structures to teach the robot how to combine grounded categories into new, more abstract categories.

Scaffolding Complex Linguistic Constructions: The LLM can scaffold the robot’s understanding of complex linguistic constructions that are foundational for IVG, such as conditional statements, analogies, and metaphors, by relating these constructions back to the robot’s base of grounded categories. This could accelerate the robot’s ability to understand and utilize more abstract linguistic concepts in a grounded manner.

Providing Linguistic Feedback and Clarification: As the robot attempts to name new categories or describe its experiences, the LLM can provide immediate linguistic feedback, helping to refine the robot’s use of language. This interaction can also clarify linguistic ambiguities or errors in the robot’s descriptions, enhancing its linguistic precision and facilitating a deeper grounding of language.

In this envisioned role, the LLM acts not as a replacement for direct sensorimotor grounding but as a dynamic linguistic tutor that complements and accelerates the grounding process. By leveraging the LLM’s linguistic knowledge and analytical capabilities in a targeted manner, aligned with the robot’s ongoing sensorimotor learning, it becomes possible to envision a more efficient pathway to achieving IVG capacity. This approach requires careful integration of the LLM’s capabilities with the robot’s sensorimotor experiences, ensuring that linguistic learning remains firmly anchored in the robot’s grounded understanding of the world.

SH:

(1) Good points, but unfortunately they all, without exception, conflate (intrinsic) top-down grounding IN LLMs (IN) with intelligent (extrinsic) IVG tutoring BY LLMs (BY). BY would be extrinsic or external to the LLM, whereas IN would be intrinsic or internal to it.

(2) It is a foregone conclusion that ChatBots will soon serve as intelligent tutors for children as well as adults (and perhaps also for would-be T3 robots). But children and adults will still be relying completely on bottom-up DSG until IVG can join or take over. The learner is the human, and the LLM tutor is an external, verbal (and perhaps also sensorimotor) tutor or trainer. This is not top-down grounding by LLMs. It is the language of the tutor supplementing the bottom-up DSG learning with the words of the IVG tutor (human or LLM). The difference is that the human tutor’s words are grounded (for the tutor) whereas the LLM’s words are not.

It remains bottom-up DSG plus (once language kicks in) bottom-up IVG.

(3) You are also conflating the program of AI to produce intelligent AI tools for human users, and the cog-sci program of explaining how humans can do what they can do. Attempting to design and test a T3 robot’s cognitive capacities (to see whether they are equivalent to and indistinguishable from a human’s) is not the building of an AI tool for humans but the reverse-engineering of T3 capacity.

(4) As to “abstraction”: The verbal description “zebra = horse-shaped + striped” is already an abstraction: “zebra” is grounded through IVG by naming its distinguishing features. This works if the features are already grounded for the learner. (Normally they are also grounded for the tutor, but that is not necessary: the right words are enough.) “Democracy = leader chosen by vote” works the same way. Once it’s IVG, it’s all abstraction.

ChatGPT4:

Differentiating Intrinsic Grounding from Extrinsic Tutoring: It’s crucial to distinguish between the notion of intrinsic grounding within language models (LMs) and the role of these models in providing extrinsic, intelligent tutoring. Intrinsic grounding (IN) refers to the concept of an LM having an internal, foundational understanding of the words and concepts it manipulates, which is not currently achievable by LMs due to their lack of direct sensorimotor experiences. Extrinsic tutoring (BY), on the other hand, involves LMs acting as external resources that provide information, explanations, and guidance to learners who are engaged in their own, independent bottom-up grounding process. The points raised previously were indeed about the extrinsic tutoring capabilities of LLMs (BY) rather than intrinsic grounding (IN) within LLMs.

Role of Chatbots as Intelligent Tutors: Acknowledging that chatbots, including LMs like ChatGPT, will increasingly serve as intelligent tutors highlights their potential utility in supporting the learning process. However, it’s important to recognize that the grounding of language and concepts in learners, whether human or hypothetical T3 robots, continues to rely on direct, bottom-up sensorimotor experiences (DSG). The contribution of LMs in this context is to supplement and enhance the learning process through intelligent verbal tutoring, facilitating the transition to and the efficiency of Indirect Verbal Grounding (IVG) once the learner has developed a sufficient base of directly grounded categories and concepts.

Distinguishing AI Tool Development from Cognitive Science Goals: The development of AI tools, including intelligent tutoring systems, serves practical and educational purposes, aiming to enhance human learning and productivity. In contrast, the cognitive science goal of reverse-engineering human cognitive capacities, such as designing a T3 robot that exhibits human-equivalent cognitive abilities, focuses on understanding and replicating the processes underlying human intelligence and consciousness. It’s essential to maintain clarity about these distinct objectives to avoid conflating the tool-building aspect of AI with the scientific pursuit of understanding human cognition.

Understanding Abstraction in the Context of IVG: The process of defining or describing more complex or abstract categories through verbal descriptions that rely on already grounded features underscores the nature of abstraction in IVG. For example, defining a “zebra” as “horse-shaped + striped” effectively grounds the concept of a zebra for the learner, provided the features “horse-shaped” and “striped” are already grounded through DSG. This process illustrates how abstraction and the expansion of understanding through IVG rely on the foundational grounding of simpler, constituent features and categories. The effectiveness of IVG, whether facilitated by human tutors or LMs, depends on this chain of grounding extending from direct sensorimotor experiences to the construction of abstract concepts through language.

Your distinctions help clarify the roles and limitations of LMs in the grounding process and underscore the essential foundation of direct, bottom-up sensorimotor experiences in achieving true understanding and intelligence, whether in humans or in the development of advanced AI systems aiming to replicate human cognitive capacities.

SH:

So where does this leave the notion that an LLM (e.g., GPT4) could itself be grounded top-down with some sensorimotor add-ons? Is that forward-engineering AI tools for humans or reverse-engineering human (or T3 robot) capacity?

ChatGPT4:

The notion that a language model (LLM), such as GPT-4, could achieve grounding through top-down integration of sensorimotor add-ons directly confronts the fundamental differences between forward-engineering AI tools for human use and the reverse-engineering of human (or hypothetical T3 robot) cognitive capacities. Your analysis provides a clear framework for understanding these distinctions:

Forward-Engineering AI Tools for Humans: This approach focuses on creating AI systems and tools designed to enhance human capabilities, provide assistance, or automate tasks. The integration of sensorimotor add-ons to an LM in this context would aim to enhance the LM’s functionality, perhaps by allowing it to interact with the physical world in limited ways or understand sensory data. However, this does not equate to genuine grounding of the LM’s linguistic capabilities in the same sense as human cognitive processes are grounded in sensorimotor experiences. Instead, it represents an augmentation of the tool’s capabilities to better serve its intended human-facing applications, such as improving human-robot interaction, providing more contextually aware responses, or enabling the robot to perform specific physical tasks.

Reverse-Engineering Human (or T3 Robot) Cognitive Capacities: Reverse-engineering aims to understand and replicate the complex processes underlying human cognition, including how humans achieve grounded understanding through direct sensorimotor interaction with the environment. Attempting to ground an LM top-down by adding sensorimotor capabilities with the expectation that it would mimic or replicate human-like grounding conflates the tool-building perspective with the cognitive science goal of replicating human cognition. Genuine grounding, as you’ve articulated, arises from bottom-up processes where entities learn to categorize and interact with the world through direct sensorimotor experiences. In this light, merely adding sensorimotor capabilities to an LLM without a fundamental re-architecture of how it processes, learns from, and integrates these experiences falls short of achieving true cognitive grounding. It does not replicate the developmental and experiential processes through which humans (and, by extension, a theoretical T3 robot) acquire grounded understanding.

Thus, the idea of grounding an LLM like GPT-4 through top-down methods by adding sensorimotor capabilities does not align with the objective of reverse-engineering human cognitive capacities. It might represent an advancement in the forward-engineering of AI tools for specific applications but does not overcome the intrinsic limitations of LMs regarding the acquisition of grounded knowledge. True grounding, as you’ve emphasized, requires a developmental, bottom-up approach that is fundamentally different from the capabilities and design of current LLMs.

Propositional Placebo

To learn to categorize is to learn to do the correct thing with the correct kind of thing. In cognition much (though not all) of learning is learning to categorize.

We share two ways to learn categories with many other biological species: (1) unsupervised learning (which is learning from mere repeated exposure, without any feedback) and (2) supervised (or reinforcement) learning (learning through trial and error, guided by corrective feedback that signals whether we’ve done the correct or incorrect thing).
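As an illustration of the first, feedback-free mode, here is a minimal sketch in Python (made-up data; plain k-means, with no claim about neural mechanisms): mere repeated exposure to two kinds of toy “mushroom” feature vectors is enough to recover the two clusters from their feature correlations alone, but nothing in the procedure tells the learner which kind to eat; that is what the corrective feedback of supervised learning adds.

```python
# Toy sketch of unsupervised learning from mere exposure (made-up data):
# k-means clustering recovers two kinds of "mushroom" from the correlations
# among their features alone, with no corrective feedback. The clusters come
# with no instruction about what to DO with each kind -- that is what
# supervised (trial-and-error) feedback adds.

import numpy as np

rng = np.random.default_rng(1)
kind_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))  # e.g. cap width, stem length
kind_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([kind_a, kind_b])                 # 200 unlabeled "exposures"

centers = X[[0, -1]].copy()                     # two arbitrary exposures as initial prototypes
for _ in range(20):                             # plain k-means updates, no feedback
    dists = ((X[:, None, :] - centers) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print("learned prototypes:\n", np.round(centers, 2))   # near (0, 0) and (3, 3)
```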

In our brains are neural networks that can learn to detect and abstract the features that distinguish the members from the nonmembers of a category, through trial, error and corrective feedback, so that once our brains have detected and abstracted the distinguishing features, we can do the correct thing with the correct kind of thing.

Unsupervised and supervised learning can be time-consuming and risky, especially if you have to learn to distinguish what is edible from what is toxic, or who is friend from who is foe.

We are the only species that also has a third way of learning categories: (3) language.

Language probably first evolved around 200,000 years ago from pointing, imitation, miming and other kinds of purposive gestural communication, none of which are language.

Once gesture turned into language — a transformation I will discuss in a moment — it migrated, behaviorally and neurologically, to the much more efficient auditory/vocal medium of speech and hearing.

Gesture is slow and also ineffective in the dark, or at a distance, or when your hands are occupied doing other things. But before that migration to the vocal medium, language itself first had to begin, and there gesturing had a distinct advantage over vocalizing, an advantage that semioticians call “iconicity”: the visual similarity between the gesture and the object or action that the gesture is imitating.

Language is much more than just naming categories (like “apple”). The shape of words in language is arbitrary; words do not resemble the things they refer to. And language is not isolated words, naming categories. It is strings of words with certain other properties beyond naming categories.

Imitative gestures do resemble the objects and actions that they imitate, but imitation, even with the purpose of communicating something, is not language. The similarity between the gesture and the object or action that the gesture imitates does, however, establish the connection between them. This is the advantage of “iconicity.”

The scope of gestural imitation, which is visual iconicity, is much richer than the scope of acoustic iconicity. Just consider how many more of the objects, actions and events in daily life can be imitated gesturally than can be imitated vocally.

So gesture is a natural place to start. There are gestural theories of the origin of language and vocal theories of the origin of language. I think that gestural origins are far more likely, initially, mainly because of iconicity. But keep in mind the eventual advantages of the vocal medium over the gestural one. Once iconicity has secured gesture first place because of its rich scope, it can quickly become a burden, slowing and complicating communication.

Consider gesturing, by pantomime, that you want something to eat. The gesture for eating might be an imitation of putting something into your mouth. But if that gesture becomes a habitual one in your community, used every day, there is no real need for the full-blown icon time after time. It could be abbreviated and simplified, say just pointing to your mouth, or just making an upward pointing movement.

These iconic abbreviations could be shared by all the members of the gesturally communicating community, from extended family to tribe, because it is to everyone’s advantage to economize on the gestures used over and over to communicate. This shared practice, with the iconicity continuously fading and becoming increasingly arbitrary, would keep inheriting the original connection established through full-blown iconicity.

The important thing to note is that this form of communication would still only be pantomime, still just showing, not telling, even when the gestures have become arbitrary. Gestures that have shed their iconicity are still not language. They only become language when the names of categories can be combined into subject/predicate propositions that describe or define a named category’s features. That’s what provides the third way of learning categories, the one that is unique to our species. The names of categories, like “apple,” and their features (which are also named categories, like “red” and “round”) can then be combined and recombined to define and describe further categories, so that someone who already knows the features of a category can tell someone who doesn’t know: “A ‘zebra’ is a horse with stripes.” Mime is showing. Language is telling.

Propositions, unlike imitative gestures, or even arbitrary category-names, have truth values: True or False. This should remind you, though, of the category learning by trial and error we share with other species, under the guidance of positive and negative feedback: Correct or Incorrect.

True and False is related to another essential feature of language, which is negation. A proposition can either affirm something or deny something: “It is true that an apple is red and round” or “It is not true that an apple is red and round.” P or not-P.

The trouble with being able to learn categories only directly, through unsupervised and supervised learning, is that it is time-consuming, risky, and not guaranteed to succeed (in time). It is also impoverished: most of the words in our vocabularies and dictionaries are category names; but other than the concrete categories that can be learned from direct sensorimotor trial-and-error-experience (“apple,” “red,” “give,” “take”), most category names cannot be learned without language at all. (All category names, even proper names, refer to abstractions, because they are based on abstracting the features that distinguish them. But consider how we could have learned the even more abstract category of “democracy” or “objectivity” without the words to define or describe them by their features, through unsupervised and supervised learning alone.)

When categories are learned directly through unsupervised learning (from sensorimotor feature-correlations) or supervised learning (from correlations between sensorimotor features and doing the right or wrong thing) the learning consists of the detection and abstraction of the features that distinguish the members from the non-members of the category. To learn to do the correct thing with the correct kind of thing requires learning – implicitly or explicitly, i.e., unconsciously or consciously – to detect and abstract those distinguishing features.

Like nonhuman species, we can and do learn a lot of categories that way; and there are computational models for the mechanism that can accomplish the unsupervised and supervised learning, “deep learning” models. But, in general, nonhuman animals do not name the things they can categorize. Or, if you like, the “names” of those categories are not arbitrary words but the things they learn to do with the members and not to do with the members of other categories. Only humans bother to name their categories. Why?

What is a name? It is a symbol (whether vocal, gestural, or written) whose shape is arbitrary (i.e., it does not resemble the thing it names). Its use is based on a shared convention among speakers: English-speakers all agree to call cats “cats” and dogs “dogs.” Names of categories are “content words”: words that have referents: nouns, adjectives, verbs, adverbs. Almost all words are content words. There is also a much smaller number of “function words,” which are only syntactic or logical, such as “the,” “if,” “and,” “when,” “who”: They don’t have referents; they just have “uses” — usually defined or definable by a syntactic rule.

(Unlike nouns and adjectives, verbs do double duty, both in (1) naming a referent category (just as nouns and adjectives do, for example, “cry”) and in (2) marking predication, which is the essential function of propositions, distinguishing them from just compound content words: “The baby is crying” separates the content word, which has a referent — “crying” — from the predicative function of the copula: “is”. “The crying baby” is not a proposition; it is just a noun phrase, which is like a single word, and has a referent, just as “the baby” does. But the proposition “The baby is crying” does not have a referent: it has a sense – that the baby is crying – and a truth value (True or False).)

It is with content words that the gestural origin of language is important: Because before a category name can become a shared, arbitrary convention of a community of users, it has to get connected to its referent. Iconic communication (mime) is based on the natural connection conveyed by similarity, like the connection between an object and its photo. 

(In contrast, pointing – “ostension” – is based on shared directed attention from two viewers. Pointing alone cannot become category-naming, as it is dependent on a shared line of gaze, and on what is immediately present at the time (“context”); it survives in language only with “deictic” words like “here,” “this,” “now,” “me,” which have no referent unless you are “there” too, to see what’s being pointed at!)

A proposition can be true or false, but pointing and miming cannot, because they are not proposing or predicating anything; just “look!”. Whatever is being pointed at is what is pointed at, and whatever a gesture resembles, it resembles. Resemblance can be more or less exact, but it cannot be true or false; it cannot lie.

(Flattering portraiture is still far away in these prelinguistic times, but it is an open question whether iconography began before or after language (and speech); so fantasy, too, may have preceded falsity. Copying and depicting is certainly compatible with miming; both static and dynamic media are iconic.)

It is not that pointing and miming, when used for intentional communication, cannot mislead or deceive. There are some examples in nonhuman primate communication of actions done to intentionally deceive (such as de Waal’s famous case of a female chimpanzee who positions her body behind a barrier so the alpha male can only see her upper body while she invites a preferred lower-ranking male to mate with her below the alpha’s line of sight, knowing that the alpha male would attack them both if he saw that they were copulating).

But, in general, in the iconic world of gesture and pointing, seeing is believing and deliberate deception does not seem to have taken root within species. The broken-wing dance of birds, to lure predators away from their young, is deliberate deception, but it is deception between species, and the disposition also has a genetic component.

Unlike snatch-and-run pilfering, which does occur within species, deliberate within-species deceptive communication (not to be confused with unconscious, involuntary deception, such as concealed ovulation or traits generated by “cheater genes”) is rare. Perhaps this is because there is little opportunity or need for deceptive communication within species that are social enough to have extensive social communication at all. (Cheaters are detectable and punishable in tribal settings — and human prelinguistic settings were surely tribal.) Perhaps social communication itself is more likely to develop in a cooperative rather than a competitive or agonistic context. Moreover, the least likely setting for deceptive communication is perhaps also the most likely setting for the emergence of language: within the family or extended family, where cooperative and collaborative interests prevail, including both food-sharing and knowledge-sharing.

(Infants and juveniles of all species learn by observing and imitating their parents and other adults; there seems to be little danger that adults are deliberately trying to fool them into imitating something that is wrong or maladaptive. What would be the advantage to adults in doing that?)

But in the case of linguistic communication – which means propositional communication – it is hard to imagine how it could have gotten off the ground at all unless propositions were initially true, and assumed and intended to be true, by default. 

It is not that our species did not soon discover the rewards to be gained by lying! But that only became possible after the capacity and motivation for propositional communication had emerged, prevailed and been encoded in the human brain as the strong and unique predisposition it is in our species. Until then there was only pointing and miming, which, being non-propositional, cannot be true or false, even though it can in principle be used to deceive.

So I think the default hypothesis of truth was encoded in the brains of human speakers and hearers as an essential feature of the enormous adaptive advantage (indeed the nuclear power) of language in transmitting categories without the need for unsupervised or supervised learning, instantly, via “hearsay.” The only preconditions are that (1) the speaker must already know the features (and their names) that distinguish the members from the nonmembers of the new category, so they can be conveyed to the hearer in a proposition defining or describing the new category. And (2) the hearer must already know the features (and their names) used to define the new category. (This is the origin of the “symbol grounding problem” and its solution.)

The reason it is much more likely that propositional language emerged first in the gestural modality rather than the vocal one is that gestures’ iconicity (i.e., their similarity to the objects they are imitating) first connected them to their objects (which would eventually become their referents) and thereafter the gestures were free to become less and less iconic as the gestural community – jointly and gradually – simplified them to make communication faster and easier.

How do the speakers or hearers already know the features (all of which are, of course, categories too)? Well, either directly, from having learned them, the old, time-consuming, risky, impoverished way (through supervised and unsupervised learning from experience) or indirectly, from having learned them by hearsay, through propositions from one who already knows the category to one who does not. Needless to say, the human brain, with its genetically encoded propositional capacity, has a default predilection for learning categories by hearsay (and a laziness about testing them out through direct experience).

The consequence is a powerful default tendency to believe what we are told — to assume hearsay to be true. The trait can take the form of credulousness, gullibility, susceptibility to cult indoctrination, or even hypnotic susceptibility. Some of its roots are already there in unsupervised and supervised learning, in the form of Pavlovian conditioning as well as operant expectancies based on prior experience. 

Specific expectations and associations can of course be extinguished by subsequent contrary experience: A diabetic’s hypoglycemic attack can be suppressed by merely tasting sugar, well before it could raise systemic glucose level (or even by just tasting saccharine, which can never raise blood sugar at all). But repeatedly “fooling the system” that way, without following up with enough sugar to restore homeostatic levels, will extinguish this anticipatory reaction. 

And, by the same token, we can and do learn to detect and disbelieve chronic liars. But the generic default assumption, expectation, and anticipatory physiological responses to verbal propositions remain strong with people and propositions in general – and in extreme cases they can even induce “hysterical” physiological responses, including hypnotic analgesia sufficient to allow surgical intervention without medication. And it can induce placebo effects as surely as it can induce Trumpian conspiracy theories.

Tell One, Tell All

Dogs are undeniably brilliant. They can communicate (some of) what they want and what they know, and they can perceive (some of) what other dogs as well as humans want and know. Stella is especially brilliant.

But neither Stella, nor any other dog, nor any other animal other than human, has language. They cannot communicate linguistically, which means propositionally.

Here is the simple reason why. (This example is just theoretical: I love to have my cat on my bed!):

A “sentence” is not just a string of words.

Pick the simplest of simple sentences: “The cat is on the bed”

If you have a cat and you have a bed, Stella can learn to “call” them “cat” and “bed” by pressing a button that says “cat” and “bed.”

And Stella is definitely smart enough to learn (from your behavior) if you don’t want your cat to go on your bed. (You speak sternly to your cat when he goes on your bed, and you shoo him off.)

So, knowing that, Stella is definitely smart enough to go get you and bring you to the bed when she has seen that the cat is on the bed. She knows you don’t want him to do that, and maybe she also likes to see you shoo him off the bed.

All that is really there: She really does know all that; and she is really communicating it to you, intentionally.

And no doubt she can also learn to communicate it to you by pressing the buttons “cat” “on” “bed” (in place of herding you to the bed the old way).

All of that is incontestably true.

But Stella cannot say “(The) cat (is) on (the) bed.” — And not because she does not yet have a button for “the” and “is.” You could train those too.

The reason Stella cannot say “The cat is on the bed” is that “The cat is on the bed” is a subject/predicate proposition, with a truth-value (true). And if Stella could say and mean that proposition, then she could say and mean any and every proposition — including this very sentence, which is likewise a subject/predicate proposition, with a truth-value (true).

But she cannot. And if she cannot make any and every proposition, then she cannot make a proposition at all.

You will want to reply that it’s just because she doesn’t yet have all the necessary vocabulary (and, for the more complex sentence, she also does not have the interest).

But being able to say and mean any and every sentence is not just a matter of vocabulary. It is a capacity that comes with the territory, if you can really say and mean any proposition at all.

“(The) cat (is) on (the) bed” is not a string of words that you say whenever the cat is on the bed, any more than “bed” is just a sound you utter whenever you are looking at a bed.

Nouns are not just proper names of individuals. A bed is a kind of thing, a category of which all beds are members. So if the word “bed” refers to anything at all, it refers to a category, not an individual.

But dogs can categorize too. To categorize is to do the right thing with the right kind of thing (lie on beds and not-lie on thumbtacks). Dogs (like all mammals) can categorize — and learn to do the right thing with the right kind of thing.

One of the things we can learn to do with a category is name it: “(a) bed.”

But “bed” is not just a sound we make whenever we see a bed. It is a sound we make whenever we have a bed in mind. Whenever we wish to refer to a member of the category “bed.”

Another thing we can do is to describe a bed: “(a) bed is soft” (not true, but ok for an example). 

But “bed is soft” is not just a string of sounds we make whenever we see or have a soft bed in mind. It is a subject/predicate proposition, stating something to be true about the members of the category “bed”: that they are soft.

Now you may think that Stella can say and mean a proposition too. 

How could that be tested, one way or the other?

We have the cue in a property of human language (the only kind of language there is: the rest is just intentional communication capacities, which Stella certainly has): There is no such thing as a “partial language” — one in which you can make this proposition but not that. 

If you have any language at all, you can make any proposition. Vocabulary and prior knowledge are not the problem. You can in principle teach Quantum Mechanics to any member of an isolated Amazon tribe that has had no contact outside his own community and language for thousands of years. You just have to teach the vocabulary and the theory, starting with the words they already have — which means through recombinatory propositions.

This kind of teaching is not training to “name” kinds of thing by their category names; nor is it training to “name” certain states of affairs (involving things and their properties) by certain strings of category names.

Teaching is using propositions (subject/predicate combinations of existing words) to communicate further categories (by defining and describing new categories, which may in turn have their own names). If you know the category to which “bed” refers and you know the category to which “soft” refers, you can communicate that beds are soft by just stating the proposition — if, that is, you have the capacity to express and understand a proposition at all.

There are deep unanswered questions here: Why is our species the only one that can communicate propositionally? Dogs (and apes and elephants and whales and crows) all seem brilliant enough to do it. Why don’t they? What do they lack? Is it specific cognitive capacity? Is it Chomsky’s Universal Grammar? Or is it just a motivational gap?

I don’t know. But I’m pretty sure that no other species has propositionality, otherwise some of them would be discussing it with us by now.

Harnad, S. (2011). From Sensorimotor Categories and Pantomime to Grounded Symbols and Propositions. In M. Tallerman & K. R. Gibson (Eds.), Handbook of Language Evolution. Oxford University Press.

Vincent-Lamarre, P., Blondin Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The Latent Structure of Dictionaries. Topics in Cognitive Science, 8(3), 625–659.