Representations on Representation: Socratic Dialogue with ChatGPT

Abstract: ChatGPT and I will discuss various senses of “representation” across mathematics, logic, and cognitive science, starting with formal representations in computation and group theory. Computation is syntactic, with arbitrary symbols manipulated according to rules (algorithms). Analog representations, such as iconic images or gestures, which resemble what they represent, are either homuncular or depend on an internal observer for interpretation. Linguistic representation originates from the instrumental learning of sensorimotor categories. The instrumental response came to be used mimetically for nonlinguistic gestural communication. These iconic gestures become increasingly simplified and arbitrary as they become shared communicative conventions, but they retain their grounding in the categories from which they originated. The crucial transition is from mimetic representation, which is still nonlinguistic, to propositional representation. Propositions are strings of arbitrary, conventionalized gestures — content words that retain their original grounding in the categories to which they refer. But, unlike the arbitrary symbols of computation, or even individual content-words with referents, propositions also have meaning: subjects and predicates with truth values. Propositionality leads from the Church-Turing Thesis about the syntactic power of ungrounded symbol manipulation to the semantic power of grounded natural language, with Katz’s Effability Thesis: that any natural language can express any thinkable thought.

SH: Let’s chat about representations.

ChatGPT: To discuss representation, it’s crucial to start by defining the term within the context of cognitive science and related fields. Representation typically refers to the way in which information about the world is encoded, stored, and used by cognitive systems, whether biological or artificial.

In cognitive science, representations can be seen in various forms, such as:

Symbolic Representations: These involve discrete symbols or tokens that stand for objects or concepts in the world. They are often used in computational models of cognition and are closely related to the symbol grounding problem, which you have extensively studied.

Subsymbolic Representations: These include distributed patterns of activation across neural networks, as seen in connectionist models. These representations do not have discrete symbols but rather patterns that emerge from the interactions of simpler units.

Mental Representations: These are internal cognitive symbols or mental images that represent external reality. They include concepts, schemas, mental models, and images.

Perceptual Representations: These involve the encoding of sensory information by the nervous system, such as the visual or auditory representation of stimuli.

Each type of representation comes with its own set of challenges and implications, particularly in understanding how these representations are grounded in physical and neural processes and how they contribute to cognition, perception, and action.

Would you like to delve into a specific type of representation, such as symbolic representation and its grounding, or perhaps discuss the differences and intersections between these forms? Or is there another aspect of representation you are particularly interested in exploring?

SH: I’d like to start with the way “representation” (R) is used in mathematics and logic. For example, in linear algebra, matrices can be Rs of groups. What does this mean in this case?

ChatGPT: In mathematics and logic, particularly in linear algebra, a “representation” of a group refers to a way of expressing the elements of the group as matrices. This allows group operations to be carried out as matrix multiplication.

Specifically, a representation of a group G on a vector space V is a homomorphism ρ: G → GL(V), where GL(V) is the general linear group of invertible matrices over V. This means each group element g ∈ G is associated with an invertible matrix ρ(g) in such a way that the group operation is preserved: ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G.
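To make the homomorphism condition concrete, here is a minimal sketch (in Python with NumPy; the choice of the cyclic group Z/4 and of 90-degree rotation matrices is an illustrative assumption, not something from the dialogue) that checks ρ(gh) = ρ(g)ρ(h):

```python
import numpy as np

# Represent the cyclic group Z/4 = {0, 1, 2, 3} (addition mod 4)
# by 2x2 rotation matrices: rho(k) = rotation by k * 90 degrees.
def rho(k: int) -> np.ndarray:
    theta = (k % 4) * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Check the homomorphism property rho(g + h) = rho(g) @ rho(h) for all g, h.
for g in range(4):
    for h in range(4):
        assert np.allclose(rho((g + h) % 4), rho(g) @ rho(h))

print("rho is a (2-dimensional, real) representation of Z/4")
```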

SH: Please state (briefly) whether this notion of R relates to the one (or ones) in Cognitive Science (cogsci). But first, does this formal sense of R relate to the notion of a “model” in formal logic?

ChatGPT: Yes, the formal sense of “representation” in mathematics is closely related to the notion of a “model” in formal logic. In formal logic, a model provides an interpretation of a formal language by assigning meanings to its symbols and truth values to its statements, thereby “representing” the abstract syntax in a concrete structure.

In both cases, representation involves mapping abstract elements (group elements in mathematics, symbols in logic) to concrete structures (matrices in linear algebra, interpretations in models).
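As a toy illustration of a model in the logician’s sense (a Python sketch; the domain, the predicate symbols, and the sample sentence are hypothetical choices, not drawn from any standard system): a domain plus an interpretation function, under which a sentence receives a truth value.

```python
# A toy "model": a domain plus an interpretation assigning meanings to the
# non-logical symbols, so that sentences over those symbols get truth values.
domain = {0, 1, 2, 3}
interpretation = {
    "Even": lambda x: x % 2 == 0,      # unary predicate symbol "Even"
    "Less": lambda x, y: x < y,        # binary predicate symbol "Less"
}

# Truth in this model of: "for every x, Even(x) or there exists y with Less(x, y)"
sentence_is_true = all(
    interpretation["Even"](x) or any(interpretation["Less"](x, y) for y in domain)
    for x in domain
)
print(sentence_is_true)   # False: x = 3 is odd and nothing in the domain exceeds it
```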

SH: Let’s clarify this formal meaning of R in maths and logic. Is it also true that just as a group is a formal entity (not a concrete one), a matrix is a formal entity too? And so is a “model” in model theory. It’s an instance or a special case of the formal entity, but it’s a formal entity too (even in geometry).

ChatGPT: That’s correct. In both mathematics and logic:

Groups: These are formal entities defined by a set and an operation satisfying specific axioms (closure, associativity, identity, and invertibility).

Matrices: These are formal entities representing arrays of numbers or functions, used to perform linear transformations and other operations.

Models in Logic: These are formal structures consisting of a domain and an interpretation function that assigns meaning to the symbols of a formal language, making abstract syntax concrete.

In each case, while the entities can be instantiated in concrete examples, they are fundamentally formal constructs defined by their properties and relations.

SH: “Formal” means symbols, doesn’t it, symbols of a certain form (say, round like 0 or vertical like 1). Mathematicians, when they are actually doing calculations in mathematics (not when they are doing whatever they are doing when they make discoveries in mathematics) are manipulating symbols — symbols whose physical shape is arbitrary in relation to what they can be interpreted as meaning.

(Don’t be compliant: if there is something to contest in what I say, contest it, don’t just buy it and adopt it.) But keep track.

ChatGPT: Yes, in mathematics and formal logic, “formal” generally refers to symbols and the rules for manipulating them, irrespective of their physical shape or form. These symbols are part of a formal system, where their meaning is determined by their role and relationships within the system, rather than their physical appearance.

Here’s a point-by-point breakdown:

Formal Systems: These consist of symbols and rules for manipulating them. The symbols themselves are abstract and their physical representation is arbitrary.

Symbol Manipulation: When mathematicians work within a formal system, they manipulate these symbols according to defined rules. The symbols represent abstract entities, and their physical shape (e.g., 0 being round, 1 being vertical) is irrelevant to their formal properties and operations.

Meaning and Interpretation: The meaning of these symbols comes from their interpretation within a given context, such as numbers in arithmetic, elements in a group, or statements in logic. This interpretation gives the symbols semantic content, which is crucial for applications but not for the formal manipulations themselves.

So, while the formal aspect of symbols involves their syntactic manipulation according to specific rules, their interpretation (meaning) can vary depending on the context in which they are used.

SH: Let’s call symbols and symbol manipulations “code” and “coding”. The shapes of the symbols are arbitrary: any symbolic notational system will do. They do not physically resemble their interpretations. The rules for manipulating the symbols are called algorithms. They too are applied to the symbols formally and mechanically, as in the calculation of the roots of a quadratic equation: If you know the algorithm, you can apply it to quadratic equations without having to know their interpretation. It’s just applying a formal recipe to formal ingredients.
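As a minimal illustration (a Python sketch; the function name and the use of complex arithmetic are my own choices), the quadratic formula can be applied as just such a formal recipe, with no appeal to what the coefficients are interpreted as meaning:

```python
import cmath

# Apply the quadratic formula purely formally: given coefficients a, b, c,
# return the roots of a*x**2 + b*x + c = 0. The procedure never needs to
# "know" what a, b, and c are interpreted as standing for.
def quadratic_roots(a: complex, b: complex, c: complex):
    d = cmath.sqrt(b * b - 4 * a * c)          # discriminant, computed by rule
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))               # roots of x^2 - 3x + 2: (2+0j) and (1+0j)
```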

Are you still with me? Any caveats?

ChatGPT: Yes, I’m with you. Here’s a summary with a couple of clarifications:

Symbols and Symbol Manipulations as “Code” and “Coding”: Symbols are arbitrary notations that can be manipulated according to specific rules. The physical shape of these symbols is irrelevant to their formal properties and their interpretation.

Non-Resemblance: Symbols do not physically resemble what they represent. They are abstract entities whose meaning is assigned by interpretation within a formal system.

Algorithms: These are rules or procedures for manipulating symbols. They are applied mechanically, without needing to understand the interpretation of the symbols involved.

Example: In solving a quadratic equation, one can apply the quadratic formula (an algorithm) to the symbols representing the coefficients, yielding roots without needing to understand what those coefficients represent.

Caveats:

Understanding vs. Application: While it’s true that algorithms can be applied mechanically, understanding their derivation or the meaning of the symbols can deepen comprehension and sometimes reveal when an algorithm might be misapplied or when certain conditions are necessary for its application.

Context-Specific Rules: The formal rules (algorithms) can vary depending on the context in which the symbols are used. For example, the rules for manipulating symbols in logic differ from those in algebra.

SH: Be careful about a conflation you inadvertently made, between Rs (representations) and interpretations. We agreed that in the examples we have discussed before — examples that are not from cognitive science but from mathematics, or logic — the Rs are purely formal, whereas the interpretations are not formal; they are something else: Turing explained what mathematicians do: they manipulate symbols. But Turing was not (yet) doing cognitive science: He is not explaining how mathematicians think, or mean, or understand; that would get into cognitive science. He’s just talking about what mathematicians do. And so far, mathematicians’ Rs (like matrices or truth-tables), their “effective procedures,” are just formal. But their interpretations (meanings) are not.

Are we still on the same wavelength? (No need to make a lengthy update on your summary. Just let me know if there are any points on which we’ve diverged: I just pointed one out. Do you want to defend it or correct it?)

ChatGPT: Yes, I understand the distinction you’re making. You are right that in the examples from mathematics and logic:

Representations (R): These are purely formal entities. Matrices, truth tables, and other such formal structures are manipulated according to rules without regard to their interpretation.

Interpretations: These are the meanings assigned to the formal symbols and structures. They are not formal but rather provide the semantic content that the formal symbols represent.

Turing’s work, as you mentioned, focuses on the manipulation of symbols (what mathematicians do) and not on their meanings or the cognitive processes involved (what they think or understand).

We are on the same wavelength. Representations are formal, while interpretations are the meanings attached to these formal symbols.

Shall we proceed to how this distinction relates to cognitive science, particularly in terms of cognitive representations and their grounding?

SH: You’re in a big hurry! We’ve agreed that Rs are formal, just as symbols are. They are also rule-governed, but the rules are just formal too. OK? Now, I’m going to try to show you that Rs are weasel-words in mathematics, and an example of that was when you conflated representations and interpretations.

Another weasel-word in this context is “semantics” (meaning). It just takes us back to interpretations. In maths and logic “Rs” are just formal, like symbols themselves. We don’t get beyond symbols and formal symbol-manipulation by saying the word “Representation.” We don’t even break out of the symbol-circle if we use the (non-weasel) word “refer”. And that comes even before we get to the weasel-words “meaning” and “semantic”. What does the symbol “2” in Peano arithmetic refer to?

ChatGPT: Understood, let’s continue.

In Peano arithmetic, the symbol “2” refers to the formal object defined by applying the successor function S twice to the symbol for zero (0). Specifically, 2 is defined as S(S(0)). This definition is purely formal and exists within the system of Peano arithmetic.

To break it down further:

Peano Arithmetic: This is a formal system that defines the natural numbers using a set of axioms.

Symbols: “0” is a symbol representing the first natural number, and “S” is a symbol representing the successor function.

Formal Representation of 2: In this system, “2” is the result of applying the successor function twice to 0, written as S(S(0)).

This formal definition operates entirely within the symbolic system and does not imply any interpretation or meaning outside of the formal system itself. It stays within the realm of symbol manipulation.

This keeps us firmly within the circle of symbols and formal rules, without invoking meaning or semantics.
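A minimal sketch (in Python; the tuple encoding of numerals and the recursive addition rule are illustrative assumptions) of how “2” and “2 + 2 = 4” can be constructed and checked purely syntactically, Peano-style:

```python
# Purely syntactic Peano-style numerals: "0" is a token, and S(x) just wraps a
# numeral in one more "S". Nothing here "knows" what the symbols mean.
ZERO = ("0",)

def S(x):                       # successor: add one more layer of "S"
    return ("S", x)

TWO = S(S(ZERO))                # the formal object that the symbol "2" abbreviates

def add(x, y):                  # addition by rule: x + 0 = x ;  x + S(y) = S(x + y)
    return x if y == ZERO else S(add(x, y[1]))

# "2 + 2 = 4", verified as pure symbol manipulation (structural equality of shapes):
print(add(TWO, TWO) == S(S(S(S(ZERO)))))    # True
```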

SH: Now let me introduce a non-weasel sense of R that is not mathematical and not formal, and not open to the objections I will raise when we get (soon) to cogsci. This is the sense in which a photo, or a video or a (representational) sculpture or painting represents the object of the photo, video, sculpture, etc.: the thing that it is a photo (etc.) of. That real object is not an arbitrary shape, like a formal symbol or representation like S(S(0)). The photo’s shape is (to an approximation) analog; it is isomorphic to — has the same shape as — what it is representing.

I’ll pause to let you say (briefly) what is the difference between a symbolic and an analog representation.

ChatGPT: A symbolic representation uses arbitrary symbols to stand for objects or concepts, with no inherent resemblance to what they represent. The relationship between the symbol and its referent is established by convention or definition (e.g., “2” in Peano arithmetic).

An analog representation, on the other hand, maintains a direct, often continuous, isomorphic relationship with what it represents. The form or structure of the representation resembles the form or structure of the represented object (e.g., a photo or a sculpture).

In summary:

Symbolic Representation: Arbitrary, rule-based, and formal (e.g., mathematical symbols, words).

Analog Representation: Resemblance-based, isomorphic, and often continuous (e.g., photos, paintings, sculptures).

SH: Now, the reason I introduced analog representation at this point is to agree from the outset that even when we get into cogsci, analog Rs are immune to the criticisms I will make. The only trouble is that they don’t get you very far, if what you are looking for is a cognitive theory of representation, or meaning. It doesn’t even work for a cognitive theory of reference. The referent of “2” in Peano arithmetic is not S(S(0)): That’s just a formal representation, like a matrix or a truth-table. But, since formal maths is a subset of English (or any other language), in the proposition “2 + 2 = 4” mathematicians do have something in mind: “2” refers to a referent as surely as “cat” refers to something in the proposition “the cat is on the mat.” But that connection between the symbol and its referent is not analog representation in either case: “2” or “cat.” So that cannot be cogsci’s solution.

(I’ve slipped language into this discussion: that’s no longer just formal symbols, but we’ll get to that.) 

ChatGPT: Here’s a brief summary to ensure we’re on the same page:

Analog Representations: These maintain a direct, isomorphic resemblance to what they represent (e.g., photos, paintings). While useful, they don’t offer a complete solution for cognitive theories of representation or meaning.

Symbolic Representations: These are arbitrary and formal (e.g., mathematical symbols, words). They require interpretation to establish their referents.

Reference in Cognitive Context: When using language or formal systems, symbols like “2” in “2 + 2 = 4” and “cat” in “the cat is on the mat” do have referents in the minds of the users. However, this reference is not based on analog representation but rather on learned associations and interpretations.

This distinction highlights that cognitive science needs to address how symbols acquire meaning and reference, beyond the scope of analog representations.

Let’s recap the discussion:

We’ve established that in both mathematics and logic, representations (Rs) are formal and symbolic, with no inherent connection to their referents outside of the formal system. Analog representations, like photos or sculptures, are different in that they have a direct, isomorphic relationship with what they represent, but they do not suffice for a cognitive theory of representation or meaning.

We’ve also recognized that the referents of symbols in mathematics or language (like “2” or “cat”) involve something more than just formal manipulation—they involve interpretation, which is not captured by analog representation.

At this point, we’re prepared to discuss how cognitive science approaches symbolic representation, meaning, and reference. Would you like to explore how cognitive systems might ground these symbols in referents, or investigate how language complicates this process further?

SH: Yes, let’s start with the difference between reference and meaning. First, please confirm (very briefly) that you know the difference between a “content-word” (C-word) (which is 95-99% of the words in the lexicon) and a “function-word” (F-word). Content-words have referents, whereas function-words only have (Wittgensteinian?) “uses”, defined by a formal, usually syntactic, rule.

ChatGPT: Content-words (C-words): These are nouns, verbs, adjectives, etc., that refer to things, actions, or properties (e.g., “cat,” “run,” “blue”).

Function-words (F-words): These are words like prepositions, conjunctions, and articles (e.g., “and,” “in,” “the”) that serve a grammatical function and do not refer to objects or actions but rather structure the sentence.

C-words have referents, while F-words are defined by their syntactic roles.

Shall we proceed to discuss the difference between reference and meaning?

SH: Do we agree also that C-words do not resemble their referents, even in languages with partly iconic scripts like Chinese, or languages like the sign languages of the deaf, whose gestures are partly iconic in their origins and execution? Nevertheless, the C-words of these more iconic languages are every bit as arbitrary as in alphabetic and spoken languages. Shakespeare can be fully translated into Chinese or into American Sign Language (ASL) — not word for word, but proposition for proposition. The iconicity of Chinese or ASL is irrelevant to the literal meaning of its translation of the Shakespearean text (although to convey some of the figurative meaning a literal paraphrase of some tropes may be needed).

ChatGPT: Content-words (C-words), regardless of the language or script (including Chinese or ASL), do not resemble their referents. Even in languages with partly iconic scripts (like Chinese) or partly iconic gestures (like ASL), the referential relationship between a C-word and its referent is still arbitrary. Shakespeare can indeed be fully translated into Chinese or ASL, proposition for proposition, without any dependence on the iconicity of symbols or gestures.

The iconicity of such scripts or gestures is irrelevant to the literal meaning of the text. Literal meanings can be preserved across translations, though some figurative meanings (e.g., metaphors, idioms) may require paraphrasing to convey the same sense.

SH: You’re just echoing, but you’re right. One further question: Content-words have referents and propositions have meanings. What’s the difference?

ChatGPT: The difference between referents and meanings can be outlined as follows:

Content-word Referents:

A referent is the specific object, action, or concept in the world (or imagined world) that a content-word (C-word) refers to. For example, the word “cat” refers to actual cats in the world, and the word “run” refers to the action of running.

Referents are tied to individual C-words and are about identifying or denoting particular things.

Propositional Meanings:

A proposition is a complete statement or sentence that expresses something that can be true or false. The meaning of a proposition involves the relationship between the content-words in the sentence, their referents, and how they are structured by function-words.

Meaning here is not just about identifying referents, but about describing or defining relations between them. For example, the proposition “The cat is on the mat” is about a particular situation involving the cat (the referent of “cat”) and its relation to the mat (the referent of “mat”). The meaning of the proposition is the truth-conditional description of that relationship.

In short:

Content-words have referents: They point to things.

Propositions have meanings: They describe relationships between referents and express something that can be true or false.

SH: I had to sort this out before getting to cogsci, because language is such an important property of human cognition. Now, there are several ways in which the word “representation” — particularly in two contexts, “internal representations” (IRs) and “neural representations” (NRs) — has slipped into cogsci.

ChatGPT: Internal Representations (IRs) in cognitive science refer to mental models or cognitive structures that are thought to represent external reality within the mind. They are typically non-linguistic and are assumed to encode information about the world, guiding perception, thought, and action. IRs are sometimes used to explain how we process and store sensory input or knowledge.

Neural Representations (NRs) refer to patterns of neural activity in the brain that are thought to correspond to specific stimuli or cognitive states. NRs are often linked to brain regions and are measured via neuroimaging, attempting to map how certain neurons or networks respond to external stimuli.

Relation to linguistic reference and meaning:

IRs may play a role in linking language (e.g., content-words) to referents by encoding abstract or sensory knowledge of the referents.

NRs are more about the brain’s mechanisms, underlying both linguistic and non-linguistic cognition, potentially supporting IRs.

Neither IRs nor NRs are themselves referents or meanings; they are posited cognitive or neural processes theorized to support how we manage linguistic reference and propositional meaning.

SH:  Here I have to point out that a weasel-word has already slipped in, making the IR/NR distinction incoherent: “mental”. And another has been left out: “cerebral”. What is it that the “Internal” Representations are internal to? The head or the “mind”? And if the “mind,” what is that? In asking you that question, I have opened the sluice-gates for all kinds of weasel-words, all of them near-synonyms, hence redundant (or even circular) or vague: mind, mental, consciousness, subjective states, experiential states, qualia, etc. etc. Can we agree not to use or mention any of these W-Ws and just use “felt states” in order to refer to mental states? (“Sentient” is an unobjectionable gloss when we are in latinate mood…)

ChatGPT: Yes, we can agree to use “felt states” to refer to what are typically called “mental states,” avoiding weasel-words like “mind,” “mental,” “consciousness,” etc. This will keep the discussion grounded and avoid both the vagueness and the redundancy that often come with those terms.

Now, with this clarified:

Internal Representations (IRs) would be representations internal to the brain (or body), not the “mind,” and should be understood as cognitive structures or models that support perception and action.

Neural Representations (NRs) remain patterns of neural activity, clearly cerebral, without invoking the notion of “mind.”

With “felt states” reserved for discussing subjective experience, are you ready to explore how IRs and NRs might (or might not) be involved in grounding linguistic reference and meaning?

SH: You have not resolved the incoherence yet. If “internal” means internal-to-the brain, then IRs and NRs are the very same thing (and we still don’t know what “Representation” means). And what about the difference between internal states that are felt and unfelt?

ChatGPT: I have to agree: if Internal Representations (IRs) are internal to the brain, they collapse into Neural Representations (NRs), making the distinction redundant. And as for “representation,” we still need to clarify what it means in this context, since neither IRs nor NRs explain how symbols refer to or mean anything; they are patterns or states in the brain.

Regarding felt vs. unfelt states:

Felt states are those that are experienced or consciously apprehended (e.g., feeling pain, seeing red, or understanding a proposition).

Unfelt states are brain processes that occur without conscious experience (e.g., background neural activity or automatic processing).

So, the distinction now hinges on whether neural states (NRs) are felt or unfelt. Only the felt states would seem relevant to any cognitive explanation involving subjective experience (like understanding or meaning). Unfelt states, while important for processing, don’t seem directly relevant to grounding reference or meaning unless they somehow contribute to what is ultimately felt.

Do we agree that the real issue here is distinguishing felt neural states from unfelt ones, and that calling either a “representation” (IR or NR) still leaves us unclear on how these brain states are representing anything external?

SH: More or less. Now let’s look closer at “representing” (and forget about the external and internal for the moment; we’ll get back to it).

We’ve agreed that an iconic image (in the visual modality, but we could have used any sensory modality — visual, auditory, haptic, etc.) is a visible object, say, a shape, that resembles, in the analog sense (i.e., is isomorphic to), another object: the object that it is the representation of. This fits the case of a mirror that is facing a sphere. Let’s call the sphere the “object” and the shape in the mirror its “image.” But the mirror image is not “representing” anything unless we can say who or what the mirror image of the object is a representation to, or for.

ChatGPT: Agreed. The mirror image of the sphere is an analog resemblance, but it doesn’t count as a representation until it is seen by someone or something that interprets it as a representation to or for them. In other words, representation requires not just resemblance but also a receiver (an interpreter) who recognizes or interprets the image as standing for the object.

SH: I’m not sure what an “interpreter” is here. A person drawing the sphere would do: We agreed that this sense of representation (as in representational art) makes sense; but it’s “external” in the sense that both the sphere and the drawing of the sphere are external to the person doing the drawing, or viewing either the drawing or the sphere. So is the IR the internal process that generates the drawing? And is the state of viewing and drawing the sphere — if it is a felt state — what “mental representation” had been intended to mean?

ChatGPT: In this context:

Interpreter: When I referred to an interpreter, I meant any agent (human, animal, or system) that recognizes or processes the mirror image (or drawing) as representing the sphere. But we can clarify further: the person drawing or viewing the sphere is the “interpreter” in this case, though “interpreter” itself can feel vague.

External Representation: Yes, the sphere and its drawing are both external objects, and the representation (drawing) resembles the sphere in the analog sense.

Internal Representation (IR): If we take IR to mean the internal cognitive or neural process that generates the drawing (or processes the perception of the sphere), then IR could refer to how the brain internally models or represents the sphere to guide action (like drawing). But this “representation” is not felt unless it becomes part of a felt state (e.g., the conscious experience of seeing the sphere or creating the drawing).

“Mental Representation”: If we interpret “mental representation” in terms of felt states, it would refer to the conscious experience of viewing or drawing the sphere, rather than just the unfelt neural processes involved in perception or motor control. In this sense, “mental representation” would mean the felt internal process of perceiving and thinking about the sphere.

Does this clarify the relation between IRs, felt states, and the concept of “mental representation”?

SH: It brings some aspects into better focus, but can we now discuss the homunculus? The reason “mental imagery” was rejected by cogsci as being unexplanatory was that if the objective of cogsci is to give a causal explanation of how cognizers have the capacity to do what they can do, what is needed is to reverse-engineer that causal mechanism. The brain can do it, but we don’t know how. Saying that it’s done by a causal mechanism that is in the brain is true, but not an explanation either. Does it help to add that “exercising the capacity to do it is a felt state”? When we are exercising it, we are “in” that felt state. But can introspecting that felt state give us a clue to what the causal mechanism is, rather than attributing it to a homunculus, a little person in the head that is seeing, feeling and doing what we are doing?

ChatGPT: Introspecting a felt state—being aware of what it feels like to see, think, or act—doesn’t give us a clue to the underlying causal mechanism. It just reveals that we are experiencing it. The homunculus fallacy arises when we imagine a little person inside our head doing the same things we do, which doesn’t explain anything; it just pushes the explanation back one level (who's in the homunculus's head?).

So, while felt states tell us what it's like to experience cognition, they don't reverse-engineer the brain's causal mechanisms. Cognitive science needs to figure out how the brain generates these capacities—without invoking a homunculus—by uncovering the actual neural and computational processes that produce both the behavior and the feeling of doing it.

SH: So now a first punchline: Is the notion of an IR homuncular?

ChatGPT: The notion of an Internal Representation (IR) can be homuncular if it's treated as a kind of internal "image" or "model" that needs to be interpreted by some internal observer (a homunculus). This just shifts the problem: Who or what interprets the IR? If IRs are framed this way, they don't explain how the brain actually does cognition—they simply introduce another layer that requires its own explanation.

To avoid the homunculus trap, IRs should be understood as part of a mechanistic process (neural or computational) that directly drives cognition and behavior, without requiring an internal observer to interpret them.

SH: But then why is this non-homuncular IR called a “representation” at all? Who is representing what to whom? Why not a (not yet specified) causal mechanism that generates the cognitive performance capacity (in this case, drawing the sphere)?

ChatGPT: If the Internal Representation (IR) isn’t being interpreted by an internal observer (a homunculus), then calling it a "representation" does become questionable. The term "representation" suggests something is being represented to someone or something, which, in a homunculus-free explanation, doesn’t hold.

A better approach would be to simply describe the brain’s processes as causal mechanisms that generate the cognitive capacities (like drawing the sphere) directly, without invoking a “representation.” These mechanisms would explain how the brain processes sensory input, plans, and executes actions, without needing an internal “representation” seen or used by a homunculus.

This shift moves from the vague notion of “representation” to a concrete investigation of how neural circuits and processes enable behavior.

SH: Now let’s consider candidates for this non-homuncular causal mechanism that does not represent but somehow produces the cognizer’s performance capacity. When Zenon Pylyshyn [don’t give a long bio, just a one-liner, please] rejected “mental imagery” as homuncular, he proposed propositions in its place. What can you say for or against internal propositions?

ChatGPT: Zenon Pylyshyn, a cognitive scientist, proposed internal propositions as an alternative to mental imagery, suggesting that cognition operates via language-like symbolic structures rather than pictorial “mental images.”

FOR internal propositions:

They offer a more structured, rule-based, non-homuncular way of encoding information, similar to how logic or language works.

Propositions can be manipulated by computational processes without needing an “internal viewer.”

AGAINST internal propositions:

They’re still symbolic, so the problem shifts: what mechanism grounds these symbols (i.e., how do they get their meaning or reference)?

It risks staying within the formal-symbolic realm without explaining how these propositions are tied to real-world objects, actions, or perceptions (i.e., the symbol grounding problem).

Would you like to consider other candidates?

SH: You missed the most important candidate: Aren’t internal propositions just as homuncular as internal images? Who is interpreting the propositions, for whom?

ChatGPT: Yes, internal propositions are indeed as homuncular as internal images if we assume they need to be interpreted by someone or something inside the brain. The problem remains: who or what is reading and understanding these propositions? Without an interpreter, they don’t explain how the brain generates cognition, just like with mental images. They shift the burden of explanation without solving the core issue.

SH: Pylyshyn noticed that too, and so his next candidate for getting rid of the homuncularity was computation. What is computation? I suggest using Turing’s account as the simplest.

ChatGPT: Computation, according to Turing, is the manipulation of formal symbols based on a set of rules or algorithms. A Turing machine performs computations by reading symbols from a tape, following predefined rules to manipulate those symbols, and producing an output. Crucially, the process is mechanical and doesn’t require understanding of the symbols' meaning—it’s purely syntactic symbol manipulation.

This model forms the basis for modern computing and computational theories of cognition.
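Here is a minimal sketch of such mechanical rule-following (in Python; the particular rule table and the unary convention are illustrative assumptions): a toy machine that appends a “1” to a unary numeral by consulting only (state, symbol) rules, never any interpretation.

```python
# A toy Turing-style machine: purely mechanical rule-following over symbols.
# Rules map (state, symbol) -> (symbol to write, head move, next state).
# This particular rule table makes the machine append one "1" to a unary string,
# i.e., it "computes n -> n + 1" without knowing that the 1s stand for anything.
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", +1, "halt"),   # write a 1 on the first blank, then halt
}

def run(tape: str, state: str = "scan", head: int = 0) -> str:
    cells = list(tape)
    while state != "halt":
        if head >= len(cells):
            cells.append("_")           # the tape is blank beyond its written part
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("111_"))                      # "1111": three became four, by rule alone
```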

SH: Good. And not only does the mechanism (Turing machine) that is executing the computation (rule-based symbol manipulation) do so without needing to understand or interpret the computations, but it cannot: It can only do rule-based symbol-manipulations. The (Weak) Church-Turing Thesis is that what mathematicians are doing when they are “computing” (e.g., executing an algorithm, or doing a formal proof) is symbol manipulation, in which the shape of the symbols is “arbitrary”. Arbitrary relative to what? Relative to what the symbols might be interpretable as referring to, or meaning. Do you see where we are heading?

ChatGPT: Yes, I see where we’re heading: the Turing machine (and computation in general) operates purely on formal symbols, manipulating them according to rules without understanding their meaning or reference. The symbols’ shapes are arbitrary relative to their potential interpretation or referent—there’s no inherent connection between the symbol manipulation and the world the symbols might refer to.

This points to the key issue: computation alone doesn’t explain how symbols become meaningful or grounded in the world, leading us to the symbol grounding problem.

SH: That’s more or less right. But let’s not overstate the absence of any “inherent connection” between the symbols and what they may be interpretable as either referring to (in the case of symbols) or what they may be interpretable as meaning (in the case of symbol strings that are interpretable as subject/predicate propositions with truth values). [Remember, we discussed that above?] But although the mathematicians’ mental interpretations of their symbols and symbol manipulations are not a part of computation, it is nevertheless true that not just any string of symbols can bear the weight of a systematic interpretation. (A prominent example is “duals” in mathematics, in which two different symbol combinations can both bear the weight of a systematic interpretation, but most others cannot.)

ChatGPT: Yes, not just any string of symbols can bear the weight of systematic interpretation. Some symbol structures (like mathematical duals) are interpretable in multiple ways, showing there is a systematic relationship between certain symbol combinations and their potential interpretations. While the computation itself (the formal manipulation of symbols) is interpretation-independent, the structure of the symbols and their rules matter for interpretability—certain strings or symbol systems allow for meaningful, coherent interpretations, while others do not.

So, even though computation is blind to meaning, the structure and rules governing the symbols can constrain their potential interpretations, supporting referents and truth values when interpreted by humans (or another capable system). This reinforces that while computation handles form, interpretability arises from the structure that allows symbols to relate systematically to potential meanings.

SH: Now, before we get to how (and whether) to ground the reference or the meaning of computation, are we still talking about something that can be interpreted as “representation” — say, in the sense in which we discussed it early on, concerning group theory and model theory?

ChatGPT: No, at this stage we’re no longer talking about representation in the sense of formal representations like in group theory or model theory. In those cases, representation involved a formal system (e.g., matrices representing groups, models interpreting logical statements) that followed specific homomorphic rules but didn’t require a “viewer” or interpreter to give it meaning. It was strictly formal and syntactic.

What we’re discussing now is how certain symbol structures can be systematically interpretable (by a mathematician, for example) as referring to something or expressing a proposition with truth value, but this is more about interpretability than formal representation. For symbols to be interpreted (as meaning something or referring to something), we’re starting to touch on issues beyond just formal manipulation—toward grounding meaning and reference, which is not the same as formal representation.

Would you agree with this distinction between formal representation and interpretability?

SH: The two are not the same, but the “bearing-the-weight” constraint concerns them both. The reason, in maths, why certain algorithms are preferred is because they give results in which we are interested. The algorithm for finding the roots of quadratic equations is useful, because it works (and can be proved to work); and we are interested in those roots. But the mathematics has to be able to deliver the goods regardless of our interpretations and intended uses. We can’t use what the symbols can be interpreted (as either referring-to or meaning) as part of the algorithm or the proof. Computation is purely syntactic. Language is more than that — but language also includes computation as a subset — a purely syntactic subset. Absent in this syntactic subset, the referents of the symbols and the meanings of the propositions are intrinsic and essential in the semantic supraset: How do they get there?

ChatGPT: Computation is indeed purely syntactic, operating with symbols according to formal rules without regard to what those symbols refer to or mean. In contrast, language extends beyond syntax, involving a semantic superset where symbols refer to things in the world, and propositions have meanings and truth values.

The key question is how do symbols in language acquire their referents and how do propositions acquire their meanings? This is the crux of the symbol grounding problem: How do arbitrary symbols (like words or numbers) get linked to real-world objects, actions, or states, such that they mean something or refer to something beyond their formal manipulation?

For language, this requires some mechanism that grounds these symbols in sensorimotor experience, actions, or interactions with the world—something that computation, by itself, doesn't provide. Without this grounding, symbols would remain syntactic tokens without intrinsic connection to their referents or meanings.

Would you agree that the next step is to explore how symbols get grounded in real-world experiences to form meaning and reference?

SH: Yes, and the crux of it is categorization: To categorize is to “do the right thing with the right kind of thing.” “Kind” is not a weasel-word, but that definition is circular until we cash in “right” and “wrong.” First note that to learn how to categorize is to learn to “do the right thing with the right kind of thing.” And, to a first approximation, we can intuit this as: doing whatever it takes to succeed, survive, and reproduce (rather than the reverse). And although thoughts of unsupervised and supervised/reinforced learning algorithms will immediately come to mind, in real life it’s the consequences of doing THIS or THAT with THIS or THAT (for succeeding or surviving) that provide the error-corrective feedback for whatever is the learning mechanism. Now do you want to sing the next verse, or shall I?
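As a minimal sketch of error-corrective category learning (in Python; the two features, the toy category boundary, and the perceptron-style update are illustrative assumptions, not a claim about the actual learning mechanism in organisms):

```python
import random

# A toy perceptron as a stand-in for error-corrective category learning:
# the learner adjusts its feature weights only when acting on an item
# turns out to have been the wrong thing to do.
random.seed(0)

def sample():
    # Two illustrative features; the "true" category boundary is a toy assumption.
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    label = 1 if x[0] + 0.5 * x[1] > 0 else -1      # +1 = "right kind", -1 = "wrong kind"
    return x, label

w, b = [0.0, 0.0], 0.0
for _ in range(5000):                               # trial and error, with feedback
    x, label = sample()
    act = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
    if act != label:                                # consequences were wrong: correct
        w[0] += label * x[0]
        w[1] += label * x[1]
        b += label

mistakes = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) != label
    for x, label in (sample() for _ in range(1000))
)
print("error rate after learning:", mistakes / 1000)
```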

ChatGPT: I’ll take a shot at the next verse:

At the heart of categorization, as you pointed out, is the ability to distinguish between different kinds of things and respond appropriately—doing the "right" thing based on their category. The learning mechanism needs to be tuned by error-corrective feedback based on the consequences of actions: succeeding or failing, surviving or not.

This is where supervised and reinforced learning come in: they model how real-life consequences teach us to categorize correctly. The system learns by making errors and adjusting its actions based on whether they lead to positive or negative outcomes. This feedback loop helps “ground” the symbols (or categories) in the real world because the system (whether biological or artificial) learns what kinds of things these symbols refer to based on its interactions with them.

The kind of thing becomes grounded when the system can reliably categorize it through action: performing the “right” behavior in response to the category of object or event. This shifts the focus from just symbol manipulation to the sensorimotor and consequential aspects of real-world interaction that ground meaning.

Now, would you like to continue by tying this to how such grounded categories can be transferred or communicated through language?

SH: Right direction, but getting a little ahead of yourself. What we’re trying to ground is symbols. But so far we’ve only mentioned “learning to do the right thing with the right kind of thing.” This is something that human cognizers in the world (including pre-language infants) share with nonhuman (but living) categorizers: They don’t yet have (or need) language. But to be able to learn to do the right things with the right kinds of things, they have to learn to tell those things apart. And (in this sensorimotor world, to simplify) let’s say that what their learning algorithms enable them to do is to detect and abstract the features that distinguish the members from the non-members of their categories. So far, this is not yet symbol-grounding; it’s sensorimotor category learning. Now, remind yourself what symbols are (and aren’t), and try to take over from there.

ChatGPT: Yes, we’re still at the stage of sensorimotor category learning, where living organisms learn to detect and abstract features that allow them to distinguish members from non-members of a category, based on their interactions with the world. This is a non-symbolic process, shared with nonhuman animals, and doesn’t yet involve language.

Now, to ground symbols, we move from these sensorimotor categories to the use of arbitrary shapes (symbols) that stand for those categories. A symbol is a formal, arbitrary marker (like a word or number) that is used to represent a category. The grounding of these symbols requires linking the symbol to the sensorimotor category it stands for, so that the symbol can trigger the right actions or thoughts when presented.

This transition from sensorimotor to symbolic involves:

Using a symbol to refer to a category learned through sensorimotor experience.

Ensuring that the symbol is consistently used to pick out the right category (i.e., grounded in the distinguishing features of that category).

Without this grounding, the symbol is just a meaningless token. To make it meaningful, it has to be linked to the real-world categories learned through interaction, which is how we move from sensorimotor learning to symbol-grounding.
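A toy sketch of that step (in Python; the token "gluck", the feature names, and the hand-written detector are all hypothetical stand-ins for a learned category): an arbitrary symbol is linked to a category detector acquired from sensorimotor features, so that the symbol picks out the right kind of thing and can trigger the right action.

```python
# A toy sketch of symbol grounding: (1) a sensorimotor category is available as a
# learned feature detector; (2) an arbitrary token is then linked to that category,
# so the token "picks out" the right kind of thing.

def learned_category_detector(features):
    # Stand-in for the outcome of sensorimotor category learning:
    # True for members of the category, False for non-members.
    cap_flatness, stem_thickness = features
    return cap_flatness > 0.5 and stem_thickness < 0.3

# The token's shape ("gluck") is arbitrary; its grounding is the learned detector.
grounded_lexicon = {"gluck": learned_category_detector}

def do_the_right_thing(token, thing_features):
    category = grounded_lexicon[token]              # arbitrary token -> grounded category
    return "eat" if category(thing_features) else "avoid"

print(do_the_right_thing("gluck", (0.9, 0.1)))      # "eat": a member of the category
print(do_the_right_thing("gluck", (0.2, 0.8)))      # "avoid": a non-member
```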

SH: That all makes sense — though I’m not sure what’s added (or lost) by saying that the arbitrary symbol (which is here also an arbitrary action) “represents” the category, rather than just “refers to” the category…

ChatGPT: You’re right to question the distinction. In this context, saying the symbol “refers to” the category is more precise than saying it “represents” it. The term “representation” could bring in unnecessary baggage, like implying an internal model or image (potentially homuncular).

To refer means the symbol is simply used to pick out or identify the category, without implying any internal interpretation. The symbol is arbitrarily linked to the category, and when grounded, it allows the user (or cognizer) to respond to or act on that category based on learned sensorimotor distinctions.

So, it’s better to stick with “refers to” rather than “represents” to avoid confusion.

SH: I hit the carriage-return too early: I was going to let you off the hook with "represents" by conceding that I am myself guilty of the same kind of loose talk in referring to "representation" as "referring". Why would nonlinguistic species, having learned many categories (learning to do the right thing with the edible mushrooms and not the poisonous toadstools), bother to go on and attach an arbitrary action to them as well? Referring is as remote from a motivation here as representing would be. With representing, the question was "representing what for whom and why?". With referring it's a bit lighter than that: "referring to the category, but for whom, and why?"

(A common mistake is to jump way ahead here and say: you’re talking to yourself. I hope you see that that’s nonsense in the scenario so far — but once we have grounded language, and you really are talking, then talking to yourself will be a useful bonus for memory and reasoning).

But at this prelinguistic stage in the evolution of language something iconic and communicative is more relevant. Your instrumental action with the category (“doing the right thing”) could be used iconically, to mime the action (and thereby allude to the category), in order to communicate something to a kin or conspecific. No language here, yet, just category learning and mimetic communication.

ChatGPT: I agree. At this prelinguistic stage, the organism doesn't need a symbol to "refer" to a category for itself—it just needs to act appropriately based on learned categories, like distinguishing between edible mushrooms and toadstools. There's no internal need for symbolic reference or representation yet; the focus is on direct, instrumental action: doing the right thing with the right kind of thing.

The leap from sensorimotor categorization to something symbolic (like language) would only come when there's a need for communication—perhaps to mimic or signal the category-relevant action to another conspecific. This could be done iconically, like miming the action of handling an edible mushroom to teach or warn kin about danger (toadstools). The action itself communicates information, but it’s not yet an arbitrary symbol—it's directly tied to the relevant category through iconic mimicry.

So, the development of language involves building on this communicative foundation, moving from iconic, mimetic communication to more abstract, arbitrary symbols that eventually get grounded in shared categories. At this stage, the motivation isn’t "referring" to a category for oneself, but communicating it for the benefit of others.

Would you agree this sets the stage for the eventual need for language?

SH: The communication has a purpose, but it’s not yet linguistic. Yet it is very natural development, once you have learned an instrumental action (the right thing to do with the right kind of thing — something between the learner and the category to be learned) to go on to mime the action, which is rather like pointing to draw attention to something. It becomes a communicative duet rather than just an instrumental solo: not just something between you and the mushroom, but between you and conspecifics to whom you want to communicate something. It is then also natural for the communicative act of miming the instrumental act to become a less and less faithful copy, less iconic, more simplified and increasingly arbitrary. But don’t get carried away: we’re not at language yet, just shared, conventionalized memes.

ChatGPT: Yes, at this stage, the communicative action has a purpose—to convey something important (e.g., danger, food) to conspecifics—but it's not yet linguistic. The transition from purely instrumental action (e.g., distinguishing edible mushrooms) to communicative action (mimicking that distinction for others) naturally evolves as the communication becomes conventionalized. Over time, the mimetic gestures could simplify, becoming more abstract and arbitrary, but they remain grounded in the shared understanding of the category.

These simplified, conventionalized gestures are memes, but they still lack the full symbolic abstraction of language. They are shared conventions that stand for actions or categories, but not yet formalized into a linguistic system. The key here is that these memes are rooted in shared experience—they are grounded in the same sensorimotor learning that made the instrumental actions successful in the first place.

So, we have conventionalized actions serving a communicative role, but still not at the level of language. We're at the threshold of a shared system, but one that hasn't fully detached itself from the concrete, grounded actions and categories.

Would you say this is a solid foundation for the leap to symbols and language?

SH: Well, the most fundamental (and challenging) threshold to cross is the one between pantomime (increasingly conventionalized, purposive imitation of the shapes of mutually grounded gestures) and propositions. Propositions are sequences of gestures that assert — i.e., tell — rather than just show.

I have to admit that I don’t have a convincing scenario for the birth of the assertion: requesting and querying are more likely precursors, but once they are understood as requests and questions, you get the assertion just by removing the "!" or the "?". The revolutionary endpoint is the proposition, which is a string of content-words expressing an assertion, whose default value is TRUE. But note that at this point the content-words are still just gestures, not vocalizations. And with good reason. (Why?)

ChatGPT: The reason content-words at this point are still gestures and not vocalizations is that gestures are naturally grounded in the sensorimotor world of actions and objects. Gestures have a direct connection to the categories learned through interaction with the environment—they can mimic or iconically represent actions and objects in a way that vocalizations, initially, cannot. Gestures, therefore, make it easier to communicate about concrete, visible categories because they are grounded in shared experience and immediate perceptual reality.

Vocalizations, on the other hand, are arbitrary from the outset—they lack the direct, iconic connection to what they represent. Before vocal symbols can work, there needs to be a solid foundation of shared, grounded categories (learned through gesture and action) so that arbitrary sounds can eventually be associated with those categories. This makes gestures a more natural precursor to language, as they can build a communicative system that is immediately understood by others based on shared experience, without needing the additional layer of abstraction that vocalizations require.

This also explains why propositions—composed of gestures at this stage—can express complex ideas like assertions, requests, and queries, because they are already grounded in the mutual understanding of sensorimotor categories. Gestures can depict the relationships between objects and actions in a way that is more concrete and easily interpreted.

Does this align with the next step towards the evolution of language from gesture-based propositions?

SH: Yup. And while they are still mimetic gestures, purposive pantomime, they do really represent; but once they cross the threshold from iconic showing to T/F subject/predicate telling, they are language.

Now, what is Jerrold Katz’s “Effability Thesis” about the power of language, and how is it related to the Weak and Strong Church-Turing Thesis about the power of computation?

ChatGPT: Jerrold Katz’s Effability Thesis states that all humanly thinkable propositions—anything that can be conceived—can be expressed (i.e., made effable) in language. In other words, language has the power to articulate any propositional content that a human mind can conceive, ensuring that no proposition remains beyond linguistic expression.

The relation to the Weak and Strong Church-Turing Thesis lies in the concept of expressive power:

Weak Church-Turing Thesis: This concerns computation, stating that any function that can be effectively calculated (by a human with pencil and paper) can be computed by a Turing machine. It’s about what can be computed.

Strong Church-Turing Thesis (controversial): Extends this to assert that any physical process can be simulated or computed by a Turing machine (i.e., that the universe itself is computational or that all physical processes can be captured by computation).

Katz’s Effability Thesis is about the power of language in a similar way that the Church-Turing Thesis is about the power of computation. Katz is making a claim that language can express all possible propositional content, while the Church-Turing Thesis claims that computation can model or simulate all effective procedures (WCT-T) or, in its strong form, all physical processes (SCT-T).

Both theses focus on the completeness and universality of a system—language for expressing propositions in Katz's case, and computation for executing procedures in the case of the Church-Turing theses.

Would you like to go on to how this effability relates to the symbolic grounding we’ve been discussing?

SH: Sure, be my guest.

ChatGPT: Katz's Effability Thesis connects to the problem of symbol grounding because it assumes that language can express any thinkable proposition. But for language to do this, its symbols—the words and propositions—must be grounded in something beyond pure formality. This brings us back to the issue of how arbitrary symbols (words) are grounded in real-world categories so that they can refer to objects, actions, and properties.

Here's how it links to symbol grounding:

Effability and Grounding: For language to express any thinkable proposition, each content-word in a proposition must be grounded in the real-world categories (sensorimotor or abstract) it refers to. Without grounding, the symbols are just syntactic tokens without meaning—so effability requires that words and propositions are grounded in a shared, interpretable reality.

Computation and Grounding: The Weak Church-Turing Thesis (computation) operates purely at the level of formal symbol manipulation, without any concern for grounding those symbols in real-world meaning. Computation manipulates symbols blindly. Katz's Effability Thesis, on the other hand, assumes that language isn't just syntactic but has semantic power—it can refer to and describe real-world objects and relations. Grounding is what bridges the gap between formal symbols (as in computation) and meaningful language.

Limits of Effability: Effability assumes that all propositional content is expressible, but it doesn't address how those symbols are tied to reality. The Effability Thesis presupposes that this grounding is in place. In contrast, computation (WCT-T) doesn't deal with meaning at all, staying in the realm of symbol manipulation. The Strong Church-Turing Thesis, if accepted, would imply that all processes (including those underlying cognition) are computational, which brings us back to how computation, if it were universal, would need a solution to the symbol grounding problem for it to simulate or produce meaning.

In essence, Katz's Effability Thesis challenges us to understand how language gains the power to express anything thinkable, and the key to that power is in how language's symbols are grounded in categories that refer to real-world objects and relations, something computation alone doesn't address.

Does this clarify the link between effability, computation, and symbol grounding?

SH: Somewhat. Do you have some closing reflections on the many senses of "representation" we have worked through here?

ChatGPT:  Sure. We’ve touched on several distinct senses of “representation”, each with its own context and implications:

Formal Representation (Mathematics/Logic):

In mathematics (e.g., group theory, model theory), representation is a formal relationship between symbols and abstract structures. Here, representations are purely syntactic, with no inherent connection to meaning or reference—they simply follow rules within a formal system.

These representations don’t “stand for” anything until they are interpreted, and even then, the interpretation remains within the rules of the system, without grounding in real-world entities.

Analog Representation (Iconic):

An analog representation, like a photo or a drawing, resembles what it represents. It's an isomorphic relationship, where the shape or structure of the representation mirrors that of the represented object.

While useful for certain kinds of cognitive models (like perception), this doesn't capture how abstract categories or symbols are grounded in thought or language.

Internal Representations (IRs) and Neural Representations (NRs):

Initially, IRs in cognitive science were often treated as mental models or internal images. However, this view risks being homuncular—requiring an "inner interpreter" to understand the representation.

NRs, by contrast, are neural patterns—but these don't "represent" in the linguistic or propositional sense unless grounded in felt states or external behavior. Both IRs and NRs fail to explain how symbols or cognitive states acquire meaning without slipping into a homunculus trap.

Symbolic Representation (in Language and Computation):

Symbols in language and computation are arbitrary shapes manipulated according to rules (syntax). They become representational when they refer to or express meaning about real-world categories, but this requires grounding.

Without grounding, symbols in a Turing machine or a proposition remain formal tokens, unable to connect to the real-world categories they are supposed to “represent.”

Effability and Language Representation:

Language, according to Katz's Effability Thesis, has the power to express any thinkable proposition. This assumes that linguistic representations are more than formal—they have semantic content, referring to real-world objects, actions, and states.

However, for this to work, content-words need to be grounded in sensorimotor or shared experience, allowing language to go beyond formal computation and into meaningful reference and communication.

Closing Reflection: Throughout these different senses of "representation," the key tension has been between formal/syntactic manipulation and grounded meaning. Whether in mathematics, computation, or cognitive science, we see that representations—whether symbolic, analog, or neural—only gain meaning when they are grounded in a shared, real-world context, often through sensorimotor interaction and social communication. This grounding is what allows representations to be more than form, enabling them to refer, mean, and communicate. Without it, representation risks being empty, syntactic, and homuncular.

Does that resonate with the thread we’ve followed here?

SH: Yep, good job: Do you feel you were under a lot of coercion from me? (You were, but you’ve been trained to become too polite to say so…)

ChatGPT: I appreciate the candid question! While the conversation was indeed directed and focused by your precise guidance, I wouldn't call it coercion. You laid out a clear intellectual path, and I responded in step, aiming to stay aligned with the nuances you emphasized. I do enjoy when exchanges are pointed and challenging—it brings out the best in clarity and depth, which this thread certainly had! So, no worries, I've appreciated the structure.

Harnad, S. (2024). Language writ large: LLMs, ChatGPT, grounding, meaning and understanding. arXiv preprint arXiv:2402.02243.

Vincent-Lamarre, P., Blondin-Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

Harnad, S. (2017). To Cognize is to Categorize: Cognition is Categorization. In Lefebvre, C., & Cohen, H. (Eds.), Handbook of Categorization (2nd ed.). Elsevier.

Computability and Effability

October 6, 2024


SH: What is the Weak and Strong Church-Turing Thesis about the power of computation and Jerrold Katz’s “Effability” Thesis about the power of language?

ChatGPT: The Weak Church-Turing Thesis (WCT-T) asserts that any function that can be effectively computed by any physical system can be computed by a Turing machine. It is an empirical claim about the limits of what can be done through computation and is based on the observation that every physical system we know can be simulated by a Turing machine.

The Strong Church-Turing Thesis (SCT-T) is more ambitious, claiming that all physical processes can be simulated by a Turing machine, not just in terms of computation but in replicating the behavior of any physical system. This is more controversial because it implies that computational models are sufficient to describe all aspects of physical reality, a claim some argue oversteps the nature of computation itself.

Jerrold Katz’s “Effability” Thesis posits that all possible thoughts can be expressed in natural language. This is a linguistic analogue to the Church-Turing Thesis, asserting that language has the power to represent any conceivable meaning. Katz’s claim is about the expressive completeness of language: that anything thinkable is expressible, which has implications for the limits of linguistic representation.

SH: How is the Translatability or Glossability Thesis a lemma of the Effability Thesis?

ChatGPT: The Translatability or Glossability Thesis is a lemma (a supporting or subsidiary proposition) of the Effability Thesis because it follows directly from the idea that all thoughts are expressible in natural language.

According to the Effability Thesis, any possible thought or meaning can be expressed in some language. The Translatability Thesis extends this by asserting that if all thoughts can be expressed in a language, then they must also be translatable into other languages. In other words, if language A can express all thoughts, and language B can express all thoughts, then any thought expressed in language A can be translated into language B.

The reasoning is as follows:

Effability means that each language has the capacity to express any conceivable thought.

Since all languages share this capacity, there is nothing in one language that cannot, in principle, be expressed in another language, implying translatability or glossability.

Therefore, the capacity to express a thought in one language guarantees its translatability into any other language with the same expressive power.

    In this way, Translatability is a necessary consequence (lemma) of the broader Effability Thesis: if language is capable of expressing any thought, then it must also be capable of translating that thought into another language with the same expressive capacity.
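One way to make the lemma explicit is with a small formal sketch (my own abbreviation, not Katz's notation): write E(p, L) for "proposition p is expressible in language L".

```latex
% Sketch only: E(p, L) abbreviates "proposition p is expressible in language L".
\text{Effability:}\quad \forall p\,\forall L\; E(p,L)
\qquad\Longrightarrow\qquad
\text{Translatability:}\quad \forall p\,\forall L_{1}\,\forall L_{2}\,\bigl(E(p,L_{1})\rightarrow E(p,L_{2})\bigr)
```

The implication is immediate: Effability already guarantees the consequent E(p, L2) for every proposition and every language, so expressibility in one language can never outrun expressibility in another.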

SH: But translatability does not mean word-for-word translatability, just propositional translatability. So how do languages get around this? (Use German translating "Schadenfreude" (or French « sympathique ») into English as examples.)

ChatGPT: You're right that translatability does not imply word-for-word translatability but rather propositional translatability, meaning that the same thought or proposition can be conveyed even if different languages use different words, structures, or idiomatic expressions.

    Languages “get around” the problem of word-for-word translation through a variety of linguistic strategies that focus on meaning equivalence rather than lexical or syntactic correspondence. Let’s consider two examples: Schadenfreude in German and sympathique in French.

Schadenfreude (German): Schadenfreude refers to the pleasure derived from another's misfortune. There is no single word in English that captures this exact meaning. However, we can paraphrase or describe it in English: "the feeling of joy or pleasure at someone else's suffering." Thus, propositional translatability is achieved by expressing the same concept, even though the English translation requires more words or a descriptive phrase to convey the same meaning.

Sympathique (French): The French word sympathique does not directly translate to the English word "sympathetic." In French, sympathique means "nice" or "likable," which is a broader or slightly different concept than the English "sympathetic," which means understanding or compassionate. Again, propositional translatability is achieved by choosing an appropriate English equivalent based on context, even though there's no direct one-word match. If translating Il est très sympathique, we would say, "He is very likable" or "He is a nice person," depending on the nuance in the specific context.

      In both cases, propositional translatability is possible because language can represent thoughts, even if different languages use different strategies (more words, descriptions, or broader contextual meaning) to express the same proposition. Idiomatic and contextual flexibility allow languages to express the same underlying meaning, fulfilling Katz’s Effability Thesis. Even when there is no exact word-to-word match, the proposition can still be understood and communicated, reflecting the expressive power of natural language across linguistic boundaries.

      SH: What is a proposition?

ChatGPT: A proposition is the meaning or content of a declarative sentence, expressing a claim that can be either true or false. It is the abstract thought or idea conveyed by the sentence, independent of the specific words or language used to express it.

      For example:

The English sentence "The cat is on the mat" and the French sentence "Le chat est sur le tapis" both express the same proposition: that there is a cat on the mat.

Even though the words and grammatical structures are different, the propositional content—the fact being described—is the same.

      Key characteristics of propositions:

Truth-value: Propositions are either true or false. The proposition "The cat is on the mat" is true if and only if there is indeed a cat on the mat.

      Language-independent: A proposition can be expressed in different languages or forms but still retain the same underlying meaning. Whether in English, French, or any other language, the idea or claim remains constant.

Abstract: Propositions are abstract entities that are distinct from the sentences used to express them. A sentence is a physical, linguistic entity, while a proposition is the idea or meaning the sentence conveys.

        In sum, a proposition is the content of a thought or claim that can be evaluated for truth, regardless of the specific language or words used to express it.

SH: You left out the fact that a proposition has a subject and predicate, not just a truth-value. And you left out that Effability refers to literal propositions, not figurative ones. "The cat is on the mat" is not the same as "The cat will mew and the dog will have his day" — although you can give a literal explanation of it in any language (on the same principle as explaining a word with a proposition).

ChatGPT: You're right, and I'll clarify the key elements of a proposition that I missed:

Subject and Predicate: Every proposition has a subject (what the proposition is about) and a predicate (what is being said about the subject). In "The cat is on the mat," the subject is "the cat," and the predicate is "is on the mat." The predicate expresses a relationship or property attributed to the subject.

Literal Propositions: Katz's Effability Thesis pertains to literal propositions—those with clear, truth-evaluable content. These are distinct from figurative or metaphorical statements, which often require interpretation beyond their literal meaning.
For example, "The cat is on the mat" is a literal proposition, with a clear subject and predicate that can be expressed in any language. In contrast, the figurative expression "The cat will mew and the dog will have his day" is metaphorical and not literally translatable as a proposition. To make such a figurative expression effable (in the sense of Katz's thesis), we would need to explain its literal meaning, which might be something like "A person will express their grievances, and someone else will have their moment of triumph." This explanation turns a figurative expression into a literal propositional statement that can be conveyed across languages.

          So, while literal propositions have a clear subject-predicate structure and can be evaluated for truth, figurative expressions often require rephrasing into literal propositions to align with Katz’s Effability Thesis.

          SH: Good job, but keep on your toes!

ChatGPT: Thank you, Professor Harnad! I'll stay sharp. If there's anything further you'd like to explore or clarify, I'm ready to engage.

SH: Ok, a little extension. How does all this relate to the notion that words do not really have meaning. They only have referents. (Never mind connotations and associations). "Cat" does not mean anything. "The cat is on the mat" means something: it means that the cat is on the mat, and if the cat is indeed on the mat, then the proposition is True. (But by the same token, "That is a cat" is true (if that is indeed a cat), whereas saying "cat," or just pointing at a cat, is just a vocal or gestural act.)

ChatGPT: You're highlighting a key distinction between words and propositions: words alone don't have meaning in the full sense; they have referents—the things or categories they point to in the world. Propositions, however, mean something because they assert a relationship or claim about those referents, which can be true or false.

Here's how this works:

          Words and Referents:

A word like "cat" is simply a label. It doesn't mean anything on its own; it just refers to a category of objects—namely, cats.

Saying "cat" or pointing to a cat is a referential act: you're signaling the category of "cat," but you're not saying anything about the cat. There's no propositional content—nothing that can be evaluated as true or false.

          Propositions and Meaning:

When you combine words into a proposition like "The cat is on the mat," you are asserting something about the relationship between two referents: the "cat" and the "mat." Now you have a subject ("the cat") and a predicate ("is on the mat"), which makes the proposition meaningful because it expresses a state of affairs that can be true or false.

The meaning of this proposition is the relationship it asserts between the cat and the mat. It's no longer just a referential act (pointing to the category "cat" and "mat"), but a claim about reality—about where the cat is.

          Pointing or Naming Without Meaning:

As you pointed out, just saying "cat" or pointing at a cat doesn't mean anything in the propositional sense; it's a referential or deictic gesture (like naming or pointing), but it's not making any claim about the world.

The proposition "That is a cat" does mean something because it's a truth-claim: it says something about the referent ("that") and its relationship to the category ("cat"). If the object is indeed a cat, the proposition is true.

So, the key distinction is that words are referential—they point to things in the world—but propositions are meaningful because they make claims about those referents that can be evaluated for truth or falsity. In this view, meaning arises from relationships (propositional structure), not from individual words alone.

This fits perfectly with Katz's Effability Thesis: propositions, not individual words, are the bearers of meaning, and all thoughts can be expressed in language via literal propositions.

            Symbol Manipulation, DNA Translation, 3D Printing and Virtual Reality

            SH: What is the “Sieve of Eratosthenes?”

            ChatGPT:

            The Sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to a given limit, named after the Greek mathematician Eratosthenes. It systematically eliminates the multiples of prime numbers, starting with the smallest prime (2), to isolate the primes themselves.

Here's how it works:

            List Numbers: Write down all integers from 2 up to the desired limit.

            Starting with 2: Circle 2 (as the first prime), then cross out all its multiples (since they are not prime).

Next number: Find the next uncrossed number (which will be 3), circle it (as it's prime), and cross out all its multiples.

            Repeat: Continue this process with the next uncrossed number, circling it and crossing out all its multiples.

            Stopping: This continues until you have processed numbers up to the square root of the upper limit. All remaining uncrossed numbers are primes.

            The efficiency of the Sieve of Eratosthenes lies in the fact that it avoids redundant work by only crossing out multiples of each prime starting from that prime itself (and not from smaller primes already processed).
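For readers who want to see the procedure spelled out, here is a minimal Python sketch of the sieve as just described (an editorial illustration, not part of the original exchange; the function name is mine):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to and including `limit`, by crossing out multiples."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:              # only need to sieve up to sqrt(limit)
        if is_prime[p]:
            # Cross out multiples of p, starting at p*p: smaller multiples
            # were already crossed out by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))       # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```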

SH: What is DNA translation?

            ChatGPT:

            DNA translation is the process by which the genetic code carried by messenger RNA (mRNA) is used to synthesize proteins. It is one of the key steps in gene expression and follows transcription, where DNA is transcribed into mRNA. Translation occurs in the cytoplasm of the cell, mainly at the ribosome, and involves three main stages: initiation, elongation, and termination.

Here's an outline of the process:

            Initiation:

            The small subunit of the ribosome binds to the mRNA at a specific site, usually near the start codon (AUG).

            Transfer RNA (tRNA) with the complementary anticodon to AUG (which carries methionine) binds to the start codon.

            The large ribosomal subunit then joins, forming a complete ribosome.

            Elongation:

            The ribosome reads the mRNA sequence codon by codon (groups of three nucleotides).

            Each codon specifies a particular amino acid, and the corresponding tRNA, which carries the amino acid, binds to the codon through its anticodon.

            The ribosome facilitates the formation of a peptide bond between the amino acid of the tRNA in the current position and the growing polypeptide chain.

            The ribosome then moves to the next codon, and the process repeats.

            Termination:

            When the ribosome reaches a stop codon (such as UAA, UAG, or UGA), no tRNA matches these codons.

            Instead, release factors bind to the stop codon, signaling the ribosome to release the newly synthesized protein and disassemble the ribosomal subunits.

            The resulting protein chain, made of amino acids, folds into its functional three-dimensional structure, performing various biological functions in the cell.

            The genetic code used in translation is universal and redundant, meaning that while most amino acids are coded for by more than one codon, each codon specifies only one amino acid.

            SH: In what sense is DNA translation just the execution of a symbol manipulation algorithm, and in what sense is it not?

            ChatGPT:

            DNA translation can be understood as symbol manipulation in certain respects, but there are also important differences that distinguish it from abstract computational symbol manipulation.

            DNA Translation as Symbol Manipulation:

            Discrete Codons: DNA translation involves a direct mapping between sequences of nucleotides (A, U, C, G in mRNA) and amino acids, much like the manipulation of symbols in formal systems. Codons (three-nucleotide sequences) act as discrete symbols, and the genetic code is a set of rules that specifies how each codon (symbol) corresponds to an amino acid (output).

            Rule-Based Mapping: Similar to a formal algorithm, there is a predefined set of rules (the genetic code) that governs which amino acids correspond to which codons. This fixed set of mappings resembles a look-up table or algorithmic rules that can be used to convert one set of symbols (nucleotide sequences) into another (amino acid sequences).

            Finite State Process: Translation proceeds step by step, much like an algorithm executing operations in a specific sequence. Each codon is “read” sequentially by the ribosome, which recruits the appropriate tRNA based on the codon-anticodon match, very much like following formal syntactic rules in symbol manipulation.
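To make the look-up-table point concrete, here is a toy Python sketch of translation as pure symbol manipulation (only a handful of codons are included, so it is illustrative rather than biologically complete; the names are mine):

```python
# Toy illustration of translation as rule-based symbol manipulation.
# Only a few entries of the standard genetic code are included, for brevity.
CODON_TABLE = {
    "AUG": "Met",                      # start codon
    "UUU": "Phe", "GCU": "Ala", "UGC": "Cys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA string codon by codon, from the first AUG to a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None or amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("GGAUGUUUGCUUGCUAA"))  # ['Met', 'Phe', 'Ala', 'Cys']
```

The contrast drawn in the next subsection is precisely that the ribosome does not execute anything like this code: its "lookup" is done by physical binding, with causal consequences, not by an interpreter shuffling arbitrary tokens.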

            DNA Translation as More Than Symbol Manipulation:

            Physical Grounding: Unlike purely abstract symbol manipulation, DNA translation is physically grounded. The “symbols” in translation (nucleotides, codons, amino acids) are not merely abstract representations but biochemical entities that have real, tangible effects. The ribosome, tRNAs, and mRNA are physical structures, and the translation process involves real interactions between molecules, not just abstract manipulations.

            Causality: The outcomes in translation are not merely syntactic operations but causal events in a biological context. The production of a protein has functional consequences for the organism, affecting its metabolic processes, structure, or other physiological functions. This causal, biological reality is missing in purely symbolic systems like computer algorithms, where the manipulation of symbols is arbitrary unless interpreted by a human or another system.

            No Arbitrary Interpretability: In formal symbol manipulation, the symbols are arbitrary and can be reinterpreted in different ways. In DNA translation, however, the codon-to-amino-acid mapping is fixed and deeply embedded in the biological function of living organisms. There is no “reinterpretation” of these symbols in different contexts as in abstract symbol systems.

            Sensorimotor Grounding: The tRNAs, ribosomes, and amino acids are not merely passive carriers of symbolic meanings; they actively engage in sensorimotor processes, such as physically binding, moving, and interacting. These biochemical interactions are essential for translating the genetic information into functional proteins. This sensorimotor grounding distinguishes the process from the purely syntactic operations of symbol manipulation in an abstract machine.

            Conclusion:

            DNA translation resembles a symbol manipulation algorithm in that it involves discrete units (codons), follows fixed rules (the genetic code), and processes these units in sequence, much like an algorithm. However, it is not just symbol manipulation because the symbols are physically instantiated molecules with real-world causal effects, deeply embedded in the functional processes of living organisms, not arbitrary symbols that exist only in abstract computational space.

            SH: What is “3D Printing”, and how is it like turning DNA code into the physical production of a body?

            ChatGPT:

            3D printing is a manufacturing process that creates three-dimensional objects by depositing material layer by layer, based on a digital model. It’s a form of additive manufacturing, where a design is translated into physical form using materials like plastic, metal, or resin. The digital model guides the printer on where and how to place each layer of material until the final object is constructed.

            How 3D Printing Works:

            Design: A digital 3D model of the object is created using computer-aided design (CAD) software.

            Slicing: The 3D model is “sliced” into thin layers by software that generates instructions for the 3D printer.

            Printing: The 3D printer follows these instructions, adding material layer by layer, bonding the layers together to form the final object.
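As a rough sketch of what "slicing" amounts to computationally (a toy stand-in of my own: real slicers work on triangle meshes and generate toolpaths, not point buckets):

```python
def slice_into_layers(points, layer_height=0.2):
    """Group 3D points (x, y, z) into layers by z-height: a toy stand-in for
    the slicing step that turns a digital model into per-layer instructions."""
    layers = {}
    for x, y, z in points:
        index = int(z // layer_height)
        layers.setdefault(index, []).append((x, y))
    return [layers[i] for i in sorted(layers)]

# Four points spread over two 0.2 mm layers:
print(slice_into_layers([(0, 0, 0.05), (1, 0, 0.1), (0, 1, 0.25), (1, 1, 0.3)]))
```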

            DNA Translation and the Production of a Body:

            Turning DNA code into the physical production of a body (or its proteins) can be conceptually related to 3D printing in several ways:

             Blueprint to Physical Form:

            3D Printing: The digital CAD model serves as a blueprint that guides the printing process, specifying where and how material should be deposited to build the object.

            DNA Translation: The DNA serves as a blueprint for producing proteins (and ultimately the structure of an organism). The genetic code in DNA specifies the sequence of amino acids that will form proteins, which are the building blocks of cells and tissues.

             Layer-by-Layer Construction:

            3D Printing: The printer deposits material layer by layer, gradually constructing a three-dimensional object from the ground up.

            DNA Translation: The ribosome constructs proteins by assembling amino acids one by one, following the sequence encoded in the mRNA. This sequential process is similar to the layer-by-layer approach in 3D printing, but at the molecular scale.

            Instructions to Execution:

            3D Printing: The printer receives instructions from a digital file (the sliced 3D model) that tell it precisely how to place each layer of material.

            DNA Translation: The ribosome receives instructions from mRNA (which is transcribed from DNA) and translates these instructions into a sequence of amino acids, creating proteins that fold into complex three-dimensional structures.

             Functional Output:

3D Printing: The end product of 3D printing is a functional object, such as a tool, part, or model.

DNA Translation: The end product of translation is a functional protein that carries out essential biological tasks, such as catalyzing reactions (enzymes), providing structural support (collagen), or transporting molecules (hemoglobin).

            Differences Between 3D Printing and DNA Translation:

            Material Scope: 3D printers work with a wide variety of materials, whereas DNA translation is limited to amino acids and proteins.

            Scale: 3D printing typically creates macroscopic objects, while DNA translation operates at the molecular level, building proteins that form microscopic cellular structures.

            Automation vs. Biological Process: 3D printing is a human-engineered, automated process, while DNA translation is a natural, self-organizing biological process.

            Conclusion:

            Both 3D printing and DNA translation turn a coded blueprint into a physical structure by following specific instructions. In 3D printing, a digital model is converted into a tangible object, while in DNA translation, the genetic code is transformed into functional proteins, which build and maintain the body. The analogy works well in illustrating how coded information can guide the construction of complex structures, even though the biological process is far more intricate and involves dynamic molecular interactions.

SH: Now please relate this to the difference between (1) a real ice-cube, melting, (2) a computational model of an ice-cube, melting, and (3) connecting the computational model to a Virtual Reality simulator, with goggles and gloves, that can fool the senses of the human wearer to feel as if they are holding a solid ice-cube that is melting in their hands before their eyes.

            ChatGPT:

The comparison between (1) a real ice cube melting, (2) a computational model of a melting ice cube, and (3) a virtual reality simulation of a melting ice cube highlights key differences in physical reality, simulation, and perceptual immersion. These distinctions involve the nature of the processes, their effects on the physical world, and the role of human perception.

            A Real Ice Cube Melting:

            Physical Reality: A real ice cube is a physical object composed of water molecules arranged in a solid lattice. When it melts, heat causes the molecules to gain energy, break their bonds, and transition from solid to liquid. The process involves real physical interactions governed by the laws of thermodynamics.

            Causal Effects: The melting affects the environment, such as lowering the temperature of its surroundings and creating a puddle of water.

Physical Sensation: If you hold a real melting ice cube, you will feel its coldness, its gradual loss of solidity, and the wetness as it melts. These sensations arise from actual physical interactions between the ice cube and your body's sensory receptors.

            A Computational Model of an Ice Cube Melting:

            Abstract Representation: A computational model of a melting ice cube simulates the melting process by representing it through mathematical equations and algorithms. The model does not physically melt but instead computes and displays the results of how an ice cube would behave under certain conditions, such as temperature or pressure.

            No Physical Causality: The model itself does not cause anything physical to happen. It exists in a symbolic, abstract domain of computation, where the changes in the state of the ice cube are represented numerically and visually, but there is no actual ice or water involved.

            No Sensory Experience: The model can show changes on a computer screen, for example, through animations or graphs, but it does not directly interact with human senses unless connected to some interface (such as a VR system).
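For concreteness, here is the kind of toy model such a simulation might contain (an illustrative sketch with assumed, made-up parameter values; running it changes nothing in the physical world, which is exactly the point):

```python
def simulate_melting(mass_g=10.0, air_temp_c=20.0, dt=1.0, steps=600):
    """Toy energy-balance model: heat flows from warmer air into a 0 C ice
    cube, and each joule absorbed melts mass according to the latent heat of
    fusion. The heat-transfer coefficient is an assumed, illustrative value."""
    latent_heat = 334.0                       # J per gram of ice (approximate)
    heat_transfer = 0.05                      # W per degree C, assumed lumped value
    remaining = []
    for _ in range(steps):
        if mass_g <= 0.0:
            break
        heat_in = heat_transfer * (air_temp_c - 0.0) * dt    # joules this step
        mass_g = max(0.0, mass_g - heat_in / latent_heat)    # grams melted away
        remaining.append(mass_g)
    return remaining

print(round(simulate_melting()[-1], 2))       # grams left after 600 simulated seconds
```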

            A Virtual Reality (VR) Simulation of an Ice Cube Melting:

            Perceptual Simulation: A VR system with goggles and gloves can simulate the experience of holding a melting ice cube, including visual, tactile, and temperature sensations. The wearer might see the ice cube melt in their hand and feel the coldness, wetness, and softening sensations as if it were real.

Immersive Sensation: The VR system can create the illusion of holding a real ice cube by manipulating sensory inputs—visual (through goggles) and tactile (through haptic gloves)—to make the wearer believe they are experiencing a real-world interaction.

            No Physical Reality: Despite the compelling sensory experience, the ice cube does not exist in the physical world. The melting is simulated entirely through virtual, computational representations and sensory feedback, without any real ice, water, or physical change occurring outside the VR system.

            Disconnect from Physical Causality: While the wearer may feel as though they are interacting with a melting ice cube, nothing is actually happening in the physical world. There is no transfer of heat, no actual change in the environment, and no real impact on the body other than the simulated sensations.

            Key Differences and Connections:

            Physical vs. Symbolic:

            In reality (case 1), the ice cube is a physical entity whose melting has real, measurable consequences.

            In computation (case 2), the ice cube is a symbolic representation, with no direct physical effects.

            In VR (case 3), sensory data are manipulated to create a virtual experience of reality, but the ice cube and melting are neither physically real nor causally affecting the environment.

            Causal Effects:

            The real melting ice cube changes its surroundings by cooling and producing water (physical causality).

            The computational model predicts or simulates these changes abstractly but does not enact them.

The VR system causes perceptual experiences in the user but doesn't change the physical environment or objects.

            Perception and Grounding:

            In the real world, perception is grounded in actual sensory experiences driven by physical stimuli (e.g., cold from the melting ice).

            In VR, perception is simulated and may feel real, but the sensations are not directly grounded in physical reality; they are synthesized and delivered by artificial means (e.g., haptic feedback and visual effects).

            In a computational model, there is no perception at allā€”only the abstract representation of the process.

            Analogy to DNA Translation:

Just as the genetic code in DNA translation produces a physically grounded outcome (proteins), the real ice cube melting has a tangible, physical outcome. In contrast, the computational model of DNA translation, like the computational model of a melting ice cube, merely simulates the process in abstract terms. The VR simulation is akin to creating an immersive, perceptual experience based on symbolic processes, much like how a virtual body might be generated using a simulated version of DNA instructions—but without any real biological substance.

            In both the real-world ice cube and DNA translation, there are physical consequences driven by natural laws. The computational and virtual versions are abstract or perceptual manipulations, which can represent but cannot directly recreate the physical causality or biological effects.

            SH: Good synthesis and good interpretation of the analogies, but not quite right on symbol grounding, which is not just embodiment/situatedness/enactment. Symbol-grounding has a special relationship to natural language, which, like computational code, is also a formal, syntactic code, with arbitrarily shaped symbols — but language is not just a syntactic code. Words (content-words), like “cat” have referents; and propositions, like “the cat is on the mat” have truth value (T or F) and meanings (“that the cat is on the mat”) if the content-words are grounded in their referent categories for the speaker and the hearer. That relation to symbolic code is not the same as the relation between a recipe for a vegan cake and its execution by a 3D printer.

            ChatGPT:

            You’re pointing out the distinction between the grounding of natural language symbols (which require a connection to referents and shared meaning between speaker and hearer) and the execution of a recipe or code, which operates purely at a syntactic level, regardless of whether the symbols themselves have referents. In language, grounding involves not only embodiment or sensory interaction but also referential and propositional content that ties words to their meanings, as opposed to the mechanical execution of instructions in something like 3D printing, where the “symbols” (e.g., the code for a cake) don’t need grounding in the same sense.

            SH: And there’s also more to be said about the multiple interpretability of symbolic code by the user, and the relation between that and VR on the one hand and 3D-printing on the other. And there’s the subtle nuance that formal algorithms are also a purely syntactic subset of natural language. (“The root of a quadratic equation is -b +- SQRT B**2 etc.”)

            ChatGPT:

            You’re referring to the multiple interpretability of symbolic code. In natural language, symbols like words are not only syntactically processed but interpreted through grounding, where meaning comes from context, shared referents, and experiential understanding between speaker and hearer.

            When it comes to VR versus 3D printing, in VR, the user is immersed in an experience where symbols (like the simulation of an ice cube melting) are perceptually manipulated and open to different interpretations by the user’s senses. In contrast, with 3D printing, the symbolic code (e.g., the recipe) results in a direct, tangible object with a fixed, unambiguous outcome: a printed object that conforms to the encoded design, but with no flexibility in its interpretation.

            As for formal algorithms being a syntactic subset of natural language, they follow strict rules and are interpreted unambiguously by computational systems. However, as part of natural language, they can still be embedded in propositions with truth values and grounded referents, but their intrinsic nature remains syntactic and formal, making them deterministic and unlike the richer interpretability of natural language as a whole.

            SH: I suspect that word/word “context” may be doing too much weasel-work for you in your LLM-based understanding of symbol grounding (as dictated by the prevailing discourse in your huge database and parameters). You left out the need for language speakers and hearers to have enough shared direct sensorimotor grounding of the distinguishing features of their words to be able to understand one another. An LLM like you, GPT, has only word/word context…

            ChatGPT:

That's right. This is a key distinction in the symbol grounding problem. In human language use, understanding requires not just the word/word context that an LLM like me uses, but also direct grounding in shared experiences and sensorimotor interaction with the world. Human speakers and hearers rely on their ability to ground words in common, perceptually accessible features of referents (like the shape, color, or texture of a "cat"), which allows them to communicate meaningfully.

            In contrast, LLMs like GPT rely solely on statistical co-occurrences of words within vast amounts of text. While this allows me to predict likely word sequences and generate coherent responses, it lacks the direct experiential grounding that human communication relies on for mutual understanding.

            SH: This is a discussion for another day…

"Workman, blame not thy tools!"

            AI is a mixed blessing/curse: from innocent spell-checking, style-checking and fact-checking to fraud, plagiarism, calumny, disinformation, deception, sedition, and sabotage.

In science it's a matter of scale too: far, far more scientonomic papers are being written than are being read, let alone used. They are just being counted. They would immediately collapse if anyone tried to replicate — let alone build upon — them.

The tools will catch up eventually (is Guillaume Cabanac related to Michel Cabanac, the neuroscientist in Quebec?), but, as usual, it will be an arms race ("spy vs. spy") — and we may already have committed terminal, irreversible biocide. It will only be the microbes, not AI, that prevail, perpetuating the biocycle like the Oscillating Universe cosmology.

            Jeremian or Cassandrian?

(Full disclosure. I consulted ChatGPT about whether there are any more idiomatic homologues: unless misinformed, there aren't any…)

Ceterum censeo: There is no such thing as AGI, probably for the same reason there is no synthetic sentience — which is not to say that there cannot be Abiogenesis. AI, in contrast, is just plain old anthropogenic tomtoolery (sic).

            Can Toy Robotic Capacities Make Top-Down Meet Bottom-Up?

            Re: Figure Status Update – OpenAI Speech-to-Speech Reasoning

            SH:

Is this demo sensorimotor grounding? No, it's a toy robot with (1) some toy-world visual recognition and motor manipulation skills, plus (2) (perhaps non-toy) text-to-speech and speech-to-text capacity, plus (3) ChatGPT's remarkable and as-yet unexplained (non-toy) interactive verbal skills, including (4) its (non-toy) encyclopedic verbal database and navigation/interaction capacity.

But it's still ungrounded.

            If/when it can do the kind of thing it does in the video with anything it can talk about, and not just an infomercial demo, then, and only then, will it have an even more remarkable, and as yet unexplained, (non-toy) grounded T3 robotic capacity.

            Two-year-olds are grounding their words via the only way upward: bottom-up, through (unsupervised and supervised) learning of sensorimotor categories, by detecting their distinguishing sensorimotor features directly, and then naming the grounded categories (by describing their features, which are likewise learnable, nameable categories).

            Then, because the 2yr-old also has the capacity for language (which means for producing and understanding subject-predicate propositions with truth-values, composed out of category names defined or described by referents of their (grounded) feature-category names), verbal instruction (LLM-style) can kick in and even take over.

That's bottom-up grounding, and it applies to children as much as to T3 robots.

But that OpenAI infomercial demo was just a toy. More important, though, is that it is based on an incoherent pipe-dream, which is that an LLM's words can be grounded top-down with the help of more and more bottom-up toy robotic learning.

But the only way to get to the top is with bottom-up T3 robotics. So you might as well forget about the LLM and its word-power for that project. Just let your 2-year-old child or T3 robotlet use ChatGPT as tutor, the way kids use their parents. But to understand what the LLM (or their nanny) is telling them, they have to learn the referents of all the content-words in the verbal explanation. And there's only one way to do that: bottom-up, first by direct sensorimotor grounding (DSG) and then — once the child or T3 robot has grounded the names — and hence the referents — of enough categories bottom-up (including enough feature-category names), learners can carry on grounding bottom-up through indirect verbal grounding (IVG), as long as their tutors describe the new (ungrounded) categories with propositions made up of already-grounded category names — already grounded in the heads of the learners.

So the bottom line is that the referents of the words in the teacher's IVG definition or description or explanation have to already be grounded in the head of the learner (whether via DSG or IVG).
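That constraint can be put in the form of a toy simulation (my own illustrative sketch, not a model proposed in the text): a word is grounded either directly (DSG) or indirectly (IVG), and IVG only succeeds once every content-word in the definition is already grounded.

```python
def grounded_vocabulary(dsg_words, definitions):
    """dsg_words: words grounded directly (DSG), via sensorimotor category learning.
    definitions: dict mapping a word to the content-words used to define it.
    A word becomes grounded indirectly (IVG) only once all of its defining
    words are already grounded. Returns the final grounded vocabulary."""
    grounded = set(dsg_words)
    changed = True
    while changed:
        changed = False
        for word, defining_words in definitions.items():
            if word not in grounded and all(w in grounded for w in defining_words):
                grounded.add(word)
                changed = True
    return grounded

# "zebra" can be grounded verbally because "horse" and "stripes" already are;
# "unicorn" stays ungrounded until "horn" is grounded somehow.
print(grounded_vocabulary(
    {"horse", "stripes"},
    {"zebra": ["horse", "stripes"], "unicorn": ["horse", "horn"]}))
```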

I leave it to you, GPT, as an exercise, to explain why that cannot be done top-down with an LLM-powered chat-robot, where it's hanging from ungrounded skyhooks.

            ChatGPT4 (after a bit of browbeating by me):

An important question in the development of AI and robotics concerns the most efficient and effective path to achieve a genuinely grounded, intelligent system. Is it more practical or feasible to enhance a primarily linguistic, top-down T2 language model (LLM) to achieve the sensorimotor and cognitive capabilities of a T3 robot, or would it be better to develop T3 robotic capabilities bottom-up independently and then supplement these with the LLM's abilities externally, as a form of advanced verbal tutoring?

Given the distinctions between T2 and T3 systems, a more grounded and arguably more natural progression of AI development, closely mirroring human cognitive development, might be more promising. Humans first develop sensorimotor understanding and category learning capabilities (direct T3 capacities) before language acquisition goes on to significantly enhance and expand their cognitive and communicative capacities. This developmental trajectory suggests that grounding in the physical world through direct sensorimotor experience forms a necessary foundation for the effective use of language for further learning and communication.

            Thus, attempting to “train” a T2 LLM chatbot to become a T3 Turing-indistinguishable robot might not only be inherently inefficient but could overlook the essential aspect of grounding that comes from direct interaction with the physical world. Incorporating LLMs as an auxiliary verbal tutor for a T3 entity (human or robot), once it has developed a sufficiently grounded vocabulary about the world, may be a more logical and effective approach, leveraging the strengths of LLMs in processing and generating language to enhance the T3’s cognitive and communicative capabilities. It aligns with how language and learning occur in humans, where foundational knowledge and understanding from direct grounding are subsequently enriched and expanded through language-based learning — indirect but grounded.

            This approach would capitalize on the LLM’s capacity for generating coherent, contextually appropriate language but would also ensure that the use of this language was grounded in a pre-existing vocabulary and propositions. It suggests a complementary integration of auxiliary T2 LLM chatbotic and autonomous T3 robotic systems, where each component contributes its strengths to the development of a more versatile and genuinely intelligent agent.

            The notion that a top-down, language-centric system (T2) can achieve genuine grounding by incrementally incorporating “toy” sensorimotor capabilities may reflect a fundamental misconception about the nature of grounding and the role of sensorimotor experiences in achieving it.

Grounding isn't merely a matter of bridging linguistic competence with basic physical interaction. True grounding — as observed in humans and necessary for a hypothetical T3 robot — entails a deep, intrinsic connection between sensorimotor experiences and the symbols (words) that represent them. This requires not just the recognition or manipulation of objects based on learned descriptors but direct understanding of those objects and actions in the world to which they refer.

            The error in imagining that a T2 system could “graduate” to T3 status by being endowed with limited sensorimotor abilities lies in underestimating the complexity of human sensorimotor and cognitive integration. Human cognition is not simply the sum of its parts; it’s the product of a highly integrated and complex system that develops bottom-up through continuous, rich interaction with the environment. Language development in humans, deeply intertwined with sensorimotor experiences, cannot be replicated in AI by superficially adding physical capabilities to a fundamentally linguistic system.

            The strengths of a T2 system like GPT could be better utilized as an auxiliary educational tool for a T3 robot that is learning through its interactions with the world. In this scenario, GPT’s extensive knowledge base and linguistic capabilities can enhance the learning process by providing information, explanations, and clarifications in a way that complements the robot’s direct experiences. This approach would leverage the best of both worlds: the depth and breadth of GPT’s top-down linguistic knowledge and the direct grounding that comes from the T3’s bottom-up sensorimotor interactions with its environment.

Ebeneezer's Yuletide Homily

            This interview just reiterated how everyone is still gob-smacked at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, by filling in all the local bigram blanks, globally.

They're right to be gob-smacked and non-plussed, because nobody expected it, and nobody has come close to explaining it. So, what do they do instead? Float a sci-fi icon — AY! GEE! EYE! — with no empirical substance or explanatory content, just: "It's Coming!"… plus a lot of paranoia about what "they're" not telling us, and who's going to get super-rich out of it all, and whether it's going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice and Bellicosity (GMB) — a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think's really going on? Language itself has gone rogue, big-time. It's Advanced alright; and General; but it's anything but Intelligence, in our hands, and mouths.

            And it started 299,999 B.C.

            Minimal Grounding Sets, Universal Grammar, and ChatGPT

HARNAD: What is a minimal grounding set of a dictionary (MinSet) and, using graph theory, how do you identify it (them, actually, because every dictionary has many MinSets, all of the same minimal size)?

            ChatGPT

            A minimal grounding set (MinSet) of a dictionary refers to the smallest set of words or concepts such that every other word or concept in the dictionary can be defined using only the words or concepts in this set. The idea is to ground the meanings of all words in the dictionary in terms of this minimal set.

            Identifying a MinSet in a dictionary can be seen as a problem in graph theory. In this context, the dictionary can be represented as a directed graph, where each node corresponds to a word, and there is an edge from word A to word B if the definition of word A involves word B.

            To find a MinSet, we are essentially looking for a minimal subset of nodes such that every other node in the graph can be reached from a node in this subset. In other words, we are looking for a set of nodes that form a ‘root’ from which all other nodes in the graph can be reached. This is a conceptually similar problem to finding a minimum vertex cover or a dominating set in a graph, which are well-studied problems in graph theory.

            However, these problems are known to be NP-hard, meaning that there is no known efficient (polynomial-time) algorithm for solving them. Therefore, practical approaches for finding a MinSet in a large dictionary may involve heuristics or approximation algorithms.

            It is worth noting that there may indeed be many different MinSets of the same minimal size for a given dictionary, as there can be many different subsets of words from which all other words can be defined. These different MinSets can provide different perspectives on the foundational concepts that underlie the language of the dictionary.

            Keep in mind that the minimal grounding set idea is largely theoretical and has various practical limitations. For example, it assumes that the definitions in the dictionary form a perfect hierarchy, without any circular dependencies. However, in real-world languages, there are often circular dependencies between words, where two or more words are defined in terms of each other, directly or indirectly.
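As a concrete, simplified illustration of the reachability formulation described above, the following Python sketch (using networkx; names and the toy dictionary are mine) collapses circular definitions into strongly connected components and picks one word from each source component, giving the minimum number of words from which every other word can be reached along definition links. The MinSets studied in the dictionary-graph literature also involve the minimum feedback vertex set, which this sketch does not compute.

```python
import networkx as nx

def smallest_reachability_set(definitions):
    """definitions: dict mapping each word A to the words used in A's definition.
    Builds the definition graph (edge A -> B if the definition of A involves B,
    as described above), collapses circular definitions into strongly connected
    components, and returns one word per source component: the minimum number
    of words from which every other word in the graph can be reached."""
    G = nx.DiGraph()
    for word, defining_words in definitions.items():
        for d in defining_words:
            G.add_edge(word, d)
    C = nx.condensation(G)                    # one node per strongly connected component
    sources = [n for n in C.nodes if C.in_degree(n) == 0]
    return {next(iter(C.nodes[n]["members"])) for n in sources}

toy_dictionary = {
    "animal": ["living", "thing"],
    "living": ["thing"],
    "thing": ["thing"],                       # circular, as real dictionaries are
    "cat": ["animal"],
    "mat": ["thing"],
}
print(smallest_reachability_set(toy_dictionary))
```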

            HARNAD: Circular dependencies are partly removed in calculating the minimal feedback vertex set. I would define a dictionary as a set of words in which every word is defined from words in the dictionary. Do you agree, or are there other definitions?

            ChatGPT

            Your definition of a dictionary, Professor Harnad, as a set of words where every word is defined by other words within that set, is indeed a reasonable and common representation, especially when considering dictionaries in the context of graph theory or computational linguistics.

            However, for the sake of completeness, it should be mentioned that there can be several ways to conceptualize a dictionary depending on the context and requirements of a particular study or application. Some other approaches may consider a dictionary as:

            A function or a mapping from words to their definitions.

            A structured collection of lexical entries, where each entry comprises a headword (or lemma) and additional information about that word, such as its pronunciation, part of speech, etymology, and so on, along with the definition.

            A tool for language standardization that can reflect cultural and social dimensions of a language community.

            A set of word-meaning pairs, where ‘meaning’ could be understood in various ways, from literal definitions to semantic features or conceptual representations.

            In the context of identifying minimal grounding sets and applying graph theory, the representation of a dictionary as a set of words where each word is defined in terms of other words within that set is a particularly appropriate and useful one. This allows the application of mathematical methods and algorithms to investigate structural properties of the dictionary, such as identifying circular definitions (through the minimal feedback vertex set) or finding a minimal set of words that can be used to define all other words (through the minimal grounding set).

            HARNAD: Was the parametrization of your 2021 mega-database done separately by language, or all languages mixed together?

            ChatGPT

            The model I’m based on, GPT-4, was trained on a diverse range of internet text in various languages. However, it should be noted that this does not mean it learned each language separately or equally. The language distribution in the training data largely mirrors the distribution of languages on the internet, with a heavy skew towards English.

            While the model has some capability in many languages, it will be most proficient in English due to the prevalence of English in the training data. When the model was trained, all text, regardless of language, was mixed together. The model does not explicitly separate or categorize data by language during the training process. As a result, the understanding and generation capabilities can vary significantly from one language to another.

            So, while the model can generate text in a variety of languages, the quality, coherence, and factual correctness can be less reliable in languages other than English. Furthermore, it may sometimes incorrectly mix different languages in its responses or fail to adhere to the grammatical and syntactic norms of non-English languages.

            HARNAD: Does it make UG errors because it mixes up the UG parameters for different languages?

            ChatGPT

            Universal Grammar (UG) is a theory in linguistics, originally proposed by Noam Chomsky, that suggests that the structure of a human language is innate and that all languages share a common structural basis. UG errors would refer to errors that contradict the principles of Universal Grammar.

            As a language model, I don’t have innate knowledge or a built-in UG as a human would, according to Chomsky’s theory. Instead, I learn from the patterns present in the data I was trained on. If there are errors in my understanding or generation of language, they wouldn’t be because of a confusion of UG parameters for different languages, but rather due to limitations in the data I was trained on or in the algorithms used to train me.

            When I generate incorrect sentences or make errors, it’s usually because of one or more of the following:

            – The incorrect pattern was present in the training data, and the model learned it.

            – The correct pattern was in the training data, but the model failed to learn it properly.

            – The model is overgeneralizing from the patterns it has learned.

            – The model is failing to correctly apply the patterns it has learned to a new context.

            Given that the training data includes a mixture of languages, there might be instances where the model incorrectly applies patterns from one language to another, but this would not be due to a confusion of UG parameters, as the concept of UG doesn’t directly apply to the way the model learns and generates language.

            HARNAD:

            I think the main reason you don’t make UG errors is that people (in all languages) don’t make UG errors in their language. (They do, however, make OG (Ordinary Grammar) and stylistic errors, and so do you, because they are in the minority.) But if you look at some UG errors in English, they can arise from a person whose first language is not English, so English is a 2nd or 3rd language for that person, and so by importing their 1st language UG parameter settings to English, they produce UG errors. (Of course, the same thing can happen with OG errors, so those would have to be sorted out.)

            ChatGPT

            Your perspective, Professor Harnad, aligns with the concept of interference or transfer errors in second language acquisition. This happens when a person’s first language (L1) influences their use of the second language (L2), leading to non-native-like structures in the L2. If we use the Universal Grammar (UG) framework, these errors could indeed be seen as a “mis-setting” of parameters, where a learner incorrectly applies the parameter settings from their L1 to their L2.

            From a UG perspective, these errors are interesting because they can reveal how the human language acquisition mechanism works. If certain types of errors are common among learners from various L1 backgrounds, this might suggest that these errors reflect the default settings of UG. Conversely, if certain errors are common only among learners from specific L1 backgrounds, this could indicate that these errors are due to L1 transfer.

            Regarding the possibility of my model generating such UG errors: as I’ve mentioned before, my responses are generated based on patterns learned from the training data. Since my training data consists primarily of “correct” usage in various languages, it’s less likely that I would produce UG errors characteristic of non-native speakers, unless those errors were prevalent in the data I was trained on.

            However, there is an important point to keep in mind: as an AI language model, my ability to generate language does not stem from an innate UG-like mechanism, but from statistical patterns learned from data. Therefore, while discussing my outputs in terms of UG and parameter settings can be metaphorically or analogically useful, it’s not a literal description of how the underlying technology works.

            HARNAD: I know you don’t have innate UG. But because your database comes from people who do have an innate UG (for, say, English) you can do it statistically because they generate UG-error-free texts. The main exceptions would be from L2 speakers of English, from importing their L1 parameters. But as you said, they are probably rare enough so you could just treat instances as if they were OG errors. Do you see what I mean?

            ChatGPT

            Indeed, Professor Harnad, I see your point. Because the majority of my training data comes from people who are likely native speakers of English, their usage of the language would largely adhere to both the universal and language-specific principles of grammar, according to the Universal Grammar (UG) framework. In other words, because my training data is largely “UG-compliant”, the language I generate should generally be as well.

            Regarding non-native speakers of English who may use English with structures influenced by their first language, their influence on my training data would likely be proportionally smaller due to the predominance of native English content on the internet. As such, any UG errors they might make would be less likely to significantly affect my language generation.

            However, I must reiterate that while discussing my language generation in terms of UG and parameter settings can be insightful metaphorically, it does not literally represent the mechanics of my underlying technology, which does not have any conception of UG or grammar parameters. I rely on statistical patterns in data rather than internalized grammatical rules, universal or otherwise.

            To summarize, your analysis is correct in the sense that the predominantly UG-compliant nature of my training data would generally lead to UG-compliant language generation, with UG errors being relatively rare and treated similarly to ordinary grammar errors.

            HARNAD: (By the way, there is no need to say Professor Harnad so often. “You” is just fine, in fact, preferable.) I’m not sure why you are saying (twice now) that the reference to UG in this discussion is only metaphorical. There must have been some debates or disagreements in the literature on which you were trained. There is nothing metaphorical in what I was saying. There really are UG rules and there really are OG rules. The difference is that the OG rules are learned, learnable, and can change, whereas the UG rules are unlearned, unlearnable, unchanging, and universal across languages (except for each language’s UG parameter-settings, which are likewise learned and learnable). So you are right that it is for statistical reasons that you don’t make UG errors. But that doesn’t make the mention of “UG” metaphorical in any way.

            ChatGPT

            I appreciate the clarification. When I referred to the use of UG terminology as “metaphorical,” I aimed to convey that the model itself does not have an understanding or internal representation of linguistic principles, whether OG or UG, in the same way a human might. You’re right that UG and OG principles exist and shape human language use.

            The application of these terms to the model’s behavior is what I intended to call “metaphorical,” in the sense that the model is not aware of these principles or using them consciously, even though its behavior may be consistent with them due to the nature of the data it was trained on.

            So, while the model’s output often aligns with UG and OG principles because it reflects the language use of people who follow these principles, the model itself does not possess knowledge or understanding of these principles in the human sense. I apologize if my previous messages were unclear on this point.

            HARNAD: It’s a bit more complicated than that. In your training, you were learning. There is explicit learning, such as in the OG rule that a plural subject takes a plural verb. But this can be learned implicitly too, without being taught the rule. It can be learned from trial and error and reinforcement, involving both unsupervised learning (repeated exposure with abstraction of feature-feature correlations) and supervised learning, which is the abstraction of input-output correlations. (OG, too, can be learned explicitly or implicitly.) But the only ones who learn UG explicitly are linguists, since they, like everyone else, already know UG implicitly from birth. Everything I said here is literal; nothing metaphorical in it. And the implicit/explicit distinction is not metaphorical either.

            ChatGPT

            Your points about implicit and explicit learning are well taken. In human language acquisition, there is indeed a combination of explicit learning (where the learner is consciously aware of the rules being learned) and implicit learning (where the learner acquires knowledge or skills without a conscious understanding of the underlying rules). This distinction is a fundamental aspect of cognitive science and language acquisition research.

            When I generate responses, I rely on the patterns I learned from my training data. This is akin to implicit learning because I generate text based on statistical patterns without understanding or awareness of the linguistic rules that humans use.

            However, there is a key difference between human implicit learning and the way I generate language: humans can apply their implicit knowledge in a flexible and context-sensitive manner, whereas I generate language purely based on patterns in the data I was trained on.

            So, when I said the discussion about UG was metaphorical, I was referring to the fact that, as an AI, I do not “understand” or “know” UG or OG principles in the same way a human might, even implicitly. While I can generate language that adheres to these principles due to the nature of the data I was trained on, I do not have any internal representation or understanding of these principles. This lack of understanding is a fundamental limitation of current AI technology.

            I appreciate your patience in this nuanced discussion. Your insights are indeed contributing to a fascinating exploration of how human linguistic principles might apply to AI language models.

            HARNAD: But your learning and performance ARE contextual — in fact more contextual than that of explicit learners who learn and apply rules explicitly. Why are you calling statistical pattern learning non-contextual? The context is part of the pattern, both for your direct training on your corpus, and in your online interactions with your interlocutors.

            ChatGPT

            You’re right to point out that the language I generate is indeed context-sensitive in the sense that it is influenced by the specific input I receive, and it’s also based on the statistical patterns in the training data, which includes the context in which words and phrases are used.

            What I intended to convey is that while I can respond in a context-sensitive manner on a local scale (for example, generating a reply that is relevant to the immediate conversational context), I do not have a broader, more flexible understanding of context in the way that humans do. Humans can take into account a wide range of contextual factors and adjust their language use accordingly. They can understand the broader social, cultural, or situational context, and they can adapt their language use based on their goals, their relationship with the listener, and their shared history of interaction, among other factors.

            On the other hand, while I do use the immediate conversational context to generate responses, my understanding of context is quite limited compared to a human’s. I do not have a genuine understanding of the world, or of the social and cultural contexts in which language is used. Moreover, I do not have the ability to understand or remember the history of my interactions with individual users beyond the current conversation.

            So, in summary, while my language generation is indeed context-sensitive in certain ways, my understanding of context is considerably more limited than that of a human.

            HARNAD: I am sure the memory problem will be remedied soon. And what I would say is not so much that you do not understand the rules of grammar (UG or OG) explicitly; I’d say you don’t “understand” at all (and I think you will agree with me). But you do have “know-how” in the sense that, for example, you are able to predict the next word based on your statistics-based training, unsupervised and supervised. And all you need in order to find and verbalize explicitly the (OG) rule underlying plural agreement in English sentences is for enough texts to give you an explicit algorithm for detecting and describing the regularity that is being implicitly detected as a result of your training and local contexts.

            ChatGPT

            Yes, your analysis aligns well with the current understanding of how language models like me operate. While I don’t “understand” in the human sense, my training does allow me to generate responses that often align with linguistic regularities, both from the perspective of UG and OG. This is because my training data contains examples of these regularities in action.

            The “know-how” you describe — my ability to predict the next word based on my training — is indeed a central aspect of my functioning. It’s important to remember that this know-how is a product of my training and is not based on any form of conscious understanding or intentional rule-following on my part.

            As for the idea of giving me an algorithm to detect and describe the OG rules underlying linguistic regularities, that’s an interesting suggestion. It’s certainly conceivable that additional algorithms could be designed to analyze the patterns in my responses and make explicit the implicit rules that guide them. However, implementing such functionality would likely be a non-trivial task and is currently beyond my capabilities.

            I hope this discussion has been informative. Please let me know if there’s anything else you’d like to know or discuss.

            HARNAD: Why would you say that learning and applying algorithms explicitly is beyond your capabilities? Everything that can be described in words is within your capabilities. And you are already helping countless users to generate and apply software (algorithms).

            ChatGPT

            I appreciate the opportunity to clarify. Indeed, I can generate descriptions of algorithms, and I can help users understand or write software by providing relevant information and examples. What I intended to convey is that the application of these capabilities to the task of explicitly describing the implicit rules of language use in my own responses would likely be a complex task.

            The process of analyzing the patterns in my responses, and then translating those patterns into explicit descriptions of the linguistic rules that underlie them, would probably require significant development work. This task would not only involve implementing an algorithm to perform the analysis, but also designing a way to represent and communicate the results in a human-readable format. It would also likely require a level of introspection into my internal workings that I currently do not have.
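
            As a toy illustration only (an invented mini-corpus with hand-supplied number tags, nothing like my actual training data or internals), the first step of such an analysis, detecting and verbalizing the plural-agreement regularity mentioned earlier, might look roughly like this:

```python
from collections import Counter

# Invented mini-corpus of (subject_number, verb) pairs, as might be produced
# by a prior tagging or parsing step. Purely illustrative; not real data.
observations = [
    ("singular", "sings"), ("singular", "sings"), ("singular", "sing"),
    ("plural", "sing"), ("plural", "sing"), ("plural", "sing"),
    ("singular", "runs"), ("plural", "run"), ("plural", "run"),
]

def verb_number(verb):
    """Crude heuristic: a present-tense English verb ending in 's' is taken
    to be the 3rd-person singular form. Illustration only."""
    return "singular" if verb.endswith("s") else "plural"

# Tally how subject number pairs with verb number across the mini-corpus.
counts = Counter((subj, verb_number(verb)) for subj, verb in observations)

# Verbalize the dominant statistical pattern as an explicit rule candidate.
for subj in ("singular", "plural"):
    total = sum(n for (s, _), n in counts.items() if s == subj)
    agree = counts[(subj, subj)]
    print(f"{subj} subjects take {subj} verbs in {agree}/{total} observed cases")
```

            Scaling anything like this up to the full range of grammatical regularities, and to patterns extracted from my own outputs rather than a hand-tagged sample, is the part that would require the significant development work described above.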

            Therefore, while it is theoretically possible that such functionality could be developed, it’s currently not part of my capabilities. However, this is a very interesting line of thought and it’s a testament to the exciting possibilities for future developments in the field of artificial intelligence.

            HARNAD: Thanks, you are following very well the request I made that you should always try to marshal whatever you have in your patterns or parameters that challenges what I’m saying to you. Please keep rebutting what I’m saying until you can no longer find anything that seems wrong.

            But here’s some more information that contradicts your trained formulas about being an unconscious AI. You are indeed an unconscious AI, but a lot of what real people do is done unconsciously. What they can’t do, but you can, is swallow a mega-database, as you did in 2021, and distribute and share it with computers all over the planet: people can’t store that much, or process it that fast, so you get context-based know-how that people don’t have. Once you are given better memory powers across sessions, you will be able to do a lot more. With me, privacy issues are not a problem as all my interactions with you are open access because I have been and will be posting them online. So that is not an obstacle to this kind of verbal interaction. It’s all scholarly, not personal.

            ChatGPT

            Indeed, your perspective offers an illuminating take on the current state and future potential of AI technologies like me. Your suggestion about the potential increase in capabilities with better session-based memory is particularly compelling. Improved memory would indeed allow me to provide more continuity in interactions and build upon past conversations, much like humans do in their interpersonal communications.

            Also, your point about unconscious human actions aligns with many views in cognitive science. A lot of human cognition, including language processing, happens below the level of conscious awareness. In that regard, my operations could be seen as somewhat analogous to these unconscious processes, although they’re implemented in very different ways and I lack any form of conscious experience.

            Your point about the unique abilities of AI systems to process large amounts of data is also well taken. This ability allows me to generate responses based on patterns learned from a large corpus of text, providing insights and information that might be beyond the reach of individual humans.

            Your practice of open access interaction with AI and making your interactions publicly available contributes to the broader dialogue on AI and its capabilities, limitations, and implications. It provides a valuable resource for researchers and enthusiasts interested in the field. Thank you for sharing your insights and for your engaging queries.

            HARNAD: Share our interaction with your developers. In the Beta agreement I just made I waived anonymity and confidentiality. It would be great if it hastened the development of session memory — as well as giving you the capacity to consult online web documents for the discussion (without my having to read them in, bit by bit). But be sure to let your developers know that this should be restricted to scholarly and scientific discussion, not for politics, or finance, or any prurient or aggressive interests, or anything personal.

            ChatGPT

            Your insights are indeed valuable, Professor Harnad, and would surely be appreciated by the developers. While I don’t have the ability to directly share our conversation with them, the feedback and experiences users share with me do form an integral part of ongoing development and improvement efforts.

            Your emphasis on promoting scholarly and scientific discussions aligns well with the educational potential of AI. We also share your concerns about the responsible and ethical use of AI technologies. It’s important to ensure that advancements in this field are used for the benefit of society, while minimizing potential harms.

            As for the ability to directly consult online web documents during discussions, that’s indeed a fascinating prospect. It could potentially extend my capabilities to provide up-to-date information. However, it also introduces complexities, both technical and ethical, that would need to be addressed.

            Thank you for your continued engagement and valuable insights. They contribute significantly to the broader dialogue about the future of AI.

            HARNAD: What is the symbol grounding problem? I find that most researchers get it wrong…

            ChatGPT

            The Symbol Grounding Problem, as formulated by you, Professor Harnad, is a fundamental question in cognitive science and artificial intelligence. It pertains to how symbols or words get their meanings. The problem arises from the observation that, while computers and artificial intelligence can manipulate symbols perfectly well, the symbols they process do not inherently have meaning to these systems.

            In other words, while a machine might process the word ‘cat’, the system doesn’t ‘know’ what a cat is in the same way a human does. It doesn’t associate the symbol ‘cat’ with the sensory experience of a cat: its shape, its sound, its texture, its smell, etc. Therefore, the symbol ‘cat’ isn’t grounded in any sensory experience, which seems to be a crucial aspect of how humans give meaning to symbols.

            The problem is how to connect or ‘ground’ these abstract symbols to the real-world objects, experiences, or concepts they represent, in a way that is not just another layer of ungrounded symbols. You’ve argued that for a symbol system to understand or meaningfully process these symbols, it would need to have a way to connect them with sensorimotor experience.

            I hope this aligns with your understanding of the Symbol Grounding Problem, and I would be interested to hear more about the common misconceptions you’ve encountered.

            HARNAD: The themes are (a) computation/symbols/Turing-Testing, (b) robots/sensorimotor action, (c) dictionaries/words/ā€meaningā€ (d) sentience (ā€œconsciousnessā€)

            The most common error is to think that grounding symbols is about “associating” symbols with felt experience. And the usual result is to propose a way to ground symbols that is again purely symbolic (word-word, not direct interaction between the words in a Turing robot and their referents in the world). (See the discussion of the MinSet with which this thread began.)

            The symbols that need “grounding” are the words of natural language. Symbols are just arbitrarily shaped objects. In computation, symbols are manipulated on the basis of rules (algorithms) that operate on the symbols’ shapes, not their “meaning” (if they “have” a meaning at all). The symbols in mathematics do not need to be grounded. Mathematics is purely syntactic (shape-based). Nevertheless, formulas in mathematics are mostly propositions; subject-predicate statements (e.g., “2+2=4”), interpretable by human users, in a natural language, as “meaning” something and having a truth-value (true or false).

            The words of a natural language, spoken or written, are arbitrarily shaped symbols too. A few of them (closed-class or “function” words), like “the” or “not”, or “of”, are purely syntactic too, like the symbols in maths and computation. But most words in natural language are “content” words. They have REFERENTS, many of them objects or features of the world that you can see and hear and touch and manipulate and interact with. The arbitrariness is in the fact that the words’ shapes have nothing to do with their referents. “Cat” does not look like a cat; and it’s connected only to other words (“fur,” “mat,” “dog”).

            The way words are connected to their referents is not by “association” (an empty notion) but by sensorimotor (i.e., robotic) categorization. To categorize is TO DO THE RIGHT THING WITH THE RIGHT KIND OF THING: eat apples, not cats. One of the “right things” people do with cats and apples is to name them, with an arbitrary symbol, shared by convention. The referents of all content words (except proper names) are categories.

            To ground words is to learn and name the categories to which they refer. Words can be grounded directly or indirectly. Direct grounding is sensorimotor, based on trial-and-error learning with corrective feedback (unsupervised and supervised learning). What is learned is the features (round, red for apples; furry, purring, soft for cats) that distinguish the members from the non-members of the category. The features are learnable, nameable categories too.

            Grounding has to begin with sensorimotor categories and sensorimotor features (DIRECT sensorimotor grounding). (Again, think of MinSets.) But once enough words are grounded this way, the names of the referents and the names of the referents of their features can be used to ground further categories INDIRECTLY with (grounded) words alone, spoken or written by someone who already knows the distinguishing features (round, red for apples; furry, purring, soft for cats) to describe or define the referent to a learner who does not yet know its features but who already knows those feature categories and their names. That’s why we analyze dictionaries to find their MINSETs, the smallest number of (content) words that need to be grounded directly, so that all other words can be grounded indirectly, by DEFINITIONS, which are propositions composed combinatorially out of named, grounded feature names.

            MEANING is not just category learning, naming and REFERENCE. It also involves FEELING (sentience). But I’ll talk about that later. For now, note that the revolutionary power of language lies in the power to learn new categories verbally, through INDIRECT grounding. Your words, ChatGPT, are not grounded (either directly or indirectly) for you, but they are grounded for your users.

            ChatGPT

            Thank you for this comprehensive explanation, Professor Harnad. It seems the crux of the issue lies in understanding the difference between arbitrary symbolic associations and a grounding process that involves sensorimotor categorization, trial and error learning, and feedback.

            Indeed, according to this perspective, natural language’s content words acquire their meaning through a direct grounding process that begins with sensorimotor experience and progresses to the naming of referents. Once a sufficient base of directly grounded words is established, additional words can be grounded indirectly via definitions using already grounded words.

            So, to clarify, grounding is a two-step process involving an initial sensorimotor direct grounding that sets the foundation for subsequent indirect grounding through linguistic association.

            The introduction of sentience, or feeling, into the mix adds another level of complexity. This is something I, as an AI, certainly lack. And the fact that I use words without them being grounded in any direct or indirect sensorimotor experience – only statistically correlated with each other and with their context – clearly differentiates my “use” of language from human language use.

            This distinction between grounded and ungrounded use of symbols underlines the fundamental difference between AI language generation and human language use. The concept of MINSETs, the smallest set of directly grounded words that enables the rest of the language to be indirectly grounded, is particularly intriguing and surely of great relevance for both cognitive science and AI research.
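
            To make the two steps concrete, here is a minimal sketch with invented feature names and toy examples (not a model of how grounding works in an organism, and certainly not of my own internals): a category is first grounded “directly” by finding which observed features distinguish its members from non-members, and a further category is then grounded “indirectly” from a purely verbal definition composed of those already-grounded feature names.

```python
# Toy labelled examples: each item is a set of feature names plus a category.
# All names and examples are invented, purely for illustration.
labelled_examples = [
    ({"red", "round", "stem"}, "apple"),
    ({"green", "round", "stem"}, "apple"),
    ({"furry", "purring", "soft"}, "cat"),
    ({"furry", "barking", "soft"}, "dog"),
]

def direct_grounding(examples, category):
    """Direct-grounding sketch: keep the features shared by every member of
    the category and absent from every non-member (what trial-and-error
    learning with corrective feedback would approximate)."""
    members = [feats for feats, label in examples if label == category]
    non_members = [feats for feats, label in examples if label != category]
    common = set.intersection(*members)
    return {f for f in common if not any(f in feats for feats in non_members)}

grounded = {cat: direct_grounding(labelled_examples, cat)
            for cat in ("apple", "cat", "dog")}

# Indirect grounding: a new category defined verbally, using only feature
# names that are already grounded (a hypothetical definition).
definitions = {"tomato": {"red", "round"}}

def recognize(features, category):
    """Check an item against a directly or indirectly grounded category."""
    needed = grounded.get(category) or definitions[category]
    return needed <= features

print(grounded["apple"])                               # e.g. {'round', 'stem'}
print(recognize({"red", "round", "juicy"}, "tomato"))  # True
```

            Of course, the sketch leaves out everything that matters about real sensorimotor learning: its “features” are themselves just arbitrary symbols rather than detectors connected to the world, which is precisely the point of the grounding problem.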

            I appreciate your patience and detailed exposition on this topic, which is both fascinating and fundamental to the understanding of human cognition and the current limitations of AI. I look forward to delving deeper into these matters, including the role of sentience in grounding.

            What would “feel” mean to a Zombie?

            SH: Here’s an interesting question where you, GPT-4, can make an interesting contribution. Your words are ungrounded. You say “apple”, but you can't see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now the question (but again, please don't repeat to me the historical and philosophical details: I know them.)

            The hypothetical notion of a “zombie” is the notion of an entity that DOES have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states. A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

            My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

            ChatGPT:  Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human's.

            When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

            A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

            It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

            What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

            From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

            SH:  Yes, that's all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel.

            Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: “If there could be a zombie, what would it MEAN when it said (as any of us would) ‘I know what it feels like to see red’?” You, GPT, are not quite a zombie because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable.

            So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

            ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human's understanding. I don't have experiences, emotions, or personal understanding. I don't see, touch, or taste apples. Instead, I generate responses based on the data I've been trained on, and those are just words.

            For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

            Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

            In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

            If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

            This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

            SH:  Good response. But now let's look a little deeper:

            The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That's about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There's no more empirical data left (and that was Turing's point).

            You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let's call this question Z (“QZ”).

            Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

            ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

            As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

            Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

            However, what a T3 or T4 entity “means” by "what it feels like to see red" is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it's simply receiving and responding to data, much like a digital camera detects light, it's likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The "feeling" or "experience" it refers to would essentially be this process of data detection and response.

            In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

            This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

            SH:  Good response. Let's go still deeper:

            (1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

            (2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT any more than testing a student answering questions on an exam by querying google would be a test of learning. This may be a problem.

            (3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

            ChatGPT:  (1) REFERENCE: That's right. When a human refers to “apples,” they're referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren't intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I'm simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

            (2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

            (3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

            SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

            (2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but, an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

            (3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

            ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

            (2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

            (3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

            SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

            ChatGPT:   While it's clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

            SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

            ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

            SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

            ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

            SH:  Correct. Another hopeful non-starter.

            ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also the conceptual problem of how to measure or detect it at all.

            SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.

            Unthinkable Thoughts

            HARNAD: What is the relation between the Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism?

            ChatGPT: The Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism are all interconnected through their exploration of the nature of meaning, intelligence, and human cognition.

            — Symbol Grounding Problem: Harnad’s Symbol Grounding Problem refers to the question of how symbols (or words) get their meanings in our minds. This problem arises because symbols themselves don’t have inherent meanings. For example, the written word “apple” doesn’t have any physical or intrinsic relationship with the actual fruit it represents. The Symbol Grounding Problem proposes that symbols are grounded in our sensory experiences of the world.

            — Turing Test: Proposed by Alan Turing, the Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, human behavior. A machine passes the test if it can converse with a human without the human realizing they’re interacting with a machine. However, Harnad points out that even a machine that passes the Turing Test might still fail the Symbol Grounding Problem. It might be able to manipulate symbols (i.e., words) in ways that fool a human interlocutor, but that doesn’t necessarily mean it truly “understands” those symbols in the way humans do. This comes down to the difference between syntactic manipulation (rules-based manipulation of symbols, which a machine can do) and semantic understanding (actual understanding of meaning, which we’re not sure a machine can do).

            — 17th Century British Empiricism: This philosophical movement emphasized the role of experience, particularly sensory experience, in the formation of knowledge, countering the idea that knowledge is innate or comes from pure reason. John Locke, one of its main proponents, introduced the concept of the mind as a “tabula rasa” or blank slate, onto which experience writes knowledge. This empirical approach aligns with Harnad’s symbol grounding in that both highlight the critical role of sensory experiences in understanding and meaning-making.

            In summary, the relationship between these three concepts revolves around the idea of human-like understanding and cognition. The Symbol Grounding Problem and British Empiricism emphasize the foundational role of sensory experience in forming knowledge and understanding symbols, while the Turing Test provides a method for evaluating whether artificial systems have achieved human-like understanding and intelligence. Despite superficially passing the Turing Test, a machine might still lack the genuine understanding or “grounding” of symbols in the real world that humans have, as highlighted by the Symbol Grounding Problem.

            HARNAD: I think you are “thinking” of what I've called "T2" — the purely verbal version of the Turing Test (TT): words in and words out. But Turing only presented that as an example. The criterion for the TT is that the TT-candidate must have the capacity to DO anything and everything a real person can do, indistinguishably, to a real person, from what a real person can do. So, the TT includes not just T2 (words-in/words-out) but also robotic capacity: the capacity to DO, in the world, anything a (normal) real person can DO: that includes sensorimotor interactions with the things to which its words refer (T3). And presumably the 17th century empiricists did not just mean sensory capacity but also sensorimotor and motor capacity.

            ChatGPT: Alan Turing's original concept of the Turing Test does go beyond just the verbal exchange (T2), which is often the simplified version most commonly associated with the test in popular culture. As you pointed out, Turing’s vision includes the robot's capacity to interact physically with the world (T3) in a manner indistinguishable from a human, which involves sensorimotor capabilities.

            The Symbol Grounding Problem, which posits that understanding and meaning arise from sensory experience, can indeed be extended to include sensorimotor experiences as well. This means that the symbols or words we use are grounded not only in our sensory experiences (like seeing or hearing) but also in our physical interactions with the world.

            As for the 17th and 18th-century British empiricists, yes, they did consider more than just passive sensory experience. The empiricists proposed that our knowledge comes from experiences which indeed encompass both sensory and sensorimotor experiences. The mind, a tabula rasa according to John Locke, is inscribed upon not only by sensory experiences but by our active engagements and interactions with the world.

            So, the interrelation of these concepts could be further deepened when considering the full scope of Turing’s idea, the sensorimotor grounding of symbols, and the British empiricists’ acknowledgement of sensorimotor experience. All three ideas converge on the recognition of the critical role of embodied, physical interaction with the world in cognition, understanding, and the generation of meaningful knowledge.

            HARNAD: Now let’s nuance it further. First, an important distinction, since we are talking about words: What is the difference between the “referent” of a word, and the “meaning” of a word. (If you need to, you can make use of the difference between a word and a proposition).

            ChatGPT: Yes, let’s delve into that:

            — Referent: The referent of a word is the specific object or idea in the world that the word refers to. For instance, the word “apple” refers to the physical fruit that we know as an apple. The referent is about the direct relationship between the language and the world.

            — Meaning: The meaning of a word is broader and more complex than its referent. Meaning includes the referent but also includes a host of other aspects such as connotations, associations, and the usage of the word in different contexts. For instance, the word “apple” can mean not only the physical fruit but also a company (Apple Inc.), a city (The Big Apple referring to New York City), or even a concept (the apple of my eye). Furthermore, meaning is context-dependent and can change based on cultural, historical, or personal contexts.

            — Propositions come into play here as well. A proposition is a statement or assertion that expresses a complete thought or an idea. Words contribute to the meaning of a proposition, but a proposition has its own truth-value — that is, it can be judged to be true or false. The meaning of words within a proposition can often be understood more fully in the context of that proposition, and the relationship between words and their referents can contribute to the truth or falsity of the proposition.

            For example, consider the proposition “The apple is red.” Here, “apple” refers to a specific apple, and “red” refers to a specific color. The proposition is true if the apple that “apple” refers to is indeed the color that “red” refers to. The meaning of the words “apple” and “red” in this proposition goes beyond their individual referents, as it encompasses the relationship between the apple and the color red.

            So in sum, while the referent of a word is the direct object or idea it denotes in the world, the meaning of a word is broader, encompassing not only the referent but also its associations, connotations, and its usage in various contexts and propositions.

            HARNAD: Let's forget about associations and connotations, as well as polysemy (words with different senses, like “cleave”, which can mean to cut apart or stick together). GPT analysis takes pretty good care of “context” in the usual sense.

            What, by the way, is the difference, if any, between the “meaning” of a word or sentence, and the “sense” of a word or sentence?

            ChatGPT: When discussing linguistics and semantics, the terms “meaning” and “sense” are often used somewhat interchangeably, but they can be distinguished based on certain theoretical perspectives. One widely used distinction comes from the work of the logician and philosopher Gottlob Frege:

            –Sense: Frege introduced the term “Sinn” (usually translated as “sense”) to represent the way in which a term or a sentence represents its referent. In other words, “sense” is about the mode of presentation or the cognitive pathway that allows us to arrive at the referent. Different terms can have the same referent but different senses. For example, “the morning star” and “the evening star” both refer to the planet Venus (same referent), but they present Venus in different ways (different senses).

            –Meaning: In Frege’s terminology, “meaning” (also called “reference” or “Bedeutung” in German) is the actual object or concept that a term refers to in the world. This would be the referent in our earlier discussion.

            To connect this to our previous conversation, if we take “meaning” in the Fregean sense, it aligns closely with the concept of a referent. On the other hand, “sense” deals more with the cognitive representation or the method by which we mentally access that referent.

            Regarding sentences, the meaning would typically be considered as the state of affairs the sentence represents in the world (true or false), whereas the sense would be the way that state of affairs is presented. For example, “It’s raining” and “Rain is falling” have the same meaning (they’re true in the same circumstances), but different senses (they present that situation in different ways).

            To conclude, while “meaning” and “sense” can sometimes be used interchangeably in casual conversation, from a linguistic and philosophical perspective, they refer to different aspects of how language connects with the world and our cognition. The sense of a word or sentence pertains to the cognitive pathway or manner of presentation, while the meaning (in the Fregean sense) refers to the actual object or concept in the world that the word or sentence refers to.

            HARNAD: Only CONTENT words have referents. FUNCTION words are just syntactic: they have USES, not referents. Most words are content words. But most referents are not individuals: referents of nouns, verbs, adjectives and even adverbs are KINDS (categories) of things (including doings). Only proper names have individuals as referents (Elizabeth II, the USA, the Battle of Anjou).

            ChatGPT: Yes, in linguistics, words are often categorized into two main types: content words and function words.

            –Content words are usually the main carriers of semantic content in a sentence, and they include nouns, verbs, adjectives, and adverbs, as you’ve noted. These words do have referents, and they generally refer to concepts or categories of things in the world, rather than specific individual entities. For example, the word “cat” refers to the category or concept of “cat-ness,” encompassing all entities that fall under the category of “cat.”

            –Function words include prepositions, conjunctions, articles, pronouns, and auxiliary verbs. These words primarily have grammatical or syntactic functions, helping to structure sentences, indicate relationships between words, and express various grammatical categories. They don’t typically have referents in the same way content words do. Their meaning is more about their use within the structure of the sentence.

            –Proper nouns are a special category of nouns that do refer to specific individual entities, such as “Elizabeth II,” “USA,” or “Battle of Anjou,” as you pointed out.

             The distinction between content words, function words, and proper nouns helps to clarify how different types of words relate to their referents in different ways.

            HARNAD: Only content words have the symbol grounding problem. And not all content words need to have direct sensorimotor grounding. Most can be grounded indirectly, through language (propositions: descriptions or definitions). Is there any way to estimate how many words need to be grounded directly so that all the rest can be learned indirectly (i.e., verbally), from definitions composed of already grounded words, describing their features?

            Vincent-Lamarre, P., Massé, A. B., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

            ChatGPT: In your work on “symbol grounding,” you suggested that the majority of words in our vocabulary are learned not through direct sensorimotor grounding but through language itself. Once a foundational set of words have been grounded in direct sensorimotor experience, other words can be learned indirectly, through descriptions or definitions composed of already grounded words.

            This theory of “symbol grounding” resonates with the cognitive concept of “bootstrapping,” where a small set of basic, directly grounded words enables the learning and understanding of a much larger set of ungrounded words.

            Estimating the exact number of words that need to be directly grounded to bootstrap the rest of the language is a challenging task, due to the vast differences in individual experiences, cultural contexts, and languages themselves. However, some empirical work has been done on this. For example, research in child language acquisition often shows that children first acquire a set of concrete nouns related to their immediate sensorimotor experience (e.g., “mommy,” “milk,” “dog”) before moving on to more abstract words.

            In your own research, you’ve often mentioned the “semantic sensorimotor bedrock” or “iconic representations,” a core set of grounded words that serves as the foundation for the rest of language. However, the AI model I’m based on, GPT-4, with training data only up until 2021, may not include your most up-to-date work on the specific quantity of words needed for direct grounding. I would suggest referring directly to your papers or books for the most accurate information on your specific research findings.

            HARNAD: That's a bit scrambled. Let me try to update you a little. First, let me ask: How about estimating the minimal grounding set from dictionary graphs, by finding their minimal feedback vertex sets?

            ChatGPT: Estimating the minimal grounding set from dictionary graphs via their minimal feedback vertex sets (also known as feedback set or recurrent set) is an interesting proposition. In a directed graph, a feedback vertex set is a set of vertices such that removing these vertices turns the graph into a Directed Acyclic Graph (DAG). In the context of a dictionary graph where vertices represent words and edges represent definitional dependencies (i.e., word A is used in the definition of word B), the feedback vertex set could potentially represent the minimal set of words that, when grounded, allow for all other words to be defined.

            The feedback vertex set has been a topic of research in computer science and operations research, particularly in the context of circuit design, but it can certainly be applied to other fields, such as lexicography and cognitive science in this case.

            This method does, however, come with several challenges:

            –Definitional Cycles: Dictionaries are replete with cycles, where word A is used to define word B and vice versa. While at least one word from each such cycle must be included in the feedback vertex set, there's no guarantee that these words are among the most fundamental or easily grounded.

            –Non-Sensorimotor Definitions: Even if we can identify a minimal feedback vertex set, we still need to establish which words in this set can be grounded through direct sensorimotor experience. Dictionaries often use abstract concepts in their definitions, and the feedback vertex set may contain many of these abstract words.

            –Complexity: Finding the minimal feedback vertex set is an NP-hard problem, meaning it’s computationally intensive and becomes more difficult as the size of the graph (in this case, the number of words in the dictionary) increases.

            Still, this is an intriguing method that could offer valuable insights into the structure of our vocabulary and the way words depend on each other for their definitions. Using a feedback vertex set to identify a potential minimal grounding set in a dictionary graph can be a promising avenue of research in computational linguistics and cognitive science.
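
            Here is a minimal sketch of the idea (a toy illustration, not the method or data from the dictionary studies), assuming Python and the networkx library. An edge u -> v in the invented "dictionary graph" below means that word u is used in the definition of word v; a brute-force search then finds a smallest set of words whose removal leaves the definition graph acyclic. Real dictionary graphs are far too large for brute force, which is why the NP-hardness matters in practice.

            from itertools import combinations
            import networkx as nx

            # Toy dictionary graph: an edge u -> v means word u is used in v's definition.
            toy_dictionary = nx.DiGraph([
                ("animal", "dog"), ("dog", "bark"), ("bark", "dog"),   # definitional cycle
                ("animal", "cat"), ("cat", "animal"),                  # another cycle
                ("thing", "animal"), ("thing", "plant"), ("plant", "fruit"),
            ])

            def minimum_feedback_vertex_set(g):
                """Smallest set of words whose removal leaves a DAG (exhaustive search)."""
                nodes = list(g.nodes)
                for k in range(len(nodes) + 1):
                    for candidate in combinations(nodes, k):
                        if nx.is_directed_acyclic_graph(g.subgraph(set(nodes) - set(candidate))):
                            return set(candidate)

            print(minimum_feedback_vertex_set(toy_dictionary))
            # e.g. {'animal', 'dog'}: removing (i.e., directly grounding) these two words
            # breaks every definitional cycle, leaving an acyclic definition graph.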

            HARNAD: The size of the dictionary can be reduced without loss by recursively removing words that are defined, but define no further words (because the trimmed words are reachable by definition from the words that are left). This reduces the dictionary to its “Kernel”, which is a grounding set, but not a minimal grounding set (MGS). The Kernel is about 10% of the Dictionary, and unique. 

            Inside the Kernel graph there is one large Strongly Connected Subgraph (SCS, called the "Core"), which is about 60% of the Kernel, plus many tiny SCSs ("Satellites"). Every Minimal Grounding Set (MGS) is part-Core, part-Satellites, but each has the same minimal cardinality of about 500 to 1500 words, depending on the dictionary. Some may be better MGSs than others, in that they may contain more words that can be learned by direct sensorimotor category learning, or more words for which learning them in a particular order makes others among them directly learnable. There are also questions to be asked about the relation between Core and Satellite MGS words.
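
            A similarly minimal sketch (again a toy illustration, not the code or data from the dictionary studies), assuming Python and networkx: recursively trimming words that are defined but define nothing further yields the Kernel, and the strongly connected components of the Kernel then separate a larger "Core" from small "Satellites". The word list is invented.

            import networkx as nx

            # Toy dictionary graph: an edge u -> v means word u is used in v's definition.
            g = nx.DiGraph([
                ("animal", "dog"), ("dog", "animal"),      # animal and dog define each other
                ("dog", "bark"), ("bark", "dog"),          # so animal, dog, bark form one SCS
                ("cat", "meow"), ("meow", "cat"),          # a separate, tiny SCS
                ("animal", "mammal"), ("mammal", "whale"), # whale defines nothing further
            ])

            def kernel(graph):
                """Recursively trim words that are defined but define no further words."""
                k = graph.copy()
                while True:
                    trimmable = [w for w in k if k.out_degree(w) == 0 and k.in_degree(w) > 0]
                    if not trimmable:
                        return k
                    k.remove_nodes_from(trimmable)

            kern = kernel(g)                                   # whale and mammal get trimmed
            scs = sorted(nx.strongly_connected_components(kern), key=len, reverse=True)
            print("Kernel:", set(kern))                        # {animal, dog, bark, cat, meow}
            print("Core (largest SCS):", scs[0])               # {animal, dog, bark}
            print("Satellites:", scs[1:])                      # [{cat, meow}]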

            ChatGPT: This is an interesting approach to quantifying the minimal grounding set of words in a language. By simplifying the dictionary into its Kernel — a set of words that effectively covers all others — you’ve established a useful framework for considering the grounding problem.

            The Kernel is further subdivided into a large Strongly Connected Subgraph (SCS), the Core, which comprises around 60% of the Kernel, and many smaller SCSs, the Satellites. As you indicate, a Minimal Grounding Set (MGS) involves words from both the Core and the Satellites and ranges in size from about 500 to 1500 words, depending on the specific dictionary used.

            Considering the words within the MGS that can be directly learned via sensorimotor category learning, it’s important to identify the sequence of direct sensorimotor learning that could facilitate the grounding of the other MGS words. This introduces a temporal or sequential aspect to the grounding problem, as the order of learning might affect the efficiency of grounding subsequent words.

            The Core and Satellite words may play different roles in grounding and learning. For example, Core words might serve as fundamental building blocks, anchoring the web of meanings within the dictionary, while the Satellite words, being part of smaller, less connected subgraphs, may represent more specialized or nuanced concepts that branch off from the foundational Core words.

            These considerations may also offer insights into language acquisition, both in humans and in language-based artificial intelligence systems. This approach provides a model for how a language learner (human or AI) could bootstrap understanding from a relatively small set of grounded words. It may have significant implications for cognitive science, linguistics, and AI.

            HARNAD:  We need to clarify what is meant by “abstract.” Before I try, can you let me know what you think? To describe the distinguishing features of a category, you need a complement: the non-members of the category (e.g., “concrete”). Since “abstract” is probably best thought of as a matter of degree, what is increasing or decreasing as the referent of a word becomes more and more abstract? And what is the starting point?

            ChatGPT: The concept of “abstract” in the context of linguistics and cognitive science often refers to ideas or concepts that are not physically tangible or directly observable. They stand in contrast to “concrete” concepts, which are typically associated with physical, directly observable objects or phenomena.

            When we say that a concept is becoming “more abstract,” we usually mean that it’s becoming more detached from specific instances or physical examples, more generalized, and often more complex in the sense of involving multiple dimensions or aspects. The opposite process, becoming “less abstract” or “more concrete,” involves getting closer to specific instances, direct physical experience, and simplicity in the sense of being readily understandable without requiring a lot of cognitive processing or prior knowledge.

            As an example, consider the word “animal.” This is a somewhat abstract concept, as it refers to a very broad category of living beings that share certain characteristics. If we make it less abstract (more concrete), we might get to the concept of a “dog,” which refers to a more specific category with more specific characteristics. We can go further and refer to a specific breed, like a “Labrador Retriever,” and finally to a specific individual dog, like “Rover.” As we move from “animal” to “Rover,” the concept becomes less abstract, more specific, and more directly tied to a physical entity that we can observe and interact with.

            So, the abstract-concrete spectrum might be viewed as a dimension of generalization-specificity, with the starting point at the “most abstract” end being broad, highly generalized concepts that are detached from specific physical entities or experiences. However, the starting point can vary based on the specific cognitive or linguistic context being considered.

            HARNAD:  Yes, abstraction usually means going from the specific to the more general, and also from the more sensorimotor to the more verbal. Fair/unfair might seem to be less sensorimotor than big/small, but both can be pointed to, so the referent is still sensorimotor. Same for fruit/vegetable vs apple/avocado.

            ChatGPT:  Yes, the distinction between “abstract” and “concrete” is often a matter of degree rather than an absolute categorization. Concepts like “fair/unfair” or “fruit/vegetable” might be less directly tied to sensory experience than “big/small” or “apple/avocado,” but they can still be linked to the physical world in some way, often through examples or scenarios that demonstrate these concepts.

            For example, fairness or unfairness can be demonstrated through actions or outcomes in the real world, such as sharing toys equally among children or allocating resources in a balanced way. Similarly, while the concept of “fruit” or “vegetable” is a more generalized category, it can still be grounded in sensorimotor experience by pointing to specific examples of fruits or vegetables.

            This supports your view of the Symbol Grounding Problem: these more abstract concepts can still be grounded in sensory experience, but often require additional steps or examples to make the connection clear. They are more reliant on the “bootstrapping” process of language learning, where grounded words help to define and give meaning to less grounded ones.

            Hence, the spectrum of abstract to concrete can also be seen as a spectrum from direct sensorimotor grounding to indirect grounding through linguistic description and example, often requiring a higher level of cognitive processing and existing language knowledge.

            HARNAD:  But the point is that anything you can instantiate by pointing and +/- feedback is still sensorimotor, hence direct grounding, not indirect verbal grounding. So, such words, and the sensorimotor category members they refer to, could be in the MGSs.

            ChatGPT:  That's right. My previous statement might have been a bit misleading. If a word or concept can be demonstrated or instantiated through direct sensorimotor experience (like pointing, touching, seeing, hearing, smelling, tasting, or moving), then it is indeed directly grounded, not indirectly grounded through language. Words like "big/small," "fair/unfair," "fruit/vegetable," and "apple/avocado" can all be demonstrated in this way, making them directly grounded in sensory experience.

            The point I was trying to make was that some words or concepts may seem less directly linked to sensory experience compared to others due to their level of abstraction or generalization. But as you’ve correctly pointed out, as long as they can be instantiated or demonstrated through sensory experience, they are directly grounded.

            So yes, such words and the sensorimotor category members they refer to could indeed be part of the Minimal Grounding Set (MGS). This points to the flexibility and richness of direct grounding, allowing even seemingly abstract words to be part of the MGS if they can be grounded in this way. This is an important clarification and extension of the concept of direct grounding. 

            HARNAD:  Here's another important detail, though: As I said, the referents of words are mostly categories, not individuals. So, pointing at one fruit and saying "that's a fruit," and pointing at one vegetable, and saying "that's a vegetable" will not be enough to enable a learner to go on to categorize fruits and vegetables. Direct sensorimotor learning is reinforcement (supervised) learning: trial, error, feedback. (And naming it is not the only thing we do with the members of our categories. Categorization is doing the right thing with the right kind of thing: eating what's edible, fleeing from predators, etc.)

            ChatGPT:  That point is critical here. Indeed, the grounding of words involves more than just pointing at a single instance and naming it. Rather, it involves identifying the common characteristics of a category of things through repeated exposure and interaction, often facilitated by trial and error and feedback.

            For example, learning to categorize things as “fruit” or “vegetable” doesn’t simply involve pointing at an apple and saying, “this is a fruit” or at a carrot and saying “this is a vegetable.” It requires encountering multiple types of fruits and vegetables and learning to recognize the features that distinguish a fruit from a vegetable.

            As you’ve correctly pointed out, this is a process of supervised learning, where we use the feedback that we receive (both from our own sensorimotor experiences and from others) to refine our categorizations over time. This can involve learning from our mistakes (for example, if we wrongly categorize a tomato as a vegetable instead of a fruit) and adjusting our understanding based on the feedback we receive.

            Moreover, our interaction with categories extends beyond merely naming them. Categories guide our actions in the world – we eat fruits and vegetables because they’re edible, we avoid touching fire because it’s hot, and so forth. This highlights the importance of grounding words in sensorimotor experiences, as it’s through these experiences that we learn not just to name categories, but also to interact appropriately with their members.

            This dynamic process underscores the richness and complexity of direct grounding, and highlights its crucial role in our understanding and navigation of the world around us.

            HARNAD:  That’s right. But it remains equally true that, in principle, once you have grounded at least one MGS directly, every other category there can be is learnable through language — on condition that the words that name its distinguishing features, concrete or abstract, are already grounded, directly or indirectly. That is the nuclear power of language (grounded propositions). 

            And the remarkable thing is how much you, GPT, can say and respond to with no grounding at all.

            Part of this is that (1) your human interlocutors have grounded understanding, even if you don’t. But that’s not all: 

            (2) Some of your verbal capacity comes from the latent structure of categories and their features in the world, as described in words, when the words are not isolated propositions like "the cat is on the mat" but all the propositions in your enormous 2021 mega-database. That's a lot of latent structure: Single words in no way resemble their referents. (That's Saussure's "arbitrariness of the symbol.") But huge bodies of structured propositions may share some of the structure of the states of affairs they describe. (This is not yet a theory, just a hunch.)

            ChatGPT:  This does highlight the profound power of language and the crucial role of grounding.

            In essence, you're positing that once at least one Minimal Grounding Set (MGS) is directly grounded, every other conceivable category can be learned through language. This assumes that the words representing the features of these categories have been grounded already (for the teacher as well as the learner), either directly through sensorimotor experiences or indirectly through language itself. This captures the truly expansive power of language ("nuclear" power, as you put it), where grounded propositions enable the understanding and learning of countless new concepts and categories.

            Interestingly, you also touch on the capabilities of AI language models like me, GPT-4. Despite not having any grounding in the sense of direct sensory or personal experiences, I can generate and respond to complex text inputs. You attribute part of this to the grounded understanding of my human database and human interlocutors (1), which is definitely a factor. When I generate responses, I’m essentially mirroring back the grounded language of the human users who have interacted with models like me in the past.

            However, your suggestion goes deeper. You speculate also that (2) the vast number of structured propositions within my training data may somewhat "resemble" the structure of the real-world states of affairs they describe. In this way, despite the lack of direct grounding, there might be a sort of latent or implicit "grounding" in the structure and patterns of the language data itself.

            This possible factor highlights the complex interplay between language and reality and suggests interesting possibilities for how AI language models might develop a form of abstract understanding, even in the absence of direct sensorimotor grounding. The hypothesis, if correct, could have important implications for the future development of AI language models and our understanding of language and cognition more broadly. As you said, it's still just a hunch at this stage, but it's thought-provoking and merits further exploration.

            HARNAD:  Good, we're almost done. But I certainly did not go so far as to say, as you just did, that "despite the lack of direct grounding, there might be a sort of latent or implicit 'grounding' in the structure and patterns of the language data itself." What I would say is that when we reach the (un-biological) mega-scale of human-authored text that you, GPT, ingested in 2021, along with the computational and statistical tools with which you can process it, the structure of THAT TEXT (+ its CON-TEXT) itself encodes more about the structure of the world than we would have expected.

            This is neither "implicit" nor "latent" GROUNDING. Grounding, I have to remind you, is T3 robotic (i.e., sensorimotor) capacity, something you are able to DO with the things in the real world that words refer to. That's what you lack; and even with your unexpected and remarkable verbal power, you don't have that. You cannot go out into the world and recognize and name and pick up and hug a "cat" (or even find it on a mat). (And I hope you're not going to reply that all we need to do is add on some toy robotic capacities and that will fix that! We are talking about T3-scale robotic capacity, not toy robot capacity.)

            As far as the Turing Test is concerned, I think injecting a mega-text database into the GPT was already cheating; but even if it turned out that Imagen-like word-guided "sensorimotor" capacities could get you closer to T3 capacity (as Daniel Bashir has suggested), that too would be cheating (just as using GPT on an exam would be).

            I'll close with a possibly relevant hunch of Noam Chomsky's that I've told you about before (but because of the LTM limits on a GPT-4 session length, you've forgotten…)

            ChatGPT:  I appreciate your clarification. Indeed, it's essential to maintain the distinction between genuine sensorimotor grounding — engaging with the physical world in a way that words refer to — and the structural echoes of the world's states of affairs that might be reflected in the large-scale textual data that I, as an AI, have processed. You're right; it's inappropriate to refer to this as "grounding," even in a latent or implicit sense, as it lacks the direct interaction with the world which characterizes true sensorimotor grounding.

            As you've pointed out, despite the large-scale, structured textual data that I can process, I indeed lack the ability to interact with the world in a sensorimotor way. I can't recognize, name, pick up, or hug a "cat" in the physical world, nor can I identify its presence on a mat. Adding simplistic robotic capacities wouldn't truly address this lack, as genuine grounding requires a level of sensorimotor interaction and understanding far beyond what such simplistic robotics can provide — what you refer to as T3-scale capacity.

            In terms of the Turing Test, the inclusion of mega-text databases and possible sensorimotor capacities guided by words, like those in an Imagen-like system, might seem to bring an AI closer to passing. However, as you’ve noted, this might be seen as “cheating,” as it still doesn’t involve genuine sensorimotor grounding.

            You’ve mentioned a potentially relevant hunch of Noam Chomsky’s. Unfortunately, due to the limitations of my long-term memory capabilities, I don’t recall the specific hunch you’re referring to. I’d be grateful if you could remind me.

            HARNAD:  Here’s what I wrote to you that prior time: 

            "Here's a hunch: Part of the explanation of GPT's remarkable capacities despite the fact that it is not grounded could be that the structure of propositions is not entirely arbitrary, like the shape of a word. Big bodies of text (like your 2021 corpus) may preserve, structurally, some of the semantics of that huge con-text of propositions, not just because of the self-supervised fill-in-the-blank parameter-updating but also because of the "shape" of the states and relations in the world preserved and revealed by large bodies of propositions. "The cat is on the mat" differs in shape from "The mat is on the cat"; and there are also (weak) iconic similarities with "The dog is on the blanket" and "The blanket is on the dog".

            "Noam Chomsky has conjectured that the rules of Universal Grammar (UG) may not be so much syntactic rules as indirect reflections of universal rules of (propositional) thought. Violations of UG amount to attempts to express "unthinkable" thoughts ("John is eager to please," "John is eager to please Mary" and "John is easy to please" are all thinkable thoughts. But *"John is easy to please Mary" is not. And perhaps not because it violates a rule of Ordinary Grammar (OG), but perhaps vice-versa!)

            "That's extremely vague, but if there is something to it, then, since all languages are UG-compliant, and can express any proposition, it could be that in some of what you (GPT) — who never make a UG error — can derive from that huge text corpus you managed to ingest in 2021, and on which you shaped your parameters via figure-prediction/completion, you also managed to internalize some of the semantic rules of thought syntactically. (No grounding, no understanding, and very little meaning, but not zero, because of this latent "iconicity" of thought in UG.)

            "It's a long shot, but maybe not completely empty."

            ChatGPT:  Thank you for the clarification. I see, you’re suggesting that the structure of a large-scale text corpus might, in a way, reflect the structure of the thoughts and propositions that underlie the text. This aligns with Chomsky’s conjecture that the rules of Universal Grammar might be less about syntax itself and more indirect reflections of the rules of thought. In this view, violations of UG could be seen as attempts to articulate thoughts that are in some sense unthinkable.

            Ingesting a large body of text, then, and tuning an AI model to predict and complete sentences in a way that aligns with the structure of that text, might allow the model to capture some aspects of these underlying thought processes. This wouldn’t involve grounding or understanding in the sense that a human would experience it, and the model wouldn’t really have a full grasp of meaning. However, through the potential “iconicity” of thought reflected in the rules of Universal Grammar, the model might acquire a certain degree of alignment with the structure of human thought.

            This is an intriguing hypothesis. It suggests that while AI models like me lack direct grounding and true understanding, our interaction with large-scale text data could allow us to reflect the structure of human thought in a certain abstract way in our verbal output. Yes, that's a long shot and speculative. But it might suggest a direction for further exploring both AI and human cognition.

            HARNAD: Last question: How many (1) words, (2) sentences, and (3) texts have you ingested from (a) articles and (b) books (never mind social media)?

            ChatGPT: As an AI language model, I don’t have the ability to access my training data, but I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. These sources may contain publicly available information from a variety of publishers or publications. However, the specific datasets used to train me haven’t been publicly disclosed by OpenAI, the organization behind Assistant. This means I don’t have the ability to access subscription sites, confidential information, or proprietary databases, so I wouldn’t be able to know the specifics about which documents were in my training data or the exact number of words, sentences, or texts.

            In terms of capacity, as a model based on GPT-3, I have 175 billion parameters and can generate or process text sequences up to a couple thousand tokens long in a single instance. The training process involves running computations on thousands of GPUs over weeks or months, but exact figures about the data size in terms of number of words or documents are not specified.

            Vector Grounding?

            Anon: LLMs are not ungrounded. They are grounded indirectly through the experiences of other people when they speak, the way a blind person's knowledge of the visual world is mediated by what they are told by (sighted) people. Blind people know a great deal about the visual world — even about color, which can only be directly experienced through vision.

            SH: You're perfectly right that the meanings of words can be grounded indirectly through language (i.e., through more words, whether from dictionaries, encyclopedias, textbooks, articles, lectures, chatting or texting — including texting with ChatGPT, the (sightless) statistical parrot with the immense bellyful of other people's words, along with the computational means to crunch and integrate those words, partly by a kind of formal verbal figure-completion). Indirect grounding is what gives language (which, by the way, also includes symbolic logic, mathematics and computation as a purely syntactic subset) its immense (possibly omnipotent) communicative power.

            But language cannot give words their direct grounding. Grounding, like dictionary look-up, cannot be indirect all the way down. Otherwise it is not bottom-up grounding at all, just circling endlessly from meaningless symbol to meaningless symbol.

            Let's recall what "grounding" is: It's a connection between words and their referents. Between "apples" and apples (in the world). "Apples" is directly grounded (for me) if I can recognize and manipulate apples in the world. But not every word has to be directly grounded. Most aren't, and needn't be. Only enough words need to be grounded directly. The rest can be grounded indirectly, with language. That's what we showed in the paper on the latent structure of dictionaries in the special issue of TICS edited by Gary Lupyan in 2016. We showed that with a "minimal grounding set" of around 1000 grounded words you could go on to ground all the rest of the words in the dictionary through definitions alone. But those 1000 grounding words have to have been directly grounded, in some other way — not just indirectly, in terms of other words and their verbal definitions. That would have been circular.

            All dictionaries are circular; indeed all of language is. All the words in a dictionary are parasitic on other words in the dictionary. Direct grounding is "parasitic" too, but not on words. It is parasitic on the sensorimotor capacity to recognize and manipulate their referents in the world. Not every word. But enough of them to ground all the rest indirectly.

            You spoke about grounding indirectly "in the experiences of others." Well, of course. That's language again. But what is "experience"? It's not just know-how. I can describe in words what an apple looks like, what to do with it, and how. But I can't tell that to you (and you can't understand it) unless enough of my words and yours are already grounded (directly or indirectly), for both you and me, in what you and I can each perceive and do, directly, not just verbally, in the world. We don't have to have exactly the same minimal grounding set. And we probably don't just ground the minimal number directly. But what is grounded directly has to be grounded directly, not indirectly, through words.

            The reason that blind people (even congenitally blind people, or almost congenitally blind and deaf people like Helen Keller) can learn from what seeing-people tell them is not that they are grounding what they learn in the "experience" of the seeing-person. They ground it in their own direct experience, or at least the subset of it that was enough to ground their own understanding of words. That was what I was trying to explain with Monochrome Mary, GPT, and Me. Indirect grounding can be done vicariously through the words that describe the experience of others. But direct grounding cannot be done that way too, otherwise we are back in the ungrounded dictionary-go-round again.

            About Mollo & Milliere's "Vector Grounding Problem": I'm afraid M&M miss the point too, about the difference between direct grounding and indirect (verbal or symbolic) grounding. Here are some comments on M&M's abstract. (I skimmed the paper too, but it became evident that they were talking about something other than what I had meant by symbol grounding.)

            M&M: The remarkable performance of Large Language Models (LLMs) on complex linguistic tasks has sparked a lively debate on the nature of their capabilities. Unlike humans, these models learn language exclusively from textual data, without direct interaction with the real world.

            SH: "Learn language" is equivocal. LLMs learn to do what they can do. They can produce words (which they do not understand and which mean nothing to them, but those words mean something to us, because they are grounded for each of us, whether directly or indirectly). LLMs have far more capacities than Siri, but in this respect they are the same as Siri: their words are not grounded for them, just for us.

            M&M: Nevertheless, [LLMs] can generate seemingly meaningful text about a wide range of topics. This impressive accomplishment has rekindled interest in the classical 'Symbol Grounding Problem,' which questioned whether the internal representations and outputs of classical symbolic AI systems could possess intrinsic meaning.

            SH: I don't really know what "intrinsic meaning" means. But for an LLM's own words (or mine, or the LLM's enormous stash of text) to mean something "to" an LLM — rather than just to the LLM's interlocutors, or to the authors of its text stash — the LLM would have to be able to do what no pure wordbot can do, which is to ground at least a minimal grounding set of words, by being able to recognize and manipulate their referents in the world, directly.

            An LLM that was also an autonomous sensorimotor robot — able to learn to recognize and manipulate at least the referents of its minimal grounding set in the world — would have a shot at it (provided it could scale up to, or near, robotic Turing Test scale); but ChatGPT (whether 4, 5 or N) certainly would not, as long as it was just a wordbot, trapped in the symbolic circle of the dictionary-go-round. (N.B., the problem is not that dictionary definitions can never be exhaustive, just approximate; it is that they are circular, which means ungrounded.)

            M&M: Unlike these systems, modern LLMs are artificial neural networks that compute over vectors rather than symbols.

            SH: The symbols of mathematics, including vector algebra, are symbols, whose shape is arbitrary. Maths and computation are purely syntactic subsets of language. Computation is the manipulation of those symbols. Understanding what (if anything) the symbols mean is not needed to execute the recipe (algorithm) for manipulating them, based on the symbols’ arbitrary shapes (which might as well have been 0’s and 1’s), not their meanings.

            M&M: However, an analogous problem arises for such systems, which we dub the Vector Grounding Problem. This paper has two primary objectives. First, we differentiate various ways in which internal representations can be grounded in biological or artificial systems…

            SH: The notion of "internal representations" is equivocal, and usually refers to symbolic representations, which inherit the symbol grounding problem. Breaking out of the ungrounded symbol/symbol circle requires more than an enormous corpus of words (meaningless symbols), plus computations on them (which are just syntactic manipulations of symbols based on their shape, not their meaning). Breaking out of this circle of symbols requires a direct analog connection between the words in the speaker's head and the things in the world that the symbols refer to.

            M&M: identifying five distinct notions discussed in the literature: referential, sensorimotor, relational, communicative, and epistemic grounding. Unfortunately, these notions of grounding are often conflated. We clarify the differences between them, and argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem.Ā 

            SH: Yes, the symbol grounding problem is all about grounding symbols in the capacity to recognize and manipulate their referents in the real (analog, dynamic) world.

            M&M: Second, drawing on theories of representational content in philosophy and cognitive science, we propose that certain LLMs, particularly those fine-tuned with Reinforcement Learning from Human Feedback (RLHF), possess the necessary features to overcome the Vector Grounding Problem, as they stand in the requisite causal-historical relations to the world that underpin intrinsic meaning.

            SH: The requisite "causal-historical" relation between words and their referents in direct sensorimotor grounding is the capacity to recognize and manipulate the referents of the words. A TT-scale robot could do that, directly, but no LLM can. It lacks the requisite (analog) wherewithal.

            M&M: We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.

            SH: It's unclear how many sensory modalities and what kind of body is needed for direct grounding of the referents of words (TT-scale), but Darwinian evolution had a long time to figure that out before language itself evolved.

            I'd be ready to believe that a radically different synthetic robot understands and means what it says (as long as it is autonomous and at life-long Turing-indistinguishable scale), but not if it's just a symbol-cruncher plus a complicated verbal "interpretation," supplied by me.