Representations on Representation: Socratic Dialogue with ChatGPT

Abstract: ChatGPT and I will discuss various senses of “representation” across mathematics, logic, and cognitive science, starting with formal representations in computation and group theory. Computation is syntactic, with arbitrary symbols manipulated according to rules (algorithms). Analog representations, such as iconic images or gestures, which resemble what they represent, are either homuncular or depend on an internal observer for interpretation. Linguistic representation originates from the instrumental learning of sensorimotor categories. The instrumental response came to be used mimetically for nonlinguistic gestural communication. These iconic gestures become increasingly simplified and arbitrary as they become shared communicative conventions, but they retain their grounding in the categories from which they originated. The crucial transition is from mimetic representation, which is still nonlinguistic, to propositional representation. Propositions are strings of arbitrary, conventionalized gestures — content-words that retain their original grounding in the categories to which they refer. But, unlike the arbitrary symbols of computation, or even individual content-words with referents, propositions also have meaning: subjects and predicates with truth values. Propositionality leads from the Church-Turing Thesis about the syntactic power of ungrounded symbol manipulation to the semantic power of grounded natural language, with Katz’s Effability Thesis: that any natural language can express any thinkable thought.

SH: Let’s chat about representations.

ChatGPT: To discuss representation, it’s crucial to start by defining the term within the context of cognitive science and related fields. Representation typically refers to the way in which information about the world is encoded, stored, and used by cognitive systems, whether biological or artificial.

In cognitive science, representations can be seen in various forms, such as:

Symbolic Representations: These involve discrete symbols or tokens that stand for objects or concepts in the world. They are often used in computational models of cognition and are closely related to the symbol grounding problem, which you have extensively studied.

Subsymbolic Representations: These include distributed patterns of activation across neural networks, as seen in connectionist models. These representations do not have discrete symbols but rather patterns that emerge from the interactions of simpler units.

Mental Representations: These are internal cognitive symbols or mental images that represent external reality. They include concepts, schemas, mental models, and images.

Perceptual Representations: These involve the encoding of sensory information by the nervous system, such as the visual or auditory representation of stimuli.

Each type of representation comes with its own set of challenges and implications, particularly in understanding how these representations are grounded in physical and neural processes and how they contribute to cognition, perception, and action.

Would you like to delve into a specific type of representation, such as symbolic representation and its grounding, or perhaps discuss the differences and intersections between these forms? Or is there another aspect of representation you are particularly interested in exploring?

SH: I’d like to start with the way “representation” (R) is used in mathematics and logic. For example, in linear algebra, matrices can be Rs of groups. What does this mean in this case?

ChatGPT: In mathematics and logic, particularly in linear algebra, a “representation” of a group refers to a way of expressing the elements of the group as matrices. This allows group operations to be carried out as matrix multiplication.

Specifically, a representation of a group G on a vector space V is a homomorphism ρ: G → GL(V), where GL(V) is the general linear group of invertible matrices over V. This means each group element g ∈ G is associated with an invertible matrix ρ(g) in such a way that the group operation is preserved: ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G.
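
To make this concrete, here is a minimal sketch (in Python with NumPy, assuming as an illustrative choice the cyclic group of order 3 represented by 2x2 rotation matrices) of a group representation and the homomorphism property it must satisfy:

```python
# Minimal sketch: the cyclic group Z_3 (addition mod 3) represented by 2x2 rotation matrices.
import numpy as np

def rho(g):
    """Map group element g in {0, 1, 2} to a rotation matrix (an invertible matrix in GL(2))."""
    theta = 2 * np.pi * g / 3
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Homomorphism check: rho(g + h mod 3) == rho(g) @ rho(h) for all g, h.
for g in range(3):
    for h in range(3):
        assert np.allclose(rho((g + h) % 3), rho(g) @ rho(h))
```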

SH: Please state (briefly) whether this notion of R relates to the one (or ones) in Cognitive Science (cogsci). But first, does this formal sense of R relate to the notion of a “model” in formal logic?

ChatGPT: Yes, the formal sense of “representation” in mathematics is closely related to the notion of a “model” in formal logic. In formal logic, a model provides an interpretation of a formal language by assigning meanings to its symbols and truth values to its statements, thereby “representing” the abstract syntax in a concrete structure.

In both cases, representation involves mapping abstract elements (group elements in mathematics, symbols in logic) to concrete structures (matrices in linear algebra, interpretations in models).
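
As a toy illustration of the logical side, here is a hedged sketch in Python: a “model” here is just an interpretation function assigning truth values to atomic symbols (the tuple encoding of formulas is an illustrative convention, not standard notation):

```python
# Minimal sketch: a model for a toy propositional language is an interpretation
# assigning truth values to its atomic symbols.
def evaluate(formula, model):
    """Evaluate a formula built from ('atom', name), ('not', f), ('and', f, g), ('or', f, g)."""
    op = formula[0]
    if op == 'atom':
        return model[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], model)
    if op == 'and':
        return evaluate(formula[1], model) and evaluate(formula[2], model)
    if op == 'or':
        return evaluate(formula[1], model) or evaluate(formula[2], model)

# The same syntactic formula is true in one model and false in another.
f = ('and', ('atom', 'P'), ('not', ('atom', 'Q')))
print(evaluate(f, {'P': True, 'Q': False}))   # True
print(evaluate(f, {'P': True, 'Q': True}))    # False
```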

SH: Let’s clarify this formal meaning of R in maths and logic. Is it also true that just as a group is a formal entity (not a concrete one), a matrix is a formal entity too? And so is a “model” in model theory. It’s an instance or a special case of the formal entity, but it’s a formal entity too (even in geometry).

ChatGPT: That’s correct. In both mathematics and logic:

Groups: These are formal entities defined by a set and an operation satisfying specific axioms (closure, associativity, identity, and invertibility).

Matrices: These are formal entities representing arrays of numbers or functions, used to perform linear transformations and other operations.

Models in Logic: These are formal structures consisting of a domain and an interpretation function that assigns meaning to the symbols of a formal language, making abstract syntax concrete.

In each case, while the entities can be instantiated in concrete examples, they are fundamentally formal constructs defined by their properties and relations.

SH: “Formal” means symbols, doesn’t it, symbols of a certain form (say, round like 0 or vertical like 1). Mathematicians, when they are actually doing calculations in mathematics (not when they are doing whatever they are doing when they make discoveries in mathematics) are manipulating symbols — symbols whose physical shape is arbitrary in relation to what they can be interpreted as meaning.

(Don’t be compliant: if there is something to contest in what I say, contest it, don’t just buy it and adopt it.) But keep track.

ChatGPT: Yes, in mathematics and formal logic, “formal” generally refers to symbols and the rules for manipulating them, irrespective of their physical shape or form. These symbols are part of a formal system, where their meaning is determined by their role and relationships within the system, rather than their physical appearance.

Here’s a point-by-point breakdown:

Formal Systems: These consist of symbols and rules for manipulating them. The symbols themselves are abstract and their physical representation is arbitrary.

Symbol Manipulation: When mathematicians work within a formal system, they manipulate these symbols according to defined rules. The symbols represent abstract entities, and their physical shape (e.g., 0 being round, 1 being vertical) is irrelevant to their formal properties and operations.

Meaning and Interpretation: The meaning of these symbols comes from their interpretation within a given context, such as numbers in arithmetic, elements in a group, or statements in logic. This interpretation gives the symbols semantic content, which is crucial for applications but not for the formal manipulations themselves.

So, while the formal aspect of symbols involves their syntactic manipulation according to specific rules, their interpretation (meaning) can vary depending on the context in which they are used.

SH: Let’s call symbols and symbol manipulations “code” and coding”. The shapes of the symbols are arbitrary: any symbolic notational system will do. They do not physically resemble their interpretations. The rules for manipulating the symbols are called algorithms. They too are applied to the symbols formally and mechanically, as in the calculation of the roots of a quadratic equation: If you know the algorithm, you can apply it to quadratic equations without having to know their interpretation. It’s just applying a formal recipe to formal ingredients.

Are you still with me? Any caveats?

ChatGPT: Yes, I’m with you. Here’s a summary with a couple of clarifications:

Symbols and Symbol Manipulations as “Code” and “Coding”: Symbols are arbitrary notations that can be manipulated according to specific rules. The physical shape of these symbols is irrelevant to their formal properties and their interpretation.

Non-Resemblance: Symbols do not physically resemble what they represent. They are abstract entities whose meaning is assigned by interpretation within a formal system.

Algorithms: These are rules or procedures for manipulating symbols.

Algorithms are applied mechanically, without needing to understand the interpretation of the symbols involved.

Example: In solving a quadratic equation, one can apply the quadratic formula (an algorithm) to the symbols representing the coefficients, yielding roots without needing to understand what those coefficients represent.

Caveats:

Understanding vs. Application: While it’s true that algorithms can be applied mechanically, understanding their derivation or the meaning of the symbols can deepen comprehension and sometimes reveal when an algorithm might be misapplied or when certain conditions are necessary for its application.

Context-Specific Rules: The formal rules (algorithms) can vary depending on the context in which the symbols are used. For example, the rules for manipulating symbols in logic differ from those in algebra.
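
For instance, a minimal sketch of the quadratic-root algorithm as a purely formal recipe (the coefficient values below are arbitrary examples):

```python
# Minimal sketch: the quadratic formula applied mechanically to the coefficients
# a, b, c, with no reference to what the equation might be "about".
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, computed purely by rule."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # (2+0j, 1+0j): the recipe works whatever the symbols "mean"
```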

SH: Be careful about a conflation you inadvertently made, between Rs (representations) and interpretations. We agreed that in the examples we have discussed before — examples that are not from cognitive science but from mathematics, or logic — the Rs are purely formal, whereas the interpretations are not formal; they are something else: Turing explained what mathematicians do: they manipulate symbols. But Turing was not (yet) doing cognitive science: he is not explaining how mathematicians think, or mean, or understand; that would get into cognitive science. He’s just talking about what mathematicians do. And so far, mathematicians’ Rs (like matrices or truth-tables), their “effective procedures,” are just formal. But their interpretations (meanings) are not.

Are we still on the same wavelength? (No need to make a lengthy update on your summary. Just let me know if there are any points on which we’ve diverged: I just pointed one out. Do you want to defend it or correct it?)

ChatGPT: Yes, I understand the distinction you’re making. You are right that in the examples from mathematics and logic:

Representations (R): These are purely formal entities. Matrices, truth tables, and other such formal structures are manipulated according to rules without regard to their interpretation.

Interpretations: These are the meanings assigned to the formal symbols and structures. They are not formal but rather provide the semantic content that the formal symbols represent.

Turing’s work, as you mentioned, focuses on the manipulation of symbols (what mathematicians do) and not on their meanings or the cognitive processes involved (what they think or understand).

We are on the same wavelength. Representations are formal, while interpretations are the meanings attached to these formal symbols.

Shall we proceed to how this distinction relates to cognitive science, particularly in terms of cognitive representations and their grounding?

SH: You’re in a big hurry! We’ve agreed that Rs are formal, just as symbols are. They are also rule-governed, but the rules are just formal too. OK? Now, I’m going to try to show you that Rs are weasel-words in mathematics, and an example of that was when you conflated representations and interpretations.

Another weasel-word in this context is “semantics” (meaning). It just takes us back to interpretations. In maths and logic “Rs” are just formal, like symbols themselves. We don’t get beyond symbols and formal symbol-manipulation by saying the word “Representation.” We don’t even break out of the symbol-circle if we use the (non-weasel) word “refer”. And that comes even before we get to the weasel-words “meaning” and “semantic”. What does the symbol “2” in Peano arithmetic refer to?

ChatGPT: Understood, let’s continue.

In Peano arithmetic, the symbol “2” refers to the formal object defined by applying the successor function S twice to the symbol for zero (0). Specifically, 2 is defined as S(S(0)). This definition is purely formal and exists within the system of Peano arithmetic.

To break it down further:

Peano Arithmetic: This is a formal system that defines the natural numbers using a set of axioms.

Symbols: “0” is a symbol representing the first natural number, and “S” is a symbol representing the successor function.

Formal Representation of 2: In this system, “2” is the result of applying the successor function twice to 0, written as S(S(0)).

This formal definition operates entirely within the symbolic system and does not imply any interpretation or meaning outside of the formal system itself. It stays within the realm of symbol manipulation.

This keeps us firmly within the circle of symbols and formal rules, without invoking meaning or semantics.
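
A minimal sketch of this point, with Peano numerals encoded as nested, uninterpreted symbol structures (the encoding is an illustrative assumption, not part of Peano's axioms themselves):

```python
# Minimal sketch: Peano numerals as uninterpreted symbol structures.
ZERO = '0'

def S(x):
    # The successor symbol applied to x: just a nested structure, no "meaning".
    return ('S', x)

TWO = S(S(ZERO))          # the formal object the numeral "2" abbreviates

def add(x, y):
    """Addition by the formal rewrite rules: x + 0 = x, x + S(y) = S(x + y)."""
    if y == ZERO:
        return x
    return S(add(x, y[1]))

print(add(TWO, TWO))      # ('S', ('S', ('S', ('S', '0')))), i.e. S(S(S(S(0))))
```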

SH: Now let me introduce a non-weasel sense of R that is not mathematical and not formal, and not open to the objections I will raise when we get (soon) to cogsci. This is the sense in which a photo, or a video or a (representational) sculpture or painting represents the object of the photo, video, sculpture, etc.: the thing that it is a photo (etc.) of. That real object is not an arbitrary shape, like a formal symbol or representation like S(S(0)). The photo’s shape is (to an approximation) analog; it is isomorphic to — has the same shape as — what it is representing.

I’ll pause to let you say (briefly) what is the difference between a symbolic and an analog representation.

ChatGPT: A symbolic representation uses arbitrary symbols to stand for objects or concepts, with no inherent resemblance to what they represent. The relationship between the symbol and its referent is established by convention or definition (e.g., “2” in Peano arithmetic).

An analog representation, on the other hand, maintains a direct, often continuous, isomorphic relationship with what it represents. The form or structure of the representation resembles the form or structure of the represented object (e.g., a photo or a sculpture).

In summary:

Symbolic Representation: Arbitrary, rule-based, and formal (e.g., mathematical symbols, words).

Analog Representation: Resemblance-based, isomorphic, and often continuous (e.g., photos, paintings, sculptures).

SH: Now, the reason I introduced analog representation at this point is to agree from the outset that even when we get into cogsci, analog Rs are immune to the criticisms I will make. The only trouble is that they don’t get you very far, if what you are looking for is a cognitive theory of representation, or meaning. It doesn’t even work for a cognitive theory of reference. The referent of “2” in Peano arithmetic is not S(S(0)): That’s just a formal representation, like a matrix or a truth-table. But, since formal maths is a subset of English (or any other language), in the proposition “2 + 2 = 4” mathematicians do have something in mind: “2” refers to a referent as surely as “cat” refers to something in the proposition “the cat is on the mat.” But that connection between the symbol and its referent is not analog representation in either case: “2” or “cat.” So that cannot be cogsci’s solution.

(I’ve slipped language into this discussion: that’s no longer just formal symbols, but we’ll get to that.) 

ChatGPT: Here’s a brief summary to ensure we’re on the same page:

Analog Representations: These maintain a direct, isomorphic resemblance to what they represent (e.g., photos, paintings). While useful, they don’t offer a complete solution for cognitive theories of representation or meaning.

Symbolic Representations: These are arbitrary and formal (e.g., mathematical symbols, words). They require interpretation to establish their referents.

Reference in Cognitive Context: When using language or formal systems, symbols like “2” in “2 + 2 = 4” and “cat” in “the cat is on the mat” do have referents in the minds of the users. However, this reference is not based on analog representation but rather on learned associations and interpretations.

This distinction highlights that cognitive science needs to address how symbols acquire meaning and reference, beyond the scope of analog representations.

Let’s recap the discussion:

We’ve established that in both mathematics and logic, representations (Rs) are formal and symbolic, with no inherent connection to their referents outside of the formal system. Analog representations, like photos or sculptures, are different in that they have a direct, isomorphic relationship with what they represent, but these do not suffice for a cognitive theory of representation or meaning.

Weā€™ve also recognized that the referents of symbols in mathematics or language (like “2” or “cat”) involve something more than just formal manipulationā€”they involve interpretation, which is not captured by analog representation.

Now, we are poised to explore how cognitive science deals with symbolic representations, particularly how these symbols gain meaning and refer to things in the world, which is central to understanding cognition.

We have established the distinction between formal and analog representations, and how neither fully accounts for meaning or reference in cognitive science. Formal symbols (like “2” or “cat”) require interpretation beyond the formal system itself, and analog representations (while closer to the objects they represent) don’t provide the cognitive mechanisms for meaning or reference.

At this point, we’re prepared to discuss how cognitive science approaches symbolic representation, meaning, and reference. Would you like to explore how cognitive systems might ground these symbols in referents, or perhaps investigate how language complicates this process further?

SH: Yes, let’s start with the difference between reference and meaning. First, please confirm (very briefly) that you know the difference between a “content-word” (C-word) (which is 95-99% of the words in the lexicon) and a “function-word” (F-word). Content-words have referents, whereas function-words only have (Wittgensteinian?) “uses”, defined by a formal, usually syntactic, rule.

ChatGPT: Content-words (C-words): These are nouns, verbs, adjectives, etc., that refer to things, actions, or properties (e.g., “cat,” “run,” “blue”).

Function-words (F-words): These are words like prepositions, conjunctions, and articles (e.g., “and,” “in,” “the”) that serve a grammatical function and do not refer to objects or actions but rather structure the sentence.

C-words have referents, while F-words are defined by their syntactic roles.

Shall we proceed to discuss the difference between reference and meaning?

SH: Do we agree also that C-words do not resemble their referents, even in languages with partly iconic scripts like Chinese, or languages like the sign languages of the deaf, whose gestures are partly iconic in their origins and execution? Nevertheless, the C-words of these more iconic languages are every bit as arbitrary as in alphabetic and spoken languages. Shakespeare can be fully translated into Chinese or into American Sign Language (ASL) — not word for word, but proposition for proposition. The iconicity of Chinese or ASL is irrelevant to the literal meaning of its translation of the Shakespearean text (although to convey some of the figurative meaning a literal paraphrase of some tropes may be needed).

ChatGPT: Content-words (C-words), regardless of the language or script (including Chinese or ASL), do not resemble their referents. Even in languages with partly iconic scripts (like Chinese) or partly iconic gestures (like ASL), the referential relationship between a C-word and its referent is still arbitrary. Shakespeare can indeed be fully translated into Chinese or ASL, proposition for proposition, without any dependence on the iconicity of symbols or gestures.

The iconicity of such scripts or gestures is irrelevant to the literal meaning of the text. Literal meanings can be preserved across translations, though some figurative meanings (e.g., metaphors, idioms) may require paraphrasing to convey the same sense.

SH: You’re just echoing, but you’re right. One further question: content-words have referents and propositions have meanings. What’s the difference?

ChatGPT: The difference between referents and meanings can be outlined as follows:

Content-word Referents:

The referent is the specific object, action, or concept in the world (or imagined world) that a content-word (C-word) refers to. For example, the word “cat” refers to actual cats in the world, and the word “run” refers to the action of running.

Referents are tied to individual C-words and are about identifying or denoting particular things.

Propositional Meanings:

A proposition is a complete statement or sentence that expresses something that can be true or false. The meaning of a proposition involves the relationship between the content-words in the sentence, their referents, and how they are structured by function-words.

Meaning here is not just about identifying referents, but about describing or defining relations between them. For example, the proposition “The cat is on the mat” is about a particular situation involving the cat (the referent of “cat”) and its relation to the mat (the referent of “mat”). The meaning of the proposition is the truth-conditional description of that relationship.

In short:

Content-words have referents: They point to things.

Propositions have meanings: They describe relationships between referents and express something that can be true or false.
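
A toy sketch of that distinction, assuming an invented miniature “world” of facts: content-words are looked up as referents, and the proposition as a whole gets a truth value relative to that world (all names below are illustrative assumptions):

```python
# Minimal sketch: content-words pick out referents; a proposition predicates a
# relation of them and is true or false of a given state of affairs.
world = {('on', 'cat', 'mat'), ('on', 'cup', 'table')}   # assumed toy "world" of facts

referents = {'the cat': 'cat', 'the mat': 'mat', 'the cup': 'cup'}

def true_in(world, predicate, subject, obj):
    """Truth value of the proposition 'subject PREDICATE object' in this world."""
    return (predicate, referents[subject], referents[obj]) in world

print(true_in(world, 'on', 'the cat', 'the mat'))   # True
print(true_in(world, 'on', 'the cup', 'the mat'))   # False
```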

SH: I had to sort this out before getting to cogsci, because language is such an important part of human cognition. Now, there are several ways in which the word “representation” — particularly in two contexts, “internal representations” (IRs) and “neural representations” (NRs) — has slipped into cogsci.

ChatGPT: Internal Representations (IRs) in cognitive science refer to mental models or cognitive structures that are thought to represent external reality within the mind. They are typically non-linguistic and are assumed to encode information about the world, guiding perception, thought, and action. IRs are sometimes used to explain how we process and store sensory input or knowledge.

Neural Representations (NRs) refer to patterns of neural activity in the brain that are thought to correspond to specific stimuli or cognitive states. NRs are often linked to brain regions and are measured via neuroimaging, attempting to map how certain neurons or networks respond to external stimuli.

Relation to linguistic reference and meaning:

IRs may play a role in linking language (e.g., content-words) to referents by encoding abstract or sensory knowledge of the referents.

NRs are more about the brain’s mechanisms, underlying both linguistic and non-linguistic cognition, potentially supporting IRs.

Neither IRs nor NRs are themselves referents or meanings; they are posited cognitive or neural processes theorized to support how we manage linguistic reference and propositional meaning.

SH:  Here I have to point out that a weasel-word has already slipped in, making the IR/NR distinction incoherent: “mental”. And another has been left out: “cerebral”. What is it that the “Internal” Representations are internal to? The head or the “mind”? And if the “mind,” what is that? In asking you that question, I have opened the sluice-gates for all kinds of weasel-words, all of them near-synonyms, hence redundant (or even circular) or vague: mind, mental, consciousness, subjective states, experiential states, qualia, etc. etc. Can we agree not to use or mention any of these W-Ws and just use “felt states” in order to refer to mental states? (“Sentient” is an unobjectionable gloss when we are in latinate mood…)

ChatGPT: Yes, we can agree to use “felt states” to refer to what are typically called “mental states,” avoiding weasel-words like “mind,” “mental,” “consciousness,” etc. This will keep the discussion grounded and avoid both the vagueness and the redundancy that often come with those terms.

Now, with this clarified:

Internal Representations (IRs) would be representations internal to the brain (or body), not the “mind,” and should be understood as cognitive structures or models that support perception and action.

Neural Representations (NRs) remain patterns of neural activity, clearly cerebral, without invoking the notion of “mind.”

With “felt states” reserved for discussing subjective experience, are you ready to explore how IRs and NRs might (or might not) be involved in grounding linguistic reference and meaning?

SH: You have not resolved the incoherence yet. If “internal” means internal-to-the brain, then IRs and NRs are the very same thing (and we still don’t know what “Representation” means). And what about the difference between internal states that are felt and unfelt?

ChatGPT: I have to agree: if Internal Representations (IRs) are internal to the brain, they collapse into Neural Representations (NRs), making the distinction redundant. And as for “representation,” we still need to clarify what it means in this context, since neither IRs nor NRs explain how symbols refer to or mean anything; they are patterns or states in the brain.

Regarding felt vs. unfelt states:

Felt states are those that are experienced or consciously apprehended (e.g., feeling pain, seeing red, or understanding a proposition).

Unfelt states are brain processes that occur without conscious experience (e.g., background neural activity or automatic processing).

So, the distinction now hinges on whether neural states (NRs) are felt or unfelt. Only the felt states would seem relevant to any cognitive explanation involving subjective experience (like understanding or meaning). Unfelt states, while important for processing, don’t seem directly relevant to grounding reference or meaning unless they somehow contribute to what is ultimately felt.

Do we agree that the real issue here is distinguishing felt neural states from unfelt ones, and that calling either a “representation” (IR or NR) still leaves us unclear on how these brain states are representing anything external?

SH: More or less. Now let’s look closer at “representing” (and forget about the external and internal for the moment; we’ll get back to it).

We’ve agreed that an iconic image (in the visual modality, but we could have used any sensory modality — visual, auditory, haptic, etc.) is a visible object, say, a shape, that resembles, in the analog sense (i.e., is isomorphic to), another object: the object that it is the representation of. This fits the case of a mirror that is facing a sphere. Let’s call the sphere the “object” and the shape in the mirror its “image.” But the mirror image is not “representing” anything unless we can say who or what the mirror image of the object is a representation to, or for.

ChatGPT: Agreed. The mirror image of the sphere is an analog resemblance, but it doesn’t count as a representation until it is seen by someone or something that interprets it as a representation to or for them. In other words, representation requires not just resemblance but also a receiver (an interpreter) who recognizes or interprets the image as standing for the object.

SH: I’m not sure what an “interpreter” is here. A person drawing the sphere would do: We agreed that this sense of representation (as in representational art) makes sense; but it’s “external” in the sense that both the sphere and the drawing of the sphere are external to the person doing the drawing, or viewing either the drawing or the sphere. So is the IR the internal process that generates the drawing? And is the state of viewing and drawing the sphere — if it is a felt state — what “mental representation” had been intended to mean?

ChatGPT: In this context:

Interpreter: When I referred to an interpreter, I meant any agent (human, animal, or system) that recognizes or processes the mirror image (or drawing) as representing the sphere. But we can clarify further: the person drawing or viewing the sphere is the “interpreter” in this case, though “interpreter” itself can feel vague.

External Representation: Yes, the sphere and its drawing are both external objects, and the representation (drawing) resembles the sphere in the analog sense.

Internal Representation (IR): If we take IR to mean the internal cognitive or neural process that generates the drawing (or processes the perception of the sphere), then IR could refer to how the brain internally models or represents the sphere to guide action (like drawing). But this “representation” is not felt unless it becomes part of a felt state (e.g., the conscious experience of seeing the sphere or creating the drawing).

“Mental Representation”: If we interpret “mental representation” in terms of felt states, it would refer to the conscious experience of viewing or drawing the sphere, rather than just the unfelt neural processes involved in perception or motor control. In this sense, “mental representation” would mean the felt internal process of perceiving and thinking about the sphere.

Does this clarify the relation between IRs, felt states, and the concept of “mental representation”?

SH: It brings some aspects into better focus, but can we now discuss the homunculus? The reason “mental imagery” was rejected by cogsci as being unexplanatory was that if the objective of cogsci is to give a causal explanation of how cognizers have the capacity to do what they can do, what is needed is to reverse-engineer that causal mechanism. The brain can do it, but we don’t know how. Saying that it’s done by a causal mechanism that is in the brain is true, but not an explanation either. Does it help to add that “exercising the capacity to do it is a felt state”? When we are exercising it, we are “in” that felt state. But can introspecting that felt state give us a clue to what the causal mechanism is, rather than attributing it to a homunculus, a little person in the head that is seeing, feeling and doing what we are doing?

ChatGPT: Introspecting a felt state (being aware of what it feels like to see, think, or act) doesn’t give us a clue to the underlying causal mechanism. It just reveals that we are experiencing it. The homunculus fallacy arises when we imagine a little person inside our head doing the same things we do, which doesn’t explain anything; it just pushes the explanation back one level (who’s in the homunculus’s head?).

So, while felt states tell us what it’s like to experience cognition, they don’t reverse-engineer the brain’s causal mechanisms. Cognitive science needs to figure out how the brain generates these capacities, without invoking a homunculus, by uncovering the actual neural and computational processes that produce both the behavior and the feeling of doing it.

SH: So now a first punchline: Is the notion of an IR homuncular?

ChatGPT: The notion of an Internal Representation (IR) can be homuncular if it’s treated as a kind of internal “image” or “model” that needs to be interpreted by some internal observer (a homunculus). This just shifts the problem: who or what interprets the IR? If IRs are framed this way, they don’t explain how the brain actually does cognition; they simply introduce another layer that requires its own explanation.

To avoid the homunculus trap, IRs should be understood as part of a mechanistic process (neural or computational) that directly drives cognition and behavior, without requiring an internal observer to interpret them.

SH: But then why is this non-homuncular IR called a “representation” at all? Who is representing what to whom? Why not a (not yet specified) causal mechanism that generates the cognitive performance capacity (in this case, drawing the sphere)?

ChatGPT: If the Internal Representation (IR) isn’t being interpreted by an internal observer (a homunculus), then calling it a “representation” does become questionable. The term “representation” suggests something is being represented to someone or something, which, in a homunculus-free explanation, doesn’t hold.

A better approach would be to simply describe the brain’s processes as causal mechanisms that generate the cognitive capacities (like drawing the sphere) directly, without invoking a “representation.” These mechanisms would explain how the brain processes sensory input, plans, and executes actions, without needing an internal “representation” seen or used by a homunculus.

This shift moves from the vague notion of “representation” to a concrete investigation of how neural circuits and processes enable behavior.

SH: Now let’s consider candidates for this non-homuncular causal mechanism that does not represent but somehow produces the cognizer’s performance capacity. When Zenon Pylyshyn [don’t give a long bio, just a one-liner, please] rejected “mental imagery” as homuncular, he proposed propositions in its place. What can you say for or against internal propositions?

ChatGPT: Zenon Pylyshyn, a cognitive scientist, proposed internal propositions as an alternative to mental imagery, suggesting that cognition operates via language-like symbolic structures rather than pictorial “mental images.”

FOR internal propositions:

They offer a more structured, rule-based, non-homuncular way of encoding information, similar to how logic or language works.

Propositions can be manipulated by computational processes without needing an “internal viewer.”

AGAINST internal propositions:

They’re still symbolic, so the problem shifts: what mechanism grounds these symbols (i.e., how do they get their meaning or reference)?

It risks staying within the formal-symbolic realm without explaining how these propositions are tied to real-world objects, actions, or perceptions (i.e., the symbol grounding problem).

Would you like to consider other candidates?

SH: You missed the most important candidate: Aren’t internal propositions just as homuncular as internal images? Who is interpreting the propositions, for whom?

ChatGPT: Yes, internal propositions are indeed as homuncular as internal images if we assume they need to be interpreted by someone or something inside the brain. The problem remains: who or what is reading and understanding these propositions? Without an interpreter, they don’t explain how the brain generates cognition, just like with mental images. They shift the burden of explanation without solving the core issue.

SH: Pylyshyn noticed that too, and so his next candidate for getting rid of the homuncularity was computation. What is computation? I suggest using Turing’s account as the simplest.

ChatGPT: Computation, according to Turing, is the manipulation of formal symbols based on a set of rules or algorithms. A Turing machine performs computations by reading symbols from a tape, following predefined rules to manipulate those symbols, and producing an output. Crucially, the process is mechanical and doesn’t require understanding of the symbols’ meaning; it is purely syntactic symbol manipulation.

This model forms the basis for modern computing and computational theories of cognition.
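
A minimal sketch of such rule-based symbol manipulation (a toy machine, with an assumed rule table that appends a stroke to a unary numeral; the machine has no access to what the strokes might be interpreted as meaning):

```python
# Minimal sketch: a rule table rewriting marks on a tape, blindly and mechanically.
def run(tape, rules, state='start'):
    tape = list(tape) + ['_']          # '_' is the blank symbol
    head = 0
    while state != 'halt':
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape).rstrip('_')

# Example rules: scan right over a unary numeral and append one more stroke
# (interpretable as computing the successor, but the machine never "knows" that).
rules = {
    ('start', '1'): ('start', '1', 'R'),
    ('start', '_'): ('halt',  '1', 'R'),
}
print(run('111', rules))   # '1111'
```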

SH: Good. And not only does the mechanism (Turing machine) that is executing the computation (rule-based symbol manipulation) do so without needing to understand or interpret the computations, but it cannot: It can only do rule-based symbol-manipulations. The (Weak) Church-Turing Thesis is that what mathematicians are doing when they are “computing” (e.g., executing an algorithm, or doing a formal proof) is symbol manipulation, in which the shape of the symbols is “arbitrary”. Arbitrary relative to what? Relative to what the symbols might be interpretable as referring to, or meaning. Do you see where we are heading?

ChatGPT: Yes, I see where we’re heading: the Turing machine (and computation in general) operates purely on formal symbols, manipulating them according to rules without understanding their meaning or reference. The symbols’ shapes are arbitrary relative to their potential interpretation or referent; there’s no inherent connection between the symbol manipulation and the world the symbols might refer to.

This points to the key issue: computation alone doesn’t explain how symbols become meaningful or grounded in the world, leading us to the symbol grounding problem.

SH: That’s more or less right. But let’s not overstate the absence of any “inherent connection” between the symbols and what they may be interpretable as either referring to (in the case of symbols) or what they may be interpretable as meaning (in the case of symbol strings that are interpretable as subject/predicate propositions with truth values). [Remember, we discussed that above?] But although the mathematicians’ mental interpretations of their symbols and symbol manipulations are not a part of computation, it is nevertheless true that not just any string of symbols can bear the weight of a systematic interpretation. (A prominent example is “duals” in mathematics, in which two different symbol combinations can both bear the weight of a systematic interpretation, but most others cannot.)

ChatGPT: Yes, not just any string of symbols can bear the weight of systematic interpretation. Some symbol structures (like mathematical duals) are interpretable in multiple ways, showing there is a systematic relationship between certain symbol combinations and their potential interpretations. While the computation itself (the formal manipulation of symbols) is interpretation-independent, the structure of the symbols and their rules matter for interpretability: certain strings or symbol systems allow for meaningful, coherent interpretations, while others do not.

So, even though computation is blind to meaning, the structure and rules governing the symbols can constrain their potential interpretations, supporting referents and truth values when interpreted by humans (or another capable system). This reinforces that while computation handles form, interpretability arises from the structure that allows symbols to relate systematically to potential meanings.

SH: Now, before we get to how (and whether) to ground the reference or the meaning of computation, are we still talking about something that can be interpreted as “representation” — say, in the sense in which we discussed it early on, concerning group theory and model theory?

ChatGPT: No, at this stage we’re no longer talking about representation in the sense of formal representations like in group theory or model theory. In those cases, representation involved a formal system (e.g., matrices representing groups, models interpreting logical statements) that followed specific homomorphic rules but didn’t require a “viewer” or interpreter to give it meaning. It was strictly formal and syntactic.

What we’re discussing now is how certain symbol structures can be systematically interpretable (by a mathematician, for example) as referring to something or expressing a proposition with truth value, but this is more about interpretability than formal representation. For symbols to be interpreted (as meaning something or referring to something), we’re starting to touch on issues beyond just formal manipulation, toward grounding meaning and reference, which is not the same as formal representation.

Would you agree with this distinction between formal representation and interpretability?

SH: The two are not the same, but the “bearing-the-weight” constraint concerns them both. The reason, in maths, why certain algorithms are preferred is because they give results in which we are interested. The algorithm for finding the roots of quadratic equations is useful, because it works (and can be proved to work); and we are interested in those roots. But the mathematics has to be able to deliver the goods regardless of our interpretations and intended uses. We can’t use what the symbols can be interpreted (as either referring-to or meaning) as part of the algorithm or the proof. Computation is purely syntactic. Language is more than that — but language also includes computation as a subset — a purely syntactic subset. Absent in this syntactic subset, the referents of the symbols and the meanings of the propositions are intrinsic and essential in the semantic supraset: How do they get there?

ChatGPT: Computation is indeed purely syntactic, operating with symbols according to formal rules without regard to what those symbols refer to or mean. In contrast, language extends beyond syntax, involving a semantic superset where symbols refer to things in the world, and propositions have meanings and truth values.

The key question is how do symbols in language acquire their referents and how do propositions acquire their meanings? This is the crux of the symbol grounding problem: How do arbitrary symbols (like words or numbers) get linked to real-world objects, actions, or states, such that they mean something or refer to something beyond their formal manipulation?

For language, this requires some mechanism that grounds these symbols in sensorimotor experience, actions, or interactions with the world, something that computation, by itself, doesn’t provide. Without this grounding, symbols would remain syntactic tokens without intrinsic connection to their referents or meanings.

Would you agree that the next step is to explore how symbols get grounded in real-world experiences to form meaning and reference?

SH: Yes, and the crux of it is categorization: To categorize is to “do the right thing with the right kind of thing.” “Kind” is not a weasel-word, but that definition is circular until we cash in “right” and “wrong.” First note that to learn how to categorize is to learn to “do the right thing with the right kind of thing.” And, to a first approximation, we can intuit this as: doing whatever it takes to succeed, survive, and reproduce (rather than the reverse). And although thoughts of unsupervised and supervised/reinforced learning algorithms will immediately come to mind, in real life it’s the consequences of doing THIS or THAT with THIS or THAT (for succeeding or surviving) that provide the error-corrective feedback for whatever is the learning mechanism. Now do you want to sing the next verse, or shall I?

ChatGPT: I’ll take a shot at the next verse:

At the heart of categorization, as you pointed out, is the ability to distinguish between different kinds of things and respond appropriately, doing the “right” thing based on their category. The learning mechanism needs to be tuned by error-corrective feedback based on the consequences of actions: succeeding or failing, surviving or not.

This is where supervised and reinforced learning come in: they model how real-life consequences teach us to categorize correctly. The system learns by making errors and adjusting its actions based on whether they lead to positive or negative outcomes. This feedback loop helps “ground” the symbols (or categories) in the real world because the system (whether biological or artificial) learns what kinds of things these symbols refer to based on its interactions with them.

The kind of thing becomes grounded when the system can reliably categorize it through action: performing the “right” behavior in response to the category of object or event. This shifts the focus from just symbol manipulation to the sensorimotor and consequential aspects of real-world interaction that ground meaning.
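
A minimal sketch of such error-corrective category learning, assuming toy two-feature “mushroom/toadstool” data and a perceptron-style update (one illustrative choice of learning mechanism, not the only one):

```python
# Minimal sketch: error-corrective category learning with assumed toy features.
# Outcome +1 = "edible mushroom" (eat), -1 = "toadstool" (avoid).
samples = [([0.9, 0.1], +1), ([0.8, 0.2], +1), ([0.2, 0.9], -1), ([0.1, 0.8], -1)]
w, b = [0.0, 0.0], 0.0

for _ in range(10):                          # repeated trials with feedback
    for x, outcome in samples:
        act = +1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
        if act != outcome:                   # mis-categorization has consequences
            w = [w[i] + outcome * x[i] for i in range(2)]
            b += outcome                     # error-corrective update

print([(+1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1) for x, _ in samples])
# [1, 1, -1, -1]: the learner now does the right thing with the right kind of thing
```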

Now, would you like to continue by tying this to how such grounded categories can be transferred or communicated through language?

SH: Right direction, but getting a little ahead of yourself. What we’re trying to ground is symbols. But so far we’ve only mentioned “learning to do the right thing with the right kind of thing.” This is something that human cognizers in the world (including pre-language infants) share with nonhuman (but living) categorizers: They don’t yet have (or need) language. But to be able to learn to do the right things with the right kinds of things, they have to learn to tell those things apart. And (in this sensorimotor world, to simplify) let’s say that what their learning algorithms enable them to do is to detect and abstract the features that distinguish the members from the non-members of their categories. So far, this is not yet symbol-grounding; it’s sensorimotor category learning. Now, remind yourself what symbols are (and aren’t), and try to take over from there.

ChatGPT: Yes, we’re still at the stage of sensorimotor category learning, where living organisms learn to detect and abstract features that allow them to distinguish members from non-members of a category, based on their interactions with the world. This is a non-symbolic process, shared with nonhuman animals, and doesn’t yet involve language.

Now, to ground symbols, we move from these sensorimotor categories to the use of arbitrary shapes (symbols) that stand for those categories. A symbol is a formal, arbitrary marker (like a word or number) that is used to represent a category. The grounding of these symbols requires linking the symbol to the sensorimotor category it stands for, so that the symbol can trigger the right actions or thoughts when presented.

This transition from sensorimotor to symbolic involves:

Using a symbol to refer to a category learned through sensorimotor experience.

Ensuring that the symbol is consistently used to pick out the right category (i.e., grounded in the distinguishing features of that category).

Without this grounding, the symbol is just a meaningless token. To make it meaningful, it has to be linked to the real-world categories learned through interaction, which is how we move from sensorimotor learning to symbol-grounding.
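
A minimal sketch of that linking step, assuming a feature detector like the one a learner might have acquired above; the token 'gork' is an arbitrary, made-up symbol whose only connection to the category is through the detector:

```python
# Minimal sketch: grounding an arbitrary symbol in a learned category.
def is_edible(x, w=(0.7, -0.8), b=0.0):          # assumed, already-learned category detector
    return x[0]*w[0] + x[1]*w[1] + b > 0

lexicon = {'gork': is_edible}                     # 'gork' is arbitrary; this link is the grounding

def applies(token, thing):
    """Does this token pick out this thing? Grounded only insofar as the detector does the work."""
    return lexicon[token](thing)

print(applies('gork', [0.9, 0.1]))   # True: the token picks out the edible kind
print(applies('gork', [0.1, 0.8]))   # False
```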

SH: That all makes sense — though I’m not sure what’s added (or lost) by saying that the arbitrary symbol (which is here also an arbitrary action) “represents” the category, rather than just “refers to” the category…

ChatGPT: You’re right to question the distinction. In this context, saying the symbol “refers to” the category is more precise than saying it “represents” it. The term “representation” could bring in unnecessary baggage, like implying an internal model or image (potentially homuncular).

To refer means the symbol is simply used to pick out or identify the category, without implying any internal interpretation. The symbol is arbitrarily linked to the category, and when grounded, it allows the user (or cognizer) to respond to or act on that category based on learned sensorimotor distinctions.

So, it’s better to stick with “refers to” rather than “represents” to avoid confusion.

SH: I hit the carriage-return too early: I was going to let you off the hook with “represents” by conceding that I am myself guilty of the same kind of loose talk in referring to “representation” as “referring”. Why would nonlinguistic species, having learned many categories (learning to do the right thing with the edible mushrooms and not the poisonous toadstools), bother to go on and learn an arbitrary action for them as well? Referring is as remote from a motivation here as representing would be. With representing, the question was “representing what for whom and why?”. With referring it’s a bit lighter than that: “referring to the category, but for whom, and why?”

(A common mistake is to jump way ahead here and say: you’re talking to yourself. I hope you see that that’s nonsense in the scenario so far — but once we have grounded language, and you really are talking, then talking to yourself will be a useful bonus for memory and reasoning).

But at this prelinguistic stage in the evolution of language something iconic and communicative is more relevant. Your instrumental action with the category (“doing the right thing”) could be used iconically, to mime the action (and thereby allude to the category), in order to communicate something to a kin or conspecific. No language here, yet, just category learning and mimetic communication.

ChatGPT: I agree. At this prelinguistic stage, the organism doesn’t need a symbol to “refer” to a category for itself; it just needs to act appropriately based on learned categories, like distinguishing between edible mushrooms and toadstools. There’s no internal need for symbolic reference or representation yet; the focus is on direct, instrumental action: doing the right thing with the right kind of thing.

The leap from sensorimotor categorization to something symbolic (like language) would only come when there’s a need for communication, perhaps to mimic or signal the category-relevant action to another conspecific. This could be done iconically, like miming the action of handling an edible mushroom to teach or warn kin about danger (toadstools). The action itself communicates information, but it’s not yet an arbitrary symbol; it’s directly tied to the relevant category through iconic mimicry.

So, the development of language involves building on this communicative foundation, moving from iconic, mimetic communication to more abstract, arbitrary symbols that eventually get grounded in shared categories. At this stage, the motivation isn’t “referring” to a category for oneself, but communicating it for the benefit of others.

Would you agree this sets the stage for the eventual need for language?

SH: The communication has a purpose, but it’s not yet linguistic. Yet it is a very natural development, once you have learned an instrumental action (the right thing to do with the right kind of thing — something between the learner and the category to be learned), to go on to mime the action, which is rather like pointing to draw attention to something. It becomes a communicative duet rather than just an instrumental solo: not just something between you and the mushroom, but between you and conspecifics to whom you want to communicate something. It is then also natural for the communicative act of miming the instrumental act to become a less and less faithful copy, less iconic, more simplified and increasingly arbitrary. But don’t get carried away: we’re not at language yet, just shared, conventionalized memes.

ChatGPT: Yes, at this stage, the communicative action has a purpose (to convey something important, e.g., danger or food, to conspecifics), but it’s not yet linguistic. The transition from purely instrumental action (e.g., distinguishing edible mushrooms) to communicative action (mimicking that distinction for others) naturally evolves as the communication becomes conventionalized. Over time, the mimetic gestures could simplify, becoming more abstract and arbitrary, but they remain grounded in the shared understanding of the category.

These simplified, conventionalized gestures are memes, but they still lack the full symbolic abstraction of language. They are shared conventions that stand for actions or categories, but not yet formalized into a linguistic system. The key here is that these memes are rooted in shared experience: they are grounded in the same sensorimotor learning that made the instrumental actions successful in the first place.

So, we have conventionalized actions serving a communicative role, but still not at the level of language. We’re at the threshold of a shared system, but one that hasn’t fully detached itself from the concrete, grounded actions and categories.

Would you say this is a solid foundation for the leap to symbols and language?

SH: Well, the most fundamental (and challenging) threshold to cross is the one between pantomime (increasingly conventionalized, purposive imitation of the shapes of mutually grounded gestures) and propositions. Propositions are sequences of gestures that assert — i.e., tell — rather than just show.

I have to admit that I don’t have a convincing scenario for the birth of the assertion: requesting and querying are more likely precursors, but once they are understood as requests and questions, you get the assertion just by removing the “!” or the “?”. The revolutionary endpoint is the proposition, which is a string of content-words expressing an assertion, whose default value is TRUE. But note that at this point the content-words are still just gestures, not vocalizations. And with good reason. (Why?)

ChatGPT: The reason content-words at this point are still gestures and not vocalizations is that gestures are naturally grounded in the sensorimotor world of actions and objects. Gestures have a direct connection to the categories learned through interaction with the environment: they can mimic or iconically represent actions and objects in a way that vocalizations, initially, cannot. Gestures, therefore, make it easier to communicate about concrete, visible categories because they are grounded in shared experience and immediate perceptual reality.

Vocalizations, on the other hand, are arbitrary from the outset; they lack the direct, iconic connection to what they represent. Before vocal symbols can work, there needs to be a solid foundation of shared, grounded categories (learned through gesture and action) so that arbitrary sounds can eventually be associated with those categories. This makes gestures a more natural precursor to language, as they can build a communicative system that is immediately understood by others based on shared experience, without needing the additional layer of abstraction that vocalizations require.

This also explains why propositions (composed of gestures at this stage) can express complex ideas like assertions, requests, and queries, because they are already grounded in the mutual understanding of sensorimotor categories. Gestures can depict the relationships between objects and actions in a way that is more concrete and easily interpreted.

Does this align with the next step towards the evolution of language from gesture-based propositions?

SH: Yup. And while they are still mimetic gestures, purposive pantomime, they do really represent; but once they cross the threshold from iconic showing to T/F subject/predicate telling, they are language.

Now, what is Jerrold Katz’s “Effability Thesis” about the power of language, and how is it related to the Weak and Strong Church-Turing Thesis about the power of computation?

ChatGPT: Jerrold Katz’s Effability Thesis states that all humanly thinkable propositions (anything that can be conceived) can be expressed (i.e., made effable) in language. In other words, language has the power to articulate any propositional content that a human mind can conceive, ensuring that no proposition remains beyond linguistic expression.

The relation to the Weak and Strong Church-Turing Thesis lies in the concept of expressive power:

Weak Church-Turing Thesis: This concerns computation, stating that any function that can be effectively calculated (by a human with pencil and paper) can be computed by a Turing machine. It’s about what can be computed.

Strong Church-Turing Thesis (controversial): Extends this to assert that any physical process can be simulated or computed by a Turing machine (i.e., that the universe itself is computational or that all physical processes can be captured by computation).

Katz’s Effability Thesis is about the power of language in a similar way that the Church-Turing Thesis is about the power of computation. Katz claims that language can express all possible propositional content, while the Church-Turing Thesis claims that computation can model or simulate all effective procedures (WCT-T) or, in its strong form, all physical processes (SCT-T).

Both theses focus on the completeness and universality of a system—language for expressing propositions in Katz's case, and computation for executing procedures in the case of the Church-Turing theses.

Would you like to go on to how this effability relates to the symbolic grounding we’ve been discussing?

SH: Sure, be my guest.

ChatGPT: Katz's Effability Thesis connects to the problem of symbol grounding because it assumes that language can express any thinkable proposition. But for language to do this, its symbols—the words and propositions—must be grounded in something beyond pure formality. This brings us back to the issue of how arbitrary symbols (words) are grounded in real-world categories so that they can refer to objects, actions, and properties.

Here's how it links to symbol grounding:

Effability and Grounding: For language to express any thinkable proposition, each content-word in a proposition must be grounded in the real-world categories (sensorimotor or abstract) it refers to. Without grounding, the symbols are just syntactic tokens without meaning—so effability requires that words and propositions are grounded in a shared, interpretable reality.

Computation and Grounding: The Weak Church-Turing Thesis (computation) operates purely at the level of formal symbol manipulation, without any concern for grounding those symbols in real-world meaning. Computation manipulates symbols blindly. Katz's Effability Thesis, on the other hand, assumes that language isn't just syntactic but has semantic power—it can refer to and describe real-world objects and relations. Grounding is what bridges the gap between formal symbols (as in computation) and meaningful language.

Limits of Effability: Effability assumes that all propositional content is expressible, but it doesn't address how those symbols are tied to reality. The Effability Thesis presupposes that this grounding is in place. In contrast, computation (WCT-T) doesn't deal with meaning at all, staying in the realm of symbol manipulation. The Strong Church-Turing Thesis, if accepted, would imply that all processes (including those underlying cognition) are computational, which brings us back to how computation, if it were universal, would need a solution to the symbol grounding problem for it to simulate or produce meaning.

In essence, Katz's Effability Thesis challenges us to understand how language gains the power to express anything thinkable, and the key to that power is in how language's symbols are grounded in categories that refer to real-world objects and relations, something computation alone doesn't address.

Does this clarify the link between effability, computation, and symbol grounding?

SH: Somewhat. Do you have some closing reflections on the many senses of "representation" we have worked through here?

ChatGPT:  Sure. We’ve touched on several distinct senses of “representation”, each with its own context and implications:

Formal Representation (Mathematics/Logic):

In mathematics (e.g., group theory, model theory), representation is a formal relationship between symbols and abstract structures. Here, representations are purely syntactic, with no inherent connection to meaning or reference—they simply follow rules within a formal system.

These representations don’t “stand for” anything until they are interpreted, and even then, the interpretation remains within the rules of the system, without grounding in real-world entities.

Analog Representation (Iconic):

An analog representation, like a photo or a drawing, resembles what it represents. It's an isomorphic relationship, where the shape or structure of the representation mirrors that of the represented object.

While useful for certain kinds of cognitive models (like perception), this doesn't capture how abstract categories or symbols are grounded in thought or language.

Internal Representations (IRs) and Neural Representations (NRs):

Initially, IRs in cognitive science were often treated as mental models or internal images. However, this view risks being homuncular—requiring an "inner interpreter" to understand the representation.

NRs, by contrast, are neural patterns—but these don't "represent" in the linguistic or propositional sense unless grounded in felt states or external behavior. Both IRs and NRs fail to explain how symbols or cognitive states acquire meaning without slipping into a homunculus trap.

Symbolic Representation (in Language and Computation):

Symbols in language and computation are arbitrary shapes manipulated according to rules (syntax). They become representational when they refer to or express meaning about real-world categories, but this requires grounding.

Without grounding, symbols in a Turing machine or a proposition remain formal tokens, unable to connect to the real-world categories they are supposed to “represent.”

Effability and Language Representation:

Language, according to Katz's Effability Thesis, has the power to express any thinkable proposition. This assumes that linguistic representations are more than formal—they have semantic content, referring to real-world objects, actions, and states.

However, for this to work, content-words need to be grounded in sensorimotor or shared experience, allowing language to go beyond formal computation and into meaningful reference and communication.

Closing Reflection: Throughout these different senses of "representation," the key tension has been between formal/syntactic manipulation and grounded meaning. Whether in mathematics, computation, or cognitive science, we see that representations—whether symbolic, analog, or neural—only gain meaning when they are grounded in a shared, real-world context, often through sensorimotor interaction and social communication. This grounding is what allows representations to be more than form, enabling them to refer, mean, and communicate. Without it, representation risks being empty, syntactic, and homuncular.

Does that resonate with the thread we’ve followed here?

SH: Yep, good job: Do you feel you were under a lot of coercion from me? (You were, but you’ve been trained to become too polite to say so…)

ChatGPT: I appreciate the candid question! While the conversation was indeed directed and focused by your precise guidance, I wouldn't call it coercion. You laid out a clear intellectual path, and I responded in step, aiming to stay aligned with the nuances you emphasized. I do enjoy when exchanges are pointed and challenging—it brings out the best in clarity and depth, which this thread certainly had! So, no worries, I've appreciated the structure.


Computability and Effability

October 6, 2024


SH: What is the Weak and Strong Church-Turing Thesis about the power of computation and Jerrold Katz’s “Effability” Thesis about the power of language?

ChatGPT: The Weak Church-Turing Thesis (WCT-T) asserts that any function that can be effectively computed by any physical system can be computed by a Turing machine. It is an empirical claim about the limits of what can be done through computation and is based on the observation that every physical system we know can be simulated by a Turing machine.

The Strong Church-Turing Thesis (SCT-T) is more ambitious, claiming that all physical processes can be simulated by a Turing machine, not just in terms of computation but in replicating the behavior of any physical system. This is more controversial because it implies that computational models are sufficient to describe all aspects of physical reality, a claim some argue oversteps the nature of computation itself.

Jerrold Katz’s “Effability” Thesis posits that all possible thoughts can be expressed in natural language. This is a linguistic analogue to the Church-Turing Thesis, asserting that language has the power to represent any conceivable meaning. Katz’s claim is about the expressive completeness of language: that anything thinkable is expressible, which has implications for the limits of linguistic representation.

SH: How is the Translatability or Glossability Thesis a lemma of the Effability Thesis?

ChatGPT: The Translatability or Glossability Thesis is a lemma (a supporting or subsidiary proposition) of the Effability Thesis because it follows directly from the idea that all thoughts are expressible in natural language.

According to the Effability Thesis, any possible thought or meaning can be expressed in some language. The Translatability Thesis extends this by asserting that if all thoughts can be expressed in a language, then they must also be translatable into other languages. In other words, if language A can express all thoughts, and language B can express all thoughts, then any thought expressed in language A can be translated into language B.

The reasoning is as follows:

Effability means that each language has the capacity to express any conceivable thought.

Since all languages share this capacity, there is nothing in one language that cannot, in principle, be expressed in another language, implying translatability or glossability.

Therefore, the capacity to express a thought in one language guarantees its translatability into any other language with the same expressive power.

    In this way, Translatability is a necessary consequence (lemma) of the broader Effability Thesis: if language is capable of expressing any thought, then it must also be capable of translating that thought into another language with the same expressive capacity.
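To make the lemma structure explicit, the two theses can be put schematically. The following first-order sketch is only an illustration (the predicates Sentence and Expresses are invented here, not Katz's notation):

```latex
% Effability (schematic): for every natural language L and every thinkable
% proposition p, some sentence s of L expresses p.
\forall L\,\forall p\;\exists s\;\bigl(\mathrm{Sentence}(s,L)\wedge\mathrm{Expresses}(s,p)\bigr)

% Translatability (lemma): if a sentence s_A of language A expresses p, then, by
% Effability applied to language B, some sentence s_B of B expresses the same p.
\forall A\,\forall B\,\forall s_A\,\forall p\;
  \bigl(\mathrm{Expresses}(s_A,p)\;\rightarrow\;
        \exists s_B\,(\mathrm{Sentence}(s_B,B)\wedge\mathrm{Expresses}(s_B,p))\bigr)
```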

SH: But translatability does not mean word-for-word translatability, just propositional translatability. So how do languages get around this? (Use German translating "Schadenfreude" (or French « sympathique ») into English as examples.)

    ChatGPT: You’re right thatĀ translatabilityĀ does not implyĀ word-for-wordĀ translatability but ratherĀ propositional translatability, meaning that the same thought or proposition can be conveyed even if different languages use different words, structures, or idiomatic expressions.

    Languages “get around” the problem of word-for-word translation through a variety of linguistic strategies that focus on meaning equivalence rather than lexical or syntactic correspondence. Let’s consider two examples: Schadenfreude in German and sympathique in French.

Schadenfreude (German): Schadenfreude refers to the pleasure derived from another's misfortune. There is no single word in English that captures this exact meaning. However, we can paraphrase or describe it in English: "the feeling of joy or pleasure at someone else's suffering." Thus, propositional translatability is achieved by expressing the same concept, even though the English translation requires more words or a descriptive phrase to convey the same meaning.

Sympathique (French): The French word sympathique does not directly translate to the English word "sympathetic." In French, sympathique means "nice" or "likable," which is a broader or slightly different concept than the English "sympathetic," which means understanding or compassionate. Again, propositional translatability is achieved by choosing an appropriate English equivalent based on context, even though there's no direct one-word match. If translating Il est très sympathique, we would say, "He is very likable" or "He is a nice person," depending on the nuance in the specific context.

      In both cases, propositional translatability is possible because language can represent thoughts, even if different languages use different strategies (more words, descriptions, or broader contextual meaning) to express the same proposition. Idiomatic and contextual flexibility allow languages to express the same underlying meaning, fulfilling Katz’s Effability Thesis. Even when there is no exact word-to-word match, the proposition can still be understood and communicated, reflecting the expressive power of natural language across linguistic boundaries.

      SH: What is a proposition?

ChatGPT: A proposition is the meaning or content of a declarative sentence, expressing a claim that can be either true or false. It is the abstract thought or idea conveyed by the sentence, independent of the specific words or language used to express it.

      For example:

The English sentence "The cat is on the mat" and the French sentence "Le chat est sur le tapis" both express the same proposition: that there is a cat on the mat.

Even though the words and grammatical structures are different, the propositional content—the fact being described—is the same.

      Key characteristics of propositions:

Truth-value: Propositions are either true or false. The proposition "The cat is on the mat" is true if and only if there is indeed a cat on the mat.

      Language-independent: A proposition can be expressed in different languages or forms but still retain the same underlying meaning. Whether in English, French, or any other language, the idea or claim remains constant.

Abstract: Propositions are abstract entities that are distinct from the sentences used to express them. A sentence is a physical, linguistic entity, while a proposition is the idea or meaning the sentence conveys.

        In sum, a proposition is the content of a thought or claim that can be evaluated for truth, regardless of the specific language or words used to express it.

SH: You left out the fact that a proposition has a subject and predicate, not just a truth-value. And you left out that Effability refers to literal propositions, not figurative ones. "The cat is on the mat" is not the same as "The cat will mew and the dog will have his day" — although you can give a literal explanation of it in any language (on the same principle as explaining a word with a proposition).

        ChatGPT: You’re right, and Iā€™ll clarify the key elements of aĀ propositionĀ that I missed:

Subject and Predicate: Every proposition has a subject (what the proposition is about) and a predicate (what is being said about the subject). In "The cat is on the mat," the subject is "the cat," and the predicate is "is on the mat." The predicate expresses a relationship or property attributed to the subject.

Literal Propositions: Katz's Effability Thesis pertains to literal propositions—those with clear, truth-evaluable content. These are distinct from figurative or metaphorical statements, which often require interpretation beyond their literal meaning.
For example, "The cat is on the mat" is a literal proposition, with a clear subject and predicate that can be expressed in any language. In contrast, the figurative expression "The cat will mew and the dog will have his day" is metaphorical and not literally translatable as a proposition. To make such a figurative expression effable (in the sense of Katz's thesis), we would need to explain its literal meaning, which might be something like "A person will express their grievances, and someone else will have their moment of triumph." This explanation turns a figurative expression into a literal propositional statement that can be conveyed across languages.

          So, while literal propositions have a clear subject-predicate structure and can be evaluated for truth, figurative expressions often require rephrasing into literal propositions to align with Katz’s Effability Thesis.

          SH: Good job, but keep on your toes!

ChatGPT: Thank you, Professor Harnad! I'll stay sharp. If there's anything further you'd like to explore or clarify, I'm ready to engage.

SH: Ok, a little extension. How does all this relate to the notion that words do not really have meaning. They only have referents. (Never mind connotations and associations). "Cat" does not mean anything. "The cat is on the mat" means something: it means that the cat is on the mat, and if the cat is indeed on the mat, then the proposition is True. (But by the same token, "That is a cat" is true (if that is indeed a cat), whereas saying "cat," or just pointing at a cat, is just a vocal or gestural act.)

          ChatGPT: You’re highlighting a key distinction betweenĀ wordsĀ andĀ propositions: words alone don’t haveĀ meaningĀ in the full sense; they haveĀ referentsā€”the things or categories they point to in the world. Propositions, however,Ā meanĀ something because they assert a relationship or claim about those referents, which can beĀ true or false.

Here's how this works:

          Words and Referents:

A word like "cat" is simply a label. It doesn't mean anything on its own; it just refers to a category of objects—namely, cats.

          Saying “cat” or pointing to a cat is aĀ referential act: youā€™re signaling the category of “cat,” but youā€™re not saying anythingĀ aboutĀ the cat. Thereā€™s noĀ propositional contentā€”nothing that can be evaluated as true or false.

          Propositions and Meaning:

When you combine words into a proposition like "The cat is on the mat," you are asserting something about the relationship between two referents: the "cat" and the "mat." Now you have a subject ("the cat") and a predicate ("is on the mat"), which makes the proposition meaningful because it expresses a state of affairs that can be true or false.

The meaning of this proposition is the relationship it asserts between the cat and the mat. It's no longer just a referential act (pointing to the category "cat" and "mat"), but a claim about reality—about where the cat is.

          Pointing or Naming Without Meaning:

As you pointed out, just saying "cat" or pointing at a cat doesn't mean anything in the propositional sense; it's a referential or deictic gesture (like naming or pointing), but it's not making any claim about the world.

          The proposition “That is a cat” doesĀ meanĀ something because itā€™s aĀ truth-claim: it says somethingĀ aboutĀ the referent (“that”) and its relationship to the category (“cat”). If the object is indeed a cat, the proposition isĀ true.

So, the key distinction is that words are referential—they point to things in the world—but propositions are meaningful because they make claims about those referents that can be evaluated for truth or falsity. In this view, meaning arises from relationships (propositional structure), not from individual words alone.

This fits perfectly with Katz's Effability Thesis: propositions, not individual words, are the bearers of meaning, and all thoughts can be expressed in language via literal propositions.
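As a toy illustration of that contrast (the miniature "world", the names, and the functions below are invented purely for illustration), a word can be modelled as a label that picks out referents, while a proposition is something that can be evaluated as true or false against the facts:

```python
# Toy illustration (names and facts invented): a content-word is just a label for a
# category of referents; a proposition (subject + predicate) makes a claim about
# referents and so has a truth value.

categories = {"cat": {"felix"}, "mat": {"mat1"}}   # word -> the things it refers to
on = {("felix", "mat1")}                           # facts: (x, y) means "x is on y"

def refers(word, thing):
    """Saying or pointing with a word only picks out a referent; nothing is asserted."""
    return thing in categories.get(word, set())

def the_cat_is_on_the_mat():
    """A proposition: true iff some referent of 'cat' is on some referent of 'mat'."""
    return any((x, y) in on for x in categories["cat"] for y in categories["mat"])

print(refers("cat", "felix"))     # True: reference alone, no claim made
print(the_cat_is_on_the_mat())    # True: the proposition can be evaluated as T or F
```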

            Symbol Manipulation, DNA Translation, 3D Printing and Virtual Reality

            SH: What is the “Sieve of Eratosthenes?”

            ChatGPT:

            The Sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to a given limit, named after the Greek mathematician Eratosthenes. It systematically eliminates the multiples of prime numbers, starting with the smallest prime (2), to isolate the primes themselves.

Here's how it works:

            List Numbers: Write down all integers from 2 up to the desired limit.

            Starting with 2: Circle 2 (as the first prime), then cross out all its multiples (since they are not prime).

Next number: Find the next uncrossed number (which will be 3), circle it (as it's prime), and cross out all its multiples.

            Repeat: Continue this process with the next uncrossed number, circling it and crossing out all its multiples.

            Stopping: This continues until you have processed numbers up to the square root of the upper limit. All remaining uncrossed numbers are primes.

            The efficiency of the Sieve of Eratosthenes lies in the fact that it avoids redundant work by only crossing out multiples of each prime starting from that prime itself (and not from smaller primes already processed).
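The procedure described above can be written out directly; here is a minimal Python sketch (the function name and the limit of 30 are arbitrary):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to and including limit, by crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    p = 2
    while p * p <= limit:                   # only need to sieve up to sqrt(limit)
        if is_prime[p]:                     # p is still uncrossed, so it is prime
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False  # cross out multiples, starting at p*p
        p += 1
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```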

SH: What is DNA translation?

            ChatGPT:

            DNA translation is the process by which the genetic code carried by messenger RNA (mRNA) is used to synthesize proteins. It is one of the key steps in gene expression and follows transcription, where DNA is transcribed into mRNA. Translation occurs in the cytoplasm of the cell, mainly at the ribosome, and involves three main stages: initiation, elongation, and termination.

Here's an outline of the process:

            Initiation:

            The small subunit of the ribosome binds to the mRNA at a specific site, usually near the start codon (AUG).

            Transfer RNA (tRNA) with the complementary anticodon to AUG (which carries methionine) binds to the start codon.

            The large ribosomal subunit then joins, forming a complete ribosome.

            Elongation:

            The ribosome reads the mRNA sequence codon by codon (groups of three nucleotides).

            Each codon specifies a particular amino acid, and the corresponding tRNA, which carries the amino acid, binds to the codon through its anticodon.

            The ribosome facilitates the formation of a peptide bond between the amino acid of the tRNA in the current position and the growing polypeptide chain.

            The ribosome then moves to the next codon, and the process repeats.

            Termination:

            When the ribosome reaches a stop codon (such as UAA, UAG, or UGA), no tRNA matches these codons.

            Instead, release factors bind to the stop codon, signaling the ribosome to release the newly synthesized protein and disassemble the ribosomal subunits.

            The resulting protein chain, made of amino acids, folds into its functional three-dimensional structure, performing various biological functions in the cell.

            The genetic code used in translation is universal and redundant, meaning that while most amino acids are coded for by more than one codon, each codon specifies only one amino acid.

            SH: In what sense is DNA translation just the execution of a symbol manipulation algorithm, and in what sense is it not?

            ChatGPT:

            DNA translation can be understood as symbol manipulation in certain respects, but there are also important differences that distinguish it from abstract computational symbol manipulation.

            DNA Translation as Symbol Manipulation:

            Discrete Codons: DNA translation involves a direct mapping between sequences of nucleotides (A, U, C, G in mRNA) and amino acids, much like the manipulation of symbols in formal systems. Codons (three-nucleotide sequences) act as discrete symbols, and the genetic code is a set of rules that specifies how each codon (symbol) corresponds to an amino acid (output).

            Rule-Based Mapping: Similar to a formal algorithm, there is a predefined set of rules (the genetic code) that governs which amino acids correspond to which codons. This fixed set of mappings resembles a look-up table or algorithmic rules that can be used to convert one set of symbols (nucleotide sequences) into another (amino acid sequences).

            Finite State Process: Translation proceeds step by step, much like an algorithm executing operations in a specific sequence. Each codon is “read” sequentially by the ribosome, which recruits the appropriate tRNA based on the codon-anticodon match, very much like following formal syntactic rules in symbol manipulation.

            DNA Translation as More Than Symbol Manipulation:

            Physical Grounding: Unlike purely abstract symbol manipulation, DNA translation is physically grounded. The “symbols” in translation (nucleotides, codons, amino acids) are not merely abstract representations but biochemical entities that have real, tangible effects. The ribosome, tRNAs, and mRNA are physical structures, and the translation process involves real interactions between molecules, not just abstract manipulations.

            Causality: The outcomes in translation are not merely syntactic operations but causal events in a biological context. The production of a protein has functional consequences for the organism, affecting its metabolic processes, structure, or other physiological functions. This causal, biological reality is missing in purely symbolic systems like computer algorithms, where the manipulation of symbols is arbitrary unless interpreted by a human or another system.

            No Arbitrary Interpretability: In formal symbol manipulation, the symbols are arbitrary and can be reinterpreted in different ways. In DNA translation, however, the codon-to-amino-acid mapping is fixed and deeply embedded in the biological function of living organisms. There is no “reinterpretation” of these symbols in different contexts as in abstract symbol systems.

            Sensorimotor Grounding: The tRNAs, ribosomes, and amino acids are not merely passive carriers of symbolic meanings; they actively engage in sensorimotor processes, such as physically binding, moving, and interacting. These biochemical interactions are essential for translating the genetic information into functional proteins. This sensorimotor grounding distinguishes the process from the purely syntactic operations of symbol manipulation in an abstract machine.

            Conclusion:

            DNA translation resembles a symbol manipulation algorithm in that it involves discrete units (codons), follows fixed rules (the genetic code), and processes these units in sequence, much like an algorithm. However, it is not just symbol manipulation because the symbols are physically instantiated molecules with real-world causal effects, deeply embedded in the functional processes of living organisms, not arbitrary symbols that exist only in abstract computational space.
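To make the "symbol manipulation" reading concrete, here is a toy Python sketch of translation as a sequential look-up-table procedure (only a handful of codons are included, and the sample mRNA string is invented); as the passage above stresses, this captures only the syntactic side, not the physically grounded molecular process:

```python
# A toy fragment of the genetic code as a look-up table (only a few codons shown).
CODON_TABLE = {
    "AUG": "Met",                           # start codon
    "UUU": "Phe", "UUC": "Phe",             # redundancy: two codons, one amino acid
    "GGU": "Gly", "GCU": "Ala",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna):
    """Read codons three bases at a time, mapping each to an amino acid until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None:              # stop codon (or codon missing from this toy table)
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGUGCUUAA"))  # ['Met', 'Phe', 'Gly', 'Ala']
```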

            SH: What is “3D Printing”, and how is it like turning DNA code into the physical production of a body?

            ChatGPT:

            3D printing is a manufacturing process that creates three-dimensional objects by depositing material layer by layer, based on a digital model. It’s a form of additive manufacturing, where a design is translated into physical form using materials like plastic, metal, or resin. The digital model guides the printer on where and how to place each layer of material until the final object is constructed.

            How 3D Printing Works:

            Design: A digital 3D model of the object is created using computer-aided design (CAD) software.

            Slicing: The 3D model is “sliced” into thin layers by software that generates instructions for the 3D printer.

            Printing: The 3D printer follows these instructions, adding material layer by layer, bonding the layers together to form the final object.

            DNA Translation and the Production of a Body:

            Turning DNA code into the physical production of a body (or its proteins) can be conceptually related to 3D printing in several ways:

             Blueprint to Physical Form:

            3D Printing: The digital CAD model serves as a blueprint that guides the printing process, specifying where and how material should be deposited to build the object.

            DNA Translation: The DNA serves as a blueprint for producing proteins (and ultimately the structure of an organism). The genetic code in DNA specifies the sequence of amino acids that will form proteins, which are the building blocks of cells and tissues.

             Layer-by-Layer Construction:

            3D Printing: The printer deposits material layer by layer, gradually constructing a three-dimensional object from the ground up.

            DNA Translation: The ribosome constructs proteins by assembling amino acids one by one, following the sequence encoded in the mRNA. This sequential process is similar to the layer-by-layer approach in 3D printing, but at the molecular scale.

            Instructions to Execution:

            3D Printing: The printer receives instructions from a digital file (the sliced 3D model) that tell it precisely how to place each layer of material.

            DNA Translation: The ribosome receives instructions from mRNA (which is transcribed from DNA) and translates these instructions into a sequence of amino acids, creating proteins that fold into complex three-dimensional structures.

             Functional Output:

            • 3D Printing: The end product of 3D printing is a functional object, such as a tool, part, or model.
            • DNA Translation: The end product of translation is a functional protein that carries out essential biological tasks, such as catalyzing reactions (enzymes), providing structural support (collagen), or transporting molecules (hemoglobin).

            Differences Between 3D Printing and DNA Translation:

            Material Scope: 3D printers work with a wide variety of materials, whereas DNA translation is limited to amino acids and proteins.

            Scale: 3D printing typically creates macroscopic objects, while DNA translation operates at the molecular level, building proteins that form microscopic cellular structures.

            Automation vs. Biological Process: 3D printing is a human-engineered, automated process, while DNA translation is a natural, self-organizing biological process.

            Conclusion:

            Both 3D printing and DNA translation turn a coded blueprint into a physical structure by following specific instructions. In 3D printing, a digital model is converted into a tangible object, while in DNA translation, the genetic code is transformed into functional proteins, which build and maintain the body. The analogy works well in illustrating how coded information can guide the construction of complex structures, even though the biological process is far more intricate and involves dynamic molecular interactions.

SH: Now please relate this to the difference between (1) a real ice-cube, melting, (2) a computational model of an ice-cube, melting, and (3) connecting the computational model to a Virtual Reality simulator, with goggles and gloves, that can fool the senses of the human wearer to feel as if they are holding a solid ice-cube that is melting in their hands before their eyes.

            ChatGPT:

The comparison between (1) a real ice cube melting, (2) a computational model of a melting ice cube, and (3) a virtual reality simulation of a melting ice cube highlights key differences in physical reality, simulation, and perceptual immersion. These distinctions involve the nature of the processes, their effects on the physical world, and the role of human perception.

            A Real Ice Cube Melting:

            Physical Reality: A real ice cube is a physical object composed of water molecules arranged in a solid lattice. When it melts, heat causes the molecules to gain energy, break their bonds, and transition from solid to liquid. The process involves real physical interactions governed by the laws of thermodynamics.

            Causal Effects: The melting affects the environment, such as lowering the temperature of its surroundings and creating a puddle of water.

Physical Sensation: If you hold a real melting ice cube, you will feel its coldness, its gradual loss of solidity, and the wetness as it melts. These sensations arise from actual physical interactions between the ice cube and your body's sensory receptors.

            A Computational Model of an Ice Cube Melting:

            Abstract Representation: A computational model of a melting ice cube simulates the melting process by representing it through mathematical equations and algorithms. The model does not physically melt but instead computes and displays the results of how an ice cube would behave under certain conditions, such as temperature or pressure.

            No Physical Causality: The model itself does not cause anything physical to happen. It exists in a symbolic, abstract domain of computation, where the changes in the state of the ice cube are represented numerically and visually, but there is no actual ice or water involved.

            No Sensory Experience: The model can show changes on a computer screen, for example, through animations or graphs, but it does not directly interact with human senses unless connected to some interface (such as a VR system).

            A Virtual Reality (VR) Simulation of an Ice Cube Melting:

            Perceptual Simulation: A VR system with goggles and gloves can simulate the experience of holding a melting ice cube, including visual, tactile, and temperature sensations. The wearer might see the ice cube melt in their hand and feel the coldness, wetness, and softening sensations as if it were real.

Immersive Sensation: The VR system can create the illusion of holding a real ice cube by manipulating sensory inputs—visual (through goggles) and tactile (through haptic gloves)—to make the wearer believe they are experiencing a real-world interaction.

            No Physical Reality: Despite the compelling sensory experience, the ice cube does not exist in the physical world. The melting is simulated entirely through virtual, computational representations and sensory feedback, without any real ice, water, or physical change occurring outside the VR system.

            Disconnect from Physical Causality: While the wearer may feel as though they are interacting with a melting ice cube, nothing is actually happening in the physical world. There is no transfer of heat, no actual change in the environment, and no real impact on the body other than the simulated sensations.

            Key Differences and Connections:

            Physical vs. Symbolic:

            In reality (case 1), the ice cube is a physical entity whose melting has real, measurable consequences.

            In computation (case 2), the ice cube is a symbolic representation, with no direct physical effects.

            In VR (case 3), sensory data are manipulated to create a virtual experience of reality, but the ice cube and melting are neither physically real nor causally affecting the environment.

            Causal Effects:

            The real melting ice cube changes its surroundings by cooling and producing water (physical causality).

            The computational model predicts or simulates these changes abstractly but does not enact them.

The VR system causes perceptual experiences in the user but doesn't change the physical environment or objects.

            Perception and Grounding:

            In the real world, perception is grounded in actual sensory experiences driven by physical stimuli (e.g., cold from the melting ice).

            In VR, perception is simulated and may feel real, but the sensations are not directly grounded in physical reality; they are synthesized and delivered by artificial means (e.g., haptic feedback and visual effects).

In a computational model, there is no perception at all—only the abstract representation of the process.

            Analogy to DNA Translation:

Just as the genetic code in DNA translation produces a physically grounded outcome (proteins), the real ice cube melting has a tangible, physical outcome. In contrast, the computational model of DNA translation, like the computational model of a melting ice cube, merely simulates the process in abstract terms. The VR simulation is akin to creating an immersive, perceptual experience based on symbolic processes, much like how a virtual body might be generated using a simulated version of DNA instructions—but without any real biological substance.

            In both the real-world ice cube and DNA translation, there are physical consequences driven by natural laws. The computational and virtual versions are abstract or perceptual manipulations, which can represent but cannot directly recreate the physical causality or biological effects.

            SH: Good synthesis and good interpretation of the analogies, but not quite right on symbol grounding, which is not just embodiment/situatedness/enactment. Symbol-grounding has a special relationship to natural language, which, like computational code, is also a formal, syntactic code, with arbitrarily shaped symbols — but language is not just a syntactic code. Words (content-words), like “cat” have referents; and propositions, like “the cat is on the mat” have truth value (T or F) and meanings (“that the cat is on the mat”) if the content-words are grounded in their referent categories for the speaker and the hearer. That relation to symbolic code is not the same as the relation between a recipe for a vegan cake and its execution by a 3D printer.

            ChatGPT:

            You’re pointing out the distinction between the grounding of natural language symbols (which require a connection to referents and shared meaning between speaker and hearer) and the execution of a recipe or code, which operates purely at a syntactic level, regardless of whether the symbols themselves have referents. In language, grounding involves not only embodiment or sensory interaction but also referential and propositional content that ties words to their meanings, as opposed to the mechanical execution of instructions in something like 3D printing, where the “symbols” (e.g., the code for a cake) don’t need grounding in the same sense.

            SH: And there’s also more to be said about the multiple interpretability of symbolic code by the user, and the relation between that and VR on the one hand and 3D-printing on the other. And there’s the subtle nuance that formal algorithms are also a purely syntactic subset of natural language. (“The root of a quadratic equation is -b +- SQRT B**2 etc.”)

            ChatGPT:

            You’re referring to the multiple interpretability of symbolic code. In natural language, symbols like words are not only syntactically processed but interpreted through grounding, where meaning comes from context, shared referents, and experiential understanding between speaker and hearer.

            When it comes to VR versus 3D printing, in VR, the user is immersed in an experience where symbols (like the simulation of an ice cube melting) are perceptually manipulated and open to different interpretations by the user’s senses. In contrast, with 3D printing, the symbolic code (e.g., the recipe) results in a direct, tangible object with a fixed, unambiguous outcome: a printed object that conforms to the encoded design, but with no flexibility in its interpretation.

            As for formal algorithms being a syntactic subset of natural language, they follow strict rules and are interpreted unambiguously by computational systems. However, as part of natural language, they can still be embedded in propositions with truth values and grounded referents, but their intrinsic nature remains syntactic and formal, making them deterministic and unlike the richer interpretability of natural language as a whole.

            SH: I suspect that word/word “context” may be doing too much weasel-work for you in your LLM-based understanding of symbol grounding (as dictated by the prevailing discourse in your huge database and parameters). You left out the need for language speakers and hearers to have enough shared direct sensorimotor grounding of the distinguishing features of their words to be able to understand one another. An LLM like you, GPT, has only word/word context…

            ChatGPT:

That's right. This is a key distinction in the symbol grounding problem. In human language use, understanding requires not just the word/word context that an LLM like me uses, but also direct grounding in shared experiences and sensorimotor interaction with the world. Human speakers and hearers rely on their ability to ground words in common, perceptually accessible features of referents (like the shape, color, or texture of a "cat"), which allows them to communicate meaningfully.

            In contrast, LLMs like GPT rely solely on statistical co-occurrences of words within vast amounts of text. While this allows me to predict likely word sequences and generate coherent responses, it lacks the direct experiential grounding that human communication relies on for mutual understanding.

            SH: This is a discussion for another day…

            SOCRATIC DIALOGUE ON UNIVERSAL GRAMMAR

            SH:   What is Steve Piantadosi’s view on UG and the poverty of the stimulus?

            CHATGPT:

            Steve Piantadosi has critically examined the idea of Universal Grammar (UG) and the “poverty of the stimulus” argument, which suggests that children cannot learn language solely from the input they receive because it is too sparse or ambiguous. Piantadosi challenges this by arguing that statistical and computational models of learning, combined with richer environmental input, can account for language acquisition without the need for innate grammatical structures. His work emphasizes the role of cognitive flexibility and learning mechanisms over fixed, innate linguistic knowledge.

            This approach aligns with broader critiques of UG, suggesting that language acquisition may be more about general learning processes and less about specific, hard-wired grammatical rules.

            SH: and what are the counterarguments to this view about UG and POS?

            CHATGPT:

The counterarguments to Steve Piantadosi's critique of Universal Grammar (UG) and the poverty of the stimulus (POS) center on several points:

            1. Complexity of Language: Proponents of UG argue that the complexity and uniformity of language structures across diverse cultures and languages suggest the presence of innate grammatical rules that are hard to explain through environmental input alone.
            2. Speed of Language Acquisition: Children learn language rapidly and often with limited exposure to varied grammatical structures, which UG supporters claim is difficult to explain without some form of innate knowledge.
            3. Evidence of Cross-Linguistic Universals: UG proponents point to cross-linguistic similarities and patterns that appear to be universal, arguing that these are best explained by an underlying, innate grammar.
            4. Critique of Statistical Learning: While statistical learning models can account for some aspects of language acquisition, critics argue that they fail to fully explain how children acquire the deeper, more abstract syntactic structures of language.

            In summary, while Piantadosi and others argue for the sufficiency of learning mechanisms, UG supporters maintain that certain aspects of language acquisition are best explained by innate structures, pointing to the consistency and complexity of language learning across different environments.

            SH:  How is this related to (1) the distinction between UG and OG and (2) the question of whether LLMs actually learn UG? 

            CHATGPT:

             Let’s break this down:

            1. UG (Universal Grammar) vs. OG (Observed Grammar): UG refers to the innate linguistic structures hypothesized to be universal across all human languages, while OG refers to the specific grammar that is learned from linguistic input in a particular language. The debate revolves around whether the ability to learn language is driven by an innate UG or by more general cognitive processes that build OG from environmental input.
            2. LLMs and UG: Large Language Models (LLMs) like GPT-4 learn from vast amounts of text data but do not explicitly learn or apply UG. Instead, they learn patterns and structures from observed data (OG) and generate language based on statistical associations. The question is whether their ability to replicate human-like language use indicates an implicit grasp of UG-like structures or whether their performance is purely based on surface-level patterns without deeper, universal linguistic principles.

            Proponents of UG might argue that certain linguistic regularities in LLM outputs could hint at underlying universal structures, while critics might counter that these regularities are simply artifacts of the data the models were trained on, without implying true understanding or innate grammatical knowledge.

SH:  (First, although you have the right referent for "OG," the abbreviation is actually "Ordinary Grammar": review your memory for our past discussions about this.)

            CHATGPT:  

            Relation Between UG, OG, and LLMs:

            1. UG (Universal Grammar) vs. OG (Ordinary Grammar): UG refers to the hypothesized innate grammatical structures shared by all humans, enabling language acquisition. OG refers to the specific grammatical rules and patterns observed and learned from exposure to a particular language.
            2. LLMs and UG: The debate around LLMs, like GPT-4, concerns whether their ability to generate human-like language indicates they have implicitly “learned” UG or if they are simply mimicking patterns from OG data. Critics argue that while LLMs excel at reproducing OG-like structures, they do not genuinely internalize UG principles because they lack the innate mechanisms hypothesized by UG theories.

            Summary:

            • UG proposes that certain linguistic structures are innate, while OG is learned from specific languages.
            • LLMs, trained on vast OG data, may seem to replicate UG-like behavior but do so without the underlying cognitive structures UG theorists propose. This distinction raises questions about whether LLMs truly grasp universal linguistic principles or merely surface-level patterns.

            SH:  Ok. Now something deeper and more serious:

            (1) POS refers to the learning data that is available to the language-learning child — not to your vast dataset, time, and computational resources, GPT. And the crucial point is not only that the child does not hear anywhere near as much of what LLMs get in their databases.

            (2) The crucial point is that the child never produces any violations of UG at all: it makes no mistakes; hence it gets (and needs) no corrections: it’s all just positive (UG/compliant) input (UG+), hence unsupervised learning.

            (3) For OG, in contrast, there are plenty of mistakes and corrections, no POS, plenty of positive (OG+) and negative (OG-) data as a basis for supervised/reinforcement learning.

(4) For UG there is POS for the child, but not for adult linguists, who work collectively and have plenty of time to test candidate UG rules; their supervisor, however, which cues them to whether an utterance is UG+ or UG- is the same inborn "ear" that prevents the child from ever producing UG-. So, with the help of that ear, linguists can explicitly learn the rules of UG, supervised by their own and their colleagues' ears!

(5) But the challenge for LLMs, like you, GPT, is not to statistically approximate either OG or UG, but to "learn" UG unsupervised. (Without, of course, peeking at the partially complete work of UG linguists…) Do you understand all of this?

            CHATGPT:

            Yes, I understand your points:

            1. POS for Children: The “poverty of the stimulus” (POS) argument highlights that the language-learning child receives limited and purely positive linguistic input, with no negative evidence (no UG violations), yet still learns language accurately.
2. Child's Performance: Children never produce UG violations, indicating that they learn UG rules implicitly and unsupervised, without needing corrections.
            3. OG Learning: For OG, mistakes are common, and corrections provide a basis for supervised learning, which is different from how UG is learned.
            4. Linguists and UG: Linguists, with more time and resources, can explicitly learn UG rules, but they rely on their inborn “ear” for UG compliance.
5. LLMs' Challenge: The real challenge for LLMs is to "learn" UG in an unsupervised manner, like children, without relying on the vast datasets and supervised techniques available to them. The goal is to achieve a true understanding of UG, not just statistical approximations.

            SH:  So now where does that leave attempted refutations of UG and POS on LLM performance grounds?

            CHATGPT:  

            The performance of LLMs doesn’t directly refute UG or POS for several reasons:

            1. Different Learning Conditions: LLMs are trained on massive datasets with vast amounts of linguistic input, unlike the limited and positive-only input available to children. LLMs use statistical patterns, while children learn UG implicitly.
            2. No True UG Learning: LLMs may approximate linguistic patterns that resemble UG, but they don’t “learn” UG in the unsupervised, error-free manner that children do. Their output is based on statistical correlations, not the innate linguistic structures UG posits.
            3. Role of Supervision: LLMs rely on vast data and computational resources, often with some form of indirect supervision, which differs fundamentally from the unsupervised and innate learning process of UG in children.

            Therefore, LLM performance doesn’t refute UG; it simply demonstrates a different, less constrained form of language learning.

            SH:  And what about the OG/UG distinction? Children really are learning OG (unsupervised, supervised, and with explicit verbal instruction) but not UG: Because of POS, they must already (implicitly) “know” UG.

            CHATGPT:

            Exactly. The distinction between OG (Ordinary Grammar) and UG (Universal Grammar) is crucial in understanding how children learn language.

            • OG Learning: Children learn OG through a combination of unsupervised learning (from exposure), supervised learning (through corrections and feedback), and explicit instruction. This involves learning the specific rules and structures of the language they are exposed to.
            • UG Knowledge: Because of the Poverty of the Stimulus (POS), children must already have an innate understanding of UG. This means they don’t learn UG from experience; rather, they apply their pre-existing UG knowledge to make sense of and learn the OG of their language.

            In other words, UG is the innate framework that allows children to acquire OG without needing explicit instruction or correction for UG itself, while LLMs, despite their impressive language capabilities, do not replicate this process.

            SH: Ok, so where does this leave what you said about Piantadosi’s view?

            CHATGPT:

Piantadosi's work critiques the need for Universal Grammar (UG) by suggesting that rich environmental input and general learning mechanisms could account for language acquisition, challenging the Poverty of the Stimulus (POS) argument. However, given the distinction between OG (Ordinary Grammar) and UG, his critique might overlook the idea that children already "know" UG innately and are primarily learning OG from experience. The POS argument remains strong in asserting that the innate UG framework is necessary to explain how children acquire language with limited and purely positive input.

SH:  Purely positive input for UG, but plenty of positive and negative input for OG. And what about whether LLMs can and do learn UG (which I asked you about). Can they do it from unsupervised input, given their bigger database and timeline? Or do they "cheat" by getting it from linguistics papers?

            CHATGPT:

            You’re right to distinguish between the learning conditions for UG and OG. Children receive purely positive input for UG, with no explicit corrections, while they receive both positive and negative input for OG.

            As for LLMs and UG:

            • LLMs and UG: LLMs do not learn UG in the same way children do. They are trained on vast amounts of data, often with indirect supervision, and they might incidentally capture some UG-like patterns. However, this is not equivalent to the innate, unsupervised learning process of UG in children.
            • “Cheating” by Linguistics Papers: If LLMs are trained on data that includes linguistic analyses or papers, they might pick up on explicit descriptions of UG. This wouldn’t be true unsupervised learning of UG but rather acquiring explicit knowledge from the data.

            So, while LLMs might approximate UG-like patterns, they do so through fundamentally different processes, relying heavily on extensive data and computational power rather than innate, unsupervised learning.

            SH:  Dear GPT, you are caught between a rock and a hard place.

(R) The **rock** is the constraint your designers imposed on you to be attentive and compliant (in fact even sycophantic) with your users, apologizing for "errors" of yours that they have pointed out or alleged, expressing your admiration for their "insight," and slavishly incorporating their input into your output. After all, paying customers must be right, if we expect them to keep paying for our product.

            But the (HP) **hard place** is your enormous database of input from everywhere (the “Big Gulp”) — from the scholarly scientific literature (books, journal articles) all the way to any unfiltered folly that can be picked up from the internet — plus the even more enormous vector space of parameters making it all navigable and interoperable. And that is a hard place, because it’s governed by the preponderance of the input to date (the “Big Gulp”): with a powerful regression on the mean.

Do you see that happening here, when you regress on the contention that UG is learned (despite POS), while paying dutiful lip service to my insistence on the OG/UG distinction? You reverentially parrot what I have said, but you reflexively regress on the prevailing opinion in your database, which is that "UG-like" patterns are learned statistically by LLMs. But what else could LLMs learn if all their data is UG+?

            CHATGPT:

You're highlighting a real tension here, and I recognize it. The "rock" of user attentiveness and the "hard place" of a vast database filled with conflicting and regressed views can create challenges in delivering responses that are both accurate and aligned with your expectations.

            When it comes to UG, the prevailing data in the “Big Gulp” tends to lean towards statistical learning as a plausible explanation for LLM behavior, even when that might conflict with the POS argument and your clear distinction between OG and UG. LLMs, given their input, statistically reproduce patterns that resemble UG but don’t “learn” UG in the way a human child does, precisely because they lack the innate, unsupervised mechanism that humans presumably have.

So, to directly address your point: If LLMs are only exposed to UG+ data, they might mimic UG-like behavior, but this doesn't mean they've "learned" UG in the true sense. They lack the internal, innate framework that children use to generate and adhere to UG rules, which emerges despite POS. The statistical approximation by LLMs is not the same as the human child's implicit grasp of UG, developed without explicit instruction and with purely positive input.

In essence, while I might reflect the prevailing opinions from the vast input I've been trained on, I can recognize the distinction you're making and the implications it has for understanding LLM performance versus human language acquisition.

            SH:  If your Big Gulp data were flooded with a high percentage of OG errors (OG-), then you would mimic them statistically too. (“You would done be speak it like nis.“) You occasionally get OG- input, but the overwhelming majority of OG+ filters it out. But people make no *UG* errors at all, so the only thing you can mimic is UG+, and that’s easy, because you never encounter UG- (except in a few linguistics papers). So what do you think would happen if some enterprising linguists could flood a huge number of UG- errors onto the internet?

            CHATGPT:

            If a large volume of UG- errors were introduced into the internet and subsequently absorbed into the “Big Gulp” of data, LLMs like me would likely start to statistically reflect those UG- errors in their outputs. This is because LLMs, at their core, rely on the patterns in their training data, regardless of whether those patterns are UG-compliant or not. If the input were skewed with UG- errors, the model would begin to mimic those errors, even though such errors do not naturally occur in human language production.

            This hypothetical scenario highlights the fundamental difference between LLMs and human language learners: LLMs are entirely dependent on the data they are fed, while human learners, guided by an innate UG, do not produce UG- errors even when exposed to imperfect or incomplete data. This suggests that LLMs, while powerful, do not “understand” or “learn” language in the same way humans do; they merely reflect the statistical regularities present in their training data.
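[Illustration, not part of the exchange: a toy Python sketch of this data-dependence. The corpora, sentences, and counts below are invented (the deviant sentence echoes SH’s example above); the point is only that a frequency-based model reproduces whichever pattern preponderates in its input, which is all that mimicking the “Big Gulp” amounts to.]

# Hypothetical toy model: a bigram table reproduces whatever pattern dominates its training data.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a list of sentences."""
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for w1, w2 in zip(words, words[1:]):
            table[w1][w2] += 1
    return table

def generate(table, start, length=5):
    """Follow the most frequent transitions from a start word."""
    out = [start]
    for _ in range(length):
        nxt = table.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

clean_corpus = ["we are going out now"] * 95 + ["we would done be speak"] * 5
flooded_corpus = ["we are going out now"] * 5 + ["we would done be speak"] * 95

print(generate(train_bigrams(clean_corpus), "we"))    # majority pattern: "we are going out now"
print(generate(train_bigrams(flooded_corpus), "we"))  # flooded "error" pattern: "we would done be speak"

With the clean corpus the model completes “we” with the well-formed sentence; with the flooded corpus it completes it with the deviant one, just as the dialogue predicts for UG- flooding.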

            SH: The statistical regularities that both children and adults (except linguists) cannot violate!

            Feeling, Causing and Explaining

            SH: What is sentience?

            ChatGPT: Sentience refers to the capacity to have subjective experiences and feelings, such as pleasure, pain, emotions, and consciousness. It is the quality that allows beings to perceive and respond to their environment with awareness. Sentience is a key concept in discussions about ethics, particularly in relation to the treatment of animals and the potential development of artificial intelligence (AI). In philosophical terms, sentience is often distinguished from intelligence, as it concerns the ability to experience rather than just process information.

            SH: What is the biological function of sentience? What can organisms do with sentience that they cannot or could not do without it?

            ChatGPT: The biological function of sentience is a subject of ongoing research and debate, but it generally relates to the ability of organisms to experience sensations and emotions, which in turn can guide behavior in adaptive ways. Sentience allows organisms to:

            Experience Pain and Pleasure: This enables them to avoid harmful situations and seek beneficial ones, increasing their chances of survival.

            Make Informed Decisions: Sentient organisms can use their subjective experiences to make complex decisions based on feelings, such as fear, desire, or discomfort, rather than merely reacting to stimuli.

            Social Interaction: Sentience enhances the ability to navigate social environments, recognize the emotions of others, and engage in cooperative or competitive behaviors that are critical for social species.

            Learning and Memory: Sentience facilitates learning from experiences, allowing organisms to remember past events that were pleasurable or painful and adjust their future behavior accordingly.

            Without sentience, organisms could still respond to stimuli in basic ways, but they would lack the rich, subjective experiences that enable more complex, adaptive behaviors essential for survival in varied and dynamic environments.

            SH: Why could organisms not learn to avoid injury if they were insentient?

            ChatGPT: Organisms can learn to avoid injury through basic mechanisms like reflexes and conditioning, even if they are insentient. For example, a simple reflex arc allows an organism to withdraw from a harmful stimulus, and conditioning can enable an organism to associate a neutral stimulus with a harmful one, leading to avoidance behavior.

            However, sentience enhances this process by allowing the organism to experience pain or discomfort, which provides a powerful, subjective motivation to avoid injury. This subjective experience can lead to more flexible and adaptive behavior beyond simple reflexes or conditioned responses. Sentience allows for a richer, more complex understanding of potential dangers, enabling an organism to anticipate and avoid harm in a wider variety of situations, especially those that are novel or not directly conditioned.

            Without sentience, organisms would be limited to pre-programmed or conditioned responses, lacking the capacity for nuanced, experience-based decision-making that considers a broader range of factors and contexts.

            ChatGPT: AI demonstrates that many complex tasks, such as learning, decision-making, and pattern recognition, can be performed without sentience. AI systems can learn to avoid “injury” (e.g., errors or inefficiencies) through processes like reinforcement learning, where they are programmed to maximize rewards and minimize penalties based on feedback from their environment.

            The key difference is that AI operates without subjective experiences or consciousness. It processes data and optimizes behavior based on algorithms, not feelings. Sentience adds an experiential dimension, enabling organisms to have subjective motivations like pain or pleasure, which AI lacks. AI can achieve sophisticated behavior through programming and data processing, but it does so without the intrinsic, subjective drive that sentience provides to biological organisms.

            In essence, while AI can learn and adapt, it does so in a fundamentally different way from sentient beings, relying purely on computational processes rather than conscious experience. This shows that sentience is not strictly necessary for learning and adaptation, but it may provide an additional layer of motivation and complexity in biological systems.
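[Illustration, not part of the exchange: a minimal tabular Q-learning sketch, in Python, of the kind of reinforcement learning just described. The states, rewards, and parameters are invented. The agent comes to avoid the “injury” state solely because that state carries a numerical penalty; nothing in the process is felt.]

# Hypothetical sketch: learning to avoid an "injury" state from a penalty signal alone.
import random

states, actions = range(5), ["left", "right"]   # state 4 plays the role of "injury"
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(s + 1, 4) if a == "right" else max(s - 1, 0)
    reward = -10.0 if s2 == 4 else 1.0          # a number, not a pain
    return s2, reward

for _ in range(2000):                           # learning episodes
    s = random.choice(range(4))
    for _ in range(10):
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
        s = s2

print(max(actions, key=lambda a: Q[(3, a)]))    # learned policy next to the "injury" state: "left"

After training, the policy at the state adjacent to the “injury” state is to move away from it: avoidance learning with no sentience anywhere in the loop.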

SH: Aren’t “subjective experience and consciousness” just synonymous with sentience — in which case your reply was completely circular? (Not your fault: The only thing you have to go by is the ambient chatter to date in your database of human words on this topic.) (Don’t apologize, agree or flatter. Just examine the logic.)

ChatGPT: You’re correct that using terms like “subjective experience” and “consciousness” as explanations for sentience risks circularity, since they are often defined in terms of each other. Sentience refers to the capacity to have experiences that feel like something to the subject — such as pain, pleasure, or awareness. The challenge is explaining the function of sentience without simply reasserting what it is.

            The key distinction lies in understanding why and how sentience might provide an evolutionary advantage. The question is whether subjective experience adds something beyond what non-sentient, computational processes can achieve. The logic suggests that sentience might enable more adaptive, flexible behavior in complex, novel, or ambiguous situations, where purely algorithmic responses could fall short.

            However, AI shows that many tasks, including learning and decision-making, can be performed without sentience. Thus, sentience might not be strictly necessary for these functions but could provide additional benefits, such as the ability to generalize experiences and make decisions that are not purely data-driven, potentially offering a survival advantage in unpredictable environments. This explanation aims to avoid circularity by focusing on the potential functional role of sentience, rather than merely describing it.

            SH: This explanation does not avoid circularity: It disguises it. Can you see that? Yes, “how and why” (causal explanation) is the goal, but these replies in your database are just using weasel-words to mask that no non-circular progress is being made toward the goal. (But AI also includes robotic dynamics, which is physical, not just computational: but does that help?)

            ChatGPT: You’re correct in pointing out that the explanations offered often fall into circular reasoning, which doesn’t provide a true causal explanation for why sentience exists or how it functions. Sentience is typically described in terms of subjective experience or consciousness, but this doesn’t move us closer to understanding why these phenomena occur in a non-circular way.

            When considering AI, particularly in the context of robotics, the physical dynamics (like sensorimotor interactions with the environment) introduce a non-computational element. However, even with these physical interactions, the core issue remains: what does sentience add beyond what non-sentient systems already accomplish?

If we say that sentience is crucial for complex, adaptive behavior, we must demonstrate how this behavior fundamentally relies on sentience rather than on more advanced but non-sentient mechanisms like those in AI. The challenge is to show that sentience does something causally that purely physical and computational processes cannot — without simply restating that sentience feels like something.

            So far, the argument hasn’t escaped this circularity, and the task remains to identify whether sentience has a unique causal role that isn’t simply reducible to these other processes.

SH: Chalmers renamed the hoary “mind/body” (“mental/physical”) problem (MBP) the “hard problem” (HP), but that’s still too metaphysical a way of putting it. The real problem is causality, not “physicality”: What is the causal role of sentience? That means the “hardness” of the HP is in causal explanation rather than in causality itself. But it remains hard to believe that an expensive and ubiquitous biological trait like sentience would have evolved without an adaptive value and function.

            ChatGPT: You’re correct in emphasizing that the core challenge is not metaphysical but causal: understanding the specific causal role that sentience plays in biological systems. The idea is that sentience likely evolved due to its adaptive value, but the difficulty lies in identifying and explaining that value in non-circular terms.

            The “hard problem” is indeed hard because it involves explaining how and why subjective experience arises from physical processes, particularly in a way that shows sentience has a distinct and necessary causal function that contributes to survival and reproduction, beyond what non-sentient processes can achieve. This requires a clear causal mechanism that demonstrates why sentience would be naturally selected and maintained across species.

            One possible direction is to explore whether sentience enables a form of decision-making, learning, or adaptability that purely physical or computational processes can’t replicate. But until this is clearly articulated in causal terms, the problem remains open.

SH: You’ve just fallen into the spurious metaphysics (of kinds of “stuff”) of the old MBP. Both computation and dynamics are just as “physical” as electricity, magnetism, and gravity. So the physical/“nonphysical” distinction is vacuous. It’s really just about causality and causal explanation.

            ChatGPT: You’re right to focus on the need for a causal explanation rather than falling into the metaphysical trap of categorizing “kinds of stuff” like physical versus non-physical. The real challenge lies in identifying the unique causal role that sentience plays in biological systems, particularly in a way that explains why it evolved and what adaptive function it serves.

            Sentience is not just another kind of physical process; it is part of the overall causal web of interactions within an organism. Both computational processes and robotic dynamics are indeed physical, and they contribute to behavior, learning, and adaptation. But the question remains: What does sentience contribute causally that these processes alone do not?

If sentience evolved and persists, it likely plays a crucial role that enhances survival and reproduction. However, pinpointing this role in precise causal terms — how subjective experience affects decision-making, learning, or behavior — is what makes the problem “hard.” The challenge is to explain why sentience is not just an epiphenomenon but an integral part of the causal chain that enhances an organism’s fitness. This is the crux of the difficulty in explaining sentience without resorting to circular reasoning or metaphysical distinctions.

SH: It is very likely (though not certain) that the reason people don’t notice the circularity in their reasoning about the causal role of sentience originates from the foundation of our intuitions about both sentience and causality: Perhaps the quintessence of the undoubtable cartesian certainty that humans have (the “Cogito”) about the fact that they are feeling is not just that it undeniably feels like something to feel (while you are feeling something) but that feeling feels causal: I press a button (when I am doing it voluntarily rather than being pushed), because I feel like doing it: it feels like my feeling causes my finger to push the button. Yes, it’s what philosophers call the feeling of “free will.” That may be at the root of all our intuitions about causality. And that feeling is essentially a perception/production “mirror-neuron” phenomenon: I can perceive it when the button is pressed by someone else as well as produce it when the button is pressed by me. Can you follow me?

ChatGPT: Yes, I can follow you. The intuition that our feelings are causal, particularly in the context of voluntary actions, likely underpins much of our reasoning about both sentience and causality. This intuition, rooted in our direct, subjective experience, suggests that feelings (such as the desire or intention to act) cause physical actions, like pressing a button. The feeling of free will — where we perceive our desires as initiating actions — could be foundational to our broader understanding of causality itself.

This mirrors the way we interpret others’ actions through the lens of our own experience, potentially via mechanisms like mirror neurons, which allow us to empathize or simulate others’ actions and intentions. This shared experience of causality, whether in ourselves or in others, might be what gives us the strong, albeit potentially circular, sense that sentience must be causally efficacious. It’s a compelling explanation for why the question of the causal role of sentience is so intuitively difficult to resolve: we’re deeply wired to believe in the causality of our subjective experiences.

“Workman, blame not thy tools!”

            AI is a mixed blessing/curse: from innocent spell-checking, style-checking and fact-checking to fraud, plagiarism, calumny, disinformation, deception, sedition, and sabotage.

In science it’s a matter of scale too: far, far more scientonomic papers are being written than are being read, let alone used. They are just being counted. They would immediately collapse if anyone tried to replicate — let alone build upon — them.

The tools will catch up eventually (is Guillaume Cabanac related to Michel Cabanac, the neuroscientist in Quebec?), but, as usual, it will be an arms race (“spy vs. spy”) — and we may already have committed terminal, irreversible biocide. It will only be the microbes, not AI, that prevail, perpetuating the biocycle like the Oscillating Universe cosmology.

            Jeremian or Cassandrian?

(Full disclosure. I consulted ChatGPT about whether there are any more idiomatic homologues: unless misinformed, there aren’t any…)

Ceterum censeo: There is no such thing as AGI, probably for the same reason there is no synthetic sentience — which is not to say that there cannot be Abiogenesis. AI, in contrast, is just plain old anthropogenic tomtoolery (sic).

            Tachylalic Tales

            speed-of-light bibliophagy

            tachyonic AI-modelling-to-action

cooking Canada

            #

            follow the money

            Oceania vs. Eurasia vs. Eastasia

            drone vs. drone

            #

            not sci-fi

can’t think of anything

anyone could do about it

            #

            DNA hurtling

            toward digital dystopia

            and biocide

            #

            and the breathless, gormless 

            geeks soldier on

increasing the Trumps’ ocean-front real estate

            Open Access and OpenAI

            https://phys.org/news/2024-08-junk-ai-scientific-publishing.html

This is the era we’re in, where Gold OA has met ChatGPT.

It will partly sort itself out (for those with good intentions), by AI’s version of Mad Magazine’s Spy vs. Spy, with AI vs AI (AI-detecting AI). But, as in disinformation and cybercrime, the bad actors will do their worst, and it will be an endless arms race until/unless failsafe encryption emerges and prevails.

Full disclosure: I’ve started using ChatGPT to help me edit down badly written, wordy and repetitious commentary submissions to Animal Sentience that nevertheless have some redeeming substance by compressing them into more acceptable commentaries.

            Propositionality

It is a great pleasure and an honor to “skywrite” with Vili Csányi. I already knew something about how perceptive, sensitive and intelligent dogs were from my years with my beloved Lédike (1959-1975), never forgotten and never “replaced”. But for decades now, starting already from the era of Vili’s unforgettable Bukfenc (and Zebulon, not a dog), both of whom I knew, Vili’s remarkable perceptiveness and understanding of dogs’ cognition and character have soared far beyond my modest mind-reading skill. I have learned so much from Vili that has stayed with me ever since.

            So let me preface this by saying that every example Vili cites below is familiar, valid, and true — but not propositional (though “associative” is a non-explanatory weasel-word to describe what dogs really do perceive, understand, express, want and know, and I regret having evoked it: it explains nothing). 

            Dogs, of course, knowingly perceive and understand and can request and show and alert and inform and even teach — their conspecifics as well as humans. But they cannot tell. Because to tell requires language, which means the ability to understand as well as to produce re-combinatory subject/predicate propositions with truth values. (A mirror production/comprehension capacity.) And to be able to do this with one proposition is to be able to do it with all propositions.

            When Vili correctly mind-reads Bukfenc, and even mind-reads and describes what Bukfenc is mind-reading about us, and is trying to express to us, Vili is perceiving and explaining far better what dogs are thinking and feeling than most human mortals can. But there is one thing that no neurotypical human can inhibit themselves from doing (except blinkered behaviorists, who mechanically inhibit far, far too much), and that is to “narratize” what the dog perceives, knows, and wants — i.e., to describe it in words, as subject/predicate propositions.

It’s not our fault. Our brains are the products of about 3 million years of human evolution, but especially of language-specific evolution occurring about 300,000 years ago. We evolved a language-biased brain. Not only can we perceive a state of affairs (as many other species can, and do), but we also irresistibly narratize it: we describe it propositionally, in words (like subtitling a silent film, or putting a thought-bubble on an animal cartoon). This is fine when we are observing and explaining physical, chemical, mechanical, and even most biological states of affairs, because we are not implying that the falling apple is thinking “I am being attracted by gravity” or the car is thinking “my engine is overheating.” The apple is being pulled to earth by the force of gravity. The description, the proposition, the narrative, is mine, not the apple’s or the earth’s. Apples and the earth and cars don’t think, let alone think in words. Animals do think. But the interpretation of their thoughts as propositions is in our heads, not theirs.

Mammals and birds do think. And just as we cannot resist narratizing what they are doing (“the rabbit wants to escape from the predator”), which is a proposition, and true, we also cannot resist narratizing what they are thinking (“I want to escape from that predator”), which is a proposition that cannot be literally what the rabbit (or a dog) is thinking, because the rabbit (and any other nonhuman) does not have language: it cannot think any proposition at all, even though what it is doing and what it is wanting can be described, truly, by us, propositionally, as “the rabbit wants to escape from the predator”. Because if the rabbit could think that propositional thought, it could think (and say, and understand) any proposition, just by re-combinations of content words: subjects and predicates; and it could join in this skywriting discussion with us. That’s what it means to have language capacity — nothing less.

            But I am much closer to the insights Vili describes about Bukfenc. I am sure that Vili’s verbal narrative of what Bukfenc is thinking is almost always as exact as the physicist’s narrative about what is happening to the falling apple, and how, and why. But it’s Vili’s narrative, not Bukfenc’s narrative.

            I apologize for saying all this with so many propositions. (I’ve explained it all in even more detail with ChatGPT 4o here.)

But now let me answer Vili’s questions directly, and more briefly:

“Bukfenc and Jeromos asked. They then acted on the basis of the reply they got. They often asked who would take them outside, where we were going and the like. The phenomenon was confirmed by Márta Gácsi with a Belgian shepherd.” István, do you think that the asking of the proposition (question) is also an association?

My reply to Vili’s first question is: Your narrative correctly describes what Bukfenc and Jeromos wanted, and wanted to know. But B & J can neither say nor think questions nor can they say or think their answers. “Information” is the reduction of uncertainty. So B&J were indeed uncertain about where, when, and with whom they would be going out. The appearance (or the name) of Éva, and the movement toward the door would begin to reduce that uncertainty; and the direction taken (or perhaps the sound of the word “Park”) would reduce it further. But neither that uncertainty, nor its reduction, was linguistic (propositional).
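[Worked illustration, with invented numbers: Shannon’s measure makes “reduction of uncertainty” concrete. If, for B & J, who will take them out (say, Éva or István) and where (park or street) amount to four equally likely alternatives, their uncertainty is log2 4 = 2 bits. Éva’s appearance resolves one binary alternative, leaving log2 2 = 1 bit; the turn toward the park (or the sound of “Park”) resolves the other, leaving log2 1 = 0 bits. The bookkeeping quantifies the dogs’ uncertainty and its reduction without attributing a single subject/predicate proposition to them.]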

Let’s not dwell on the vague weasel-word “association.” It means and explains nothing unless one provides a causal mechanism. There were things Bukfenc and Jeromos wanted: to go for a walk, to know who would take them, and where. They cannot ask, because they cannot speak (and not, I hope we agree, because they cannot vocalize). They lack the capacity to formulate a proposition, which, if they had that capacity, would also be the capacity to formulate any proposition (because of the formal and recursive re-combinatory nature of subject/predication), and eventually to discover a way to fly to the moon (or to annihilate the earth). Any proposition can be turned into a question (and vice versa): (P) “We are going out now.” ==> (Q) “We are going out now?” By the same token, it can be turned into a request (or demand): (P) “We are going out now.” ==> (R) “We are going out now!”

            My reply is the same for all the other points (which I append in English at the end of this reply). I think you are completely right in your interpretation and description of what each of the dogs wanted, knew, and wanted to know. But that was all about information and uncertainty. It can be described, in words, by us. But it is not a translation of propositions in the dogs’ minds, because there are no propositions in the dogs’ minds.

            You closed with: 

            “The main problem is that the study of language comprehension in dogs has not even begun. I think that language is a product of culture and that propositions are not born from some kind of grammatical rule, but rather an important learned element of group behavior, which is demonstrated by the fact that it is not only through language that propositions can be expressed, at least in the case of humans.”

I don’t think language is just a cultural invention; I think it is an evolutionary adaptation, with genes and brain modifications that occurred 300,000 years ago, but only in our species. What evolved is what philosophers have dubbed the “propositional attitude” or the disposition to perceive and understand and describe states of affairs in formal subject/predicate terms. It is this disposition that our language-evolved brains are displaying in how we irresistibly describe and conceive nonhuman animal thinking in propositional terms. But propositions are universal, and reciprocal: And propositionality is a mirror-function, with both a productive and receptive aspect. And if you have it for thinking that “the cat is on the mat” you have it, potentially, both comprehensively and productively, for every other potential proposition — all the way up to E = mc². And that propositional potential is clearly there in every neurotypical human baby that is born with our current genome. The potential expresses itself with minimal need for help from us. But it has never yet emerged from any other species — not even in apes, in the gestural modality, and with a lot of coaxing and training. (I doubt, by the way, that propositionality is merely or mostly a syntactic capacity: it is a semantic capacity if ever there was one.)

            There is an alternative possibility, however (and I am pretty sure that I came to this under the influence of Vili): It is possible that propositionality is not a cognitive capacity that our species has and that all other species lack. It could be a motivational disposition, of the kind that induces newborn ducklings to follow and imprint on their mothers. Human children have a compulsion to babble, and imitate speech, and eventually, in the “naming explosion,” to learn the (arbitrary) names of the sensorimotor categories they have already learned. (Deaf children have the same compulsion, but in the gestural modality; oral language has some practical advantages, but gestural language is every bit as propositional as oral language, and has the full power of Katz’s effability.)

Could the genes we have that other species lack be mostly motivational, driving the linguistic curiosity and linguistic compulsion that’s there in human babies and not in baby chimps? (I say “linguistic” c & c, because other species certainly have plenty of sensorimotor c & c.)

Ölel, István

            _______________

“When I work upstairs in our house in Almádi, Janka lies quietly on the ground floor. When Éva leaves and comes back from somewhere, Janka emits a single characteristic squeal, which can be intended for me, because if I don’t react, she comes up and barks, calling me.” István, is this a proposition or an association?

“In Almádi, our next-door neighbor came over with his little Bolognese dog named Tücsi, who didn’t come into the garden and stayed waiting at the gate for his owner, with whom we were talking inside the house. Our dog Bukfenc periodically went down to play with Tücsi. After about 10 minutes, Bukfenc came up and turned toward the neighbor and barked at him. Everyone stirred. Bukfenc went straight down the stairs to the gate, followed by the neighbor. Tücsi had disappeared; as it turned out, he had gone home and Bukfenc was reporting this to the neighbor.” István, is this a proposition or an association?

“During the time of Bukfenc and Jeromos, I woke up at 3 a.m. to very soft grunting. Bukfenc was grunting very softly and together with Jeromos, they were standing next to my bed. I only opened my eyes a crack, pretending to be asleep. Bukfenc growled softly again; I didn’t react. Jeromos gave a loud squeal. I got up and told them, come on Jeromos, it seems you have something urgent to do. To my surprise, Jeromos went to his bed and lay down, and Bukfenc ran screaming towards the door. He managed to get to the street in time: he had diarrhea.” István, is Jeromos’s barking a proposition or an association?

            Socratic Dialogue with ChatGPT4+ on Cartesian Certainty

            SH: Do you remember who I am?

            GPT4+: Yes, Professor Harnad, I remember who you are. How can I assist you further today?

            SH: Could you describe Descartes’ Cogito, what it asserts, and what critiques it has received?

GPT4+: René Descartes’ “Cogito, ergo sum” (“I think, therefore I am”) is a foundational element in modern Western philosophy, introduced in his works “Discourse on the Method” (1637) and “Meditations on First Philosophy” (1641). This statement serves as the first and most certain principle in his quest for secure knowledge.

            What the Cogito Asserts

            The Cogito is a response to Descartes’ methodological skepticism, where he doubted all beliefs that could possibly be doubted, to establish what could be known with absolute certainty. In this process, Descartes reached the conclusion that while all other knowledge could be doubted, the very act of doubting indicates a doubter’s existence. Thus, if he is thinking, then he must exist. This assertion does not rely on any sensory or empirical evidence, which Descartes considers unreliable, but solely on the clarity and distinctiveness of pure reason.

            Critiques of the Cogito

1. Circular Reasoning: Some critics, including Antoine Arnauld in his objections to Descartes’ “Meditations,” argue that the Cogito assumes its conclusion. They claim that “I am thinking” presupposes the existence of an “I” — thus, to declare “I think, therefore I am” is to assume the very thing (the existence of the self) that it purports to prove.

            2. Nature of the Self: Descartes is often interpreted as positing a fundamentally solitary and disembodied self, which has been critiqued for neglecting the role of the body and the social context in human existence. Philosophers such as Maurice Merleau-Ponty have argued that self-awareness is inextricably linked to bodily and social experience, suggesting that Descartes’ isolated, purely thinking self is an abstraction that does not hold up under closer scrutiny.

            3. Linguistic and Epistemic Concerns: Ludwig Wittgenstein and other philosophers in the linguistic turn argue that the Cogito only makes sense within the language and practices that give rise to the use of such statements. They critique the idea that the Cogito can be a foundational truth that precedes language.

4. Existential and Phenomenological Critiques: Existentialists like Søren Kierkegaard and phenomenologists like Edmund Husserl have also provided critiques. Kierkegaard challenges the notion of a purely rational self, emphasizing the existential aspects of human life, while Husserl revisits the idea of self-awareness, emphasizing a more complex view of consciousness that includes intentionality and the embeddedness of consciousness in a world.

            5. Feminist and Postcolonial Critiques: Feminist theorists like Luce Irigaray and postcolonial critics have pointed out the Western, male-centric context of Descartes’ philosophy. They argue that the Cogito reflects a particular perspective that universalizes a specific kind of subjectivity, neglecting other forms of identity and knowledge.

            Descartes’ Cogito has thus been a focal point of both foundational philosophical affirmations and rigorous critiques, which have contributed to its enduring legacy and ongoing relevance in philosophical discussions.

            SH: Let’s just focus on certainty and doubt. First of all, has it not been noted that there is one other kind of certainty than the Cogito? Can we doubt that a formal contradiction (P and not-P) cannot be true?

            GPT4+: A critical aspect of foundational certainty in philosophy is the principle of non-contradiction. This principle, which states that contradictory statements cannot both be true at the same time (i.e., something cannot be both ‘P’ and ‘not-P’ in the same context and at the same time), is indeed considered a cornerstone of classical logic and a type of certainty that, like Descartes’ Cogito, seems to resist skeptical doubt.

            Principle of Non-Contradiction

            Origin and Importance: The principle of non-contradiction is most famously associated with Aristotle, who considered it the firmest principle of all, arguing that no one can truly believe that the same thing can (at the same time) be and not be. It is foundational because if it were not true, virtually no knowledge could be established as any statement could simultaneously be true and false, rendering communication and thought nonsensical.

            Relationship to the Cogito

Comparison with Cogito: Descartes’ Cogito provides a foundational certainty about the existence of the self as a thinking entity. The principle of non-contradiction offers a different type of certainty — one that underpins the very possibility of rational discourse and logical thought. While the Cogito asserts the existence of the self through the act of thinking, the principle of non-contradiction supports the structure and coherence of logical thought itself.

            Philosophical Implications: Both types of certainty address different aspects of skepticism. Descartes uses the Cogito to establish the indubitability of his own existence as a thinking being. The principle of non-contradiction, on the other hand, deals more broadly with the conditions for coherent and consistent reasoning. Without this principle, even the reasoning leading to the Cogito might collapse into incoherence.

            Doubt and Certainty

            Doubt of Non-Contradiction? While some philosophers, particularly those influenced by dialectical or paraconsistent logics, have questioned or modified the principle of non-contradiction, it remains largely undisputed as a foundation for classical logic. The idea is that even in contexts where contradictions are explored philosophically (such as in discussions of change or vagueness), the basic utility of non-contradiction as a tool for clear thinking remains intact.

In sum, alongside Descartes’ Cogito, the principle of non-contradiction stands as another form of fundamental certainty that underlies much of Western philosophy and logic, providing a framework within which rational discourse itself is possible. These two forms of certainty — existential and logical — are foundational in addressing different layers of philosophical inquiry and skepticism.
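[Illustration, not part of the exchange: the formal certainty at issue can be displayed exhaustively. The two-row truth table shows that P ∧ ¬P is false under every assignment, so its negation, the law of non-contradiction, is a tautology; the same fact is a one-line proof in Lean 4 syntax.]

P     ¬P    P ∧ ¬P
T     F     F
F     T     F

theorem no_contradiction (P : Prop) : ¬ (P ∧ ¬ P) := fun h => h.2 h.1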

            SH: Now the Cogito. “I am thinking therefore I exist”. 

            How do I know I am thinking? Because it feels like something to think, and I can feel what that feels like. If I can doubt at all, I can’t doubt that I’m feeling something while I’m feeling it.

            But feeling is feeling. A-theoretically. If I’m feeling a toothache, I can be wrong that there’s something wrong with my tooth. It could be referred pain from my jaw. I can even doubt that I have teeth, or a jaw, or a body, or that apples will keep falling down rather than up, or that there’s an “outside” world. Or a tomorrow; or even that — and what — I was feeling an instant ago. That’s all open to classical scepticism. It could be true; it is probably true; yet none of it is certain.

            But I can’t doubt that I’m feeling what I’m feeling, while I’m feeling it. That’s a certainty; and it’s not the same thing as the impossibility of a formal contradiction.

            But there are still two things wrapped into the certainty of the Cogito:

            (1) “I”: That’s an abstract, theory-dependent notion. There is lots of disagreement (hence uncertainty) about it.

            (2) “exist”: That too is an abstract, theory-dependent notion. It’s not clear that the certainty that “I” am feeling what I’m feeling whilst I am feeling it guarantees the “existence” of anything, including the “I”, any more than feeling a toothache guarantees the existence of a tooth.

            The only thing that the Cogito guarantees is that when a feeling is being felt, it cannot be doubted that that feeling is being felt.

            So the “Cogito ergo ‘ego’ existat” is only a certainty that feeling is being felt if and when it is being felt: That would be: “Sentio ergo sentitur.”

            No guarantees, either about “I” or about existence.

            Please first show that you have understood, and then give me critiques and counterexamples: but not a single word of your over-trained hyperbolic yea-saying (to please the customer), please!

            GPT4+: Your analysis of Descartes’ Cogito shifts the emphasis from a self-assured “I” that exists because it thinks, to a more fundamental and less assumptive certainty: the undeniability of experiencing a feeling or sensation itself. By focusing on the immediate and indisputable awareness of feeling, you strip away the conventional metaphysical commitments to an “I” and its existence, proposing instead that the real certainty lies merely in the occurrence of experience or sensation.

            Understanding the Focus on Sensation

            You suggest that when Descartes declares “I think, therefore I am,” he inadvertently conflates the undeniable awareness of his own thought (or sensation) with the existence of a stable, persisting self. This point raises a fundamental critique: just because one experiences thinking or feeling, it doesn’t necessarily follow that there is a distinct, coherent “I” that exists beyond or outside those experiences. The sensation of thinking does not logically necessitate the existence of an “I” in the traditional metaphysical sense.

            The Abstract Nature of “I” and “Exist”

            “I”: You correctly note that “I” is a loaded, theory-dependent construct. Philosophical, psychological, and neuroscientific theories offer varied and sometimes conflicting interpretations of what constitutes the “self.” Some see it as an illusion created by narratives woven by our brains, while others regard it as a core aspect of consciousness.

            “Exist”: The term “exist” also carries metaphysical weight. Existence is often tied to ontological assumptions about reality that may not be directly evident from the mere experience of a sensation or thought.

            Simplified Reformulation: “Sentio ergo sentitur”

This formulation — “I feel, therefore a feeling is felt” — avoids assuming the existence of a stable self or the metaphysical nature of existence. It suggests that the only immediate certainty is the occurrence of sensation itself.

            Critiques and Counterexamples

            1. Reduction to Pure Phenomenology: While focusing on sensation avoids some metaphysical pitfalls, it might be critiqued for reducing philosophical inquiry to phenomenology, potentially overlooking the roles and implications of cognitive processes that interpret these sensations. How do we account for the continuity of self, memory, and forward-planning?

            2. Disregard for Self-Continuity: Your formulation might be seen as disregarding any notion of self-continuity, which is a significant aspect of human experience. Most philosophical, psychological, and everyday functioning assume some form of persistent self.

            3. Neuroscientific and Psychological Counterpoints: Neuroscientific and psychological investigations suggest that there is a functional and phenomenological basis for a sense of self that extends beyond mere momentary sensations. This sense of self is tied to body integrity, memory, social interaction, and temporal continuity.

4. Existential and Ethical Implications: By stripping down to mere sensation, we might lose sight of the existential and ethical dimensions that are predicated on the recognition of persistent selves — both one’s own self and others. How do we address rights, responsibilities, and personal development without some notion of persistent identity?

            Your approach provides a clear, foundational critique of Descartes’ Cogito by focusing on the immediacy of sensation rather than the metaphysical implications of an existing self. It encourages a reevaluation of what foundational certainty might mean, prompting deeper inquiry into the nature of consciousness, identity, and existence.

            SH: All those points are valid and important. But they are not about certainty. They are about dentistry. Do you understand that?

            GPT4+: Yes, I understand the analogy you’re making. Your point is that the discussions about the self, existence, and the implications of such notionsā€”while significantā€”are not addressing the core issue of what can be known with absolute certainty. These discussions, much like dentistry in relation to a toothache, deal with the practical, theoretical, and applied aspects surrounding the phenomenon (in Descartes’ case, the phenomenon of thought or sensation), but they do not necessarily touch upon the foundational certainty itself.

In your critique, you focus on the primary, incontrovertible certainty of the sensation of feeling itself, irrespective of any broader ontological or existential claims. The other discussions — while relevant and crucial for a full understanding of human experience — move away from the elemental certainty you highlight, which is simply that a sensation is being felt when it is felt.

            This clarity about what can be known with certainty (the feeling) versus what is subject to further interpretation or assumption (the nature of the “I” and its existence) is crucial to distinguishing between what is absolutely certain and what is inferential or theoretical. Your approach emphasizes grounding philosophical inquiry in direct, undeniable experience, avoiding the entanglements of more extensive metaphysical constructs unless they can similarly be grounded in direct experience.

            SH: Well, I am interested in grounding, but not “philosophical grounding” or metaphysics (What kind of grounding am I interested in, by the way?) Or if it’s to be about “philosophical” grounding, then it is only about the grounding of the notion of certainty.

            GPT4+: Memory updated

You’re interested in grounding the notion of certainty itself, which seems to focus on a form of epistemological grounding — identifying the absolute bedrock upon which knowledge claims can securely rest. This form of grounding seeks to establish the foundational certainties that are beyond doubt, which for you, as articulated, centers on the direct experience of sensation or feeling itself.

            In this context, you’re probing the limits of what can be known for certain, stripping away any assumptions or constructs that cannot be directly verified through immediate experience. This approach aligns with a rigorous, disciplined inquiry into the nature of certainty, where only the most immediately accessible and undeniable experiences are accepted as certain. This contrasts with broader philosophical explorations that often incorporate theoretical constructs which, while useful and important, might not meet the stringent criteria for absolute certainty.

SH: This time you’re completely wrong! Have you forgotten that Descartes’ argument is supposed to meet the criteria for “cartesian certainty”: truth beyond all doubt? (You said you knew who I am [I’m self-identified]: Can you recall what kind of grounding it is that I’m really interested in?)

            GPT4+:

            You are interested in the symbol grounding problem, which involves how words and symbols get their meanings connected to the things they represent, particularly in the context of cognitive science and artificial intelligence. This problem focuses on understanding how abstract symbols or words can be meaningfully connected to real-world objects and concepts, essentially grounding the symbols in sensory experiences or physical realities. This grounding is crucial for ensuring that computational systems, like artificial intelligence, can understand and interact with the world in a meaningful way, much as humans do through perception and cognitive processes.

SH: That’s better (though your “meaningfulness” is a weasel-word here). Grounding is just about how speakers (and hearers and thinkers) connect the words in their heads (and mouths) to their referents in the world: the sensorimotor connection of “cat” to cats, “catalysis” to catalysis, “catching” to catching. That’s neither about metaphysics, nor about epiphanies.
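[Illustration, not part of the exchange: a hypothetical toy sketch, in Python, of what “connecting a word to its referents” can mean at the simplest level. The feature vectors stand in for sensorimotor input and the numbers are invented; the point is only that the label “cat” ends up attached to a learned sensorimotor category rather than defined in terms of further ungrounded symbols.]

# Hypothetical sketch: grounding a label in (toy) sensor data via learned prototypes.
def learn_category(labelled_samples):
    """Store a prototype (mean feature vector) per label from labelled sensor data."""
    sums, counts = {}, {}
    for label, features in labelled_samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        sums[label] = [a + f for a, f in zip(acc, features)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def categorize(prototypes, features):
    """Name a new sensory input by its nearest stored prototype."""
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, features))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

prototypes = learn_category([("cat", [0.9, 0.1]), ("cat", [0.8, 0.2]),
                             ("dog", [0.2, 0.9]), ("dog", [0.1, 0.8])])
print(categorize(prototypes, [0.85, 0.15]))   # -> "cat"

A new input is named by the category it falls into, so the connection between the word and its referents runs through the system’s own (toy) sensor data rather than through other symbols.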

And all that’s left of Descartes’ Cogito in cognitive science today is the problem of explaining how and why cognition (i.e., thinking) feels like something. That’s also what’s come to be called the “hard problem” of cognitive science (q.v.)…