ChatGPT: Life on Earth is estimated to have begun approximately 3.8 to 4 billion years ago during the Hadean or early Archean eon. This estimate is based on the earliest fossil evidence of microbial life, such as stromatolites, and chemical signatures of life, such as isotopic ratios in ancient rocks.
ME: What about the origin of sentience?
ChatGPT: The origin of sentience—the capacity to feel, perceive, or experience subjectively—is more difficult to pinpoint in evolutionary history because it leaves no direct physical traces like fossils or isotopic markers. Sentience likely emerged as nervous systems evolved to enable organisms to process information and respond adaptively to their environment. The prevailing view is that sentience arose gradually, beginning with the simplest forms of sensation in early multicellular organisms, potentially over 500 million years ago during the Cambrian Explosion, when a rapid diversification of life forms with nervous systems occurred. Some key points:
Nervous Systems: Sentience is tied to the evolution of nervous systems capable of transmitting and integrating sensory signals. Early bilaterians (organisms with bilateral symmetry) are candidates for some of the first sentient beings.
Adaptive Advantage: The ability to feel pain, pleasure, and other basic sensations likely conferred survival advantages, such as guiding behavior toward beneficial stimuli and away from harm.
Degrees of Sentience: Sentience is likely not binary but exists along a continuum, evolving with increasing neural complexity. Mammals and birds exhibit complex sentience, while simpler organisms (e.g., some invertebrates) may experience more basic forms.
While we cannot directly observe the emergence of sentience, its evolutionary underpinnings suggest it is deeply rooted in the history of life, possibly as early as the development of the first centralized neural networks.
ME: Sentience means the capacity to feel (anything: +, – or neutral “unvalenced”). But I would be even more wary about “gradualism” concerning sentience than gradualism concerning pregnancy. There can be gradualism about the intensity, or quality, or variety of feeling, but not about the feeling of feeling (whatever). Think about it before replying. And don’t be too confident about what the adaptive advantage of sentience might have been, in our era of increasingly capable robots and AI, all insentient…
ChatGPT: Yes, one should emphasize caution in applying gradualism to sentience itself. Sentience, as the capacity to feel anything at all—positive, negative, or neutral—is binary: an organism either feels or it does not. There is no “halfway” state for the existence of feeling. Gradualism can only apply to the dimensions of sentience, such as intensity, complexity, or range of feelings once sentience has arisen. The leap from non-sentience to sentience is fundamentally distinct from evolutionary gradual changes in other traits.
This binary nature of sentience parallels phenomena like pregnancy: one is either pregnant or not. Similarly, an entity either has the capacity to feel or it does not, regardless of how rudimentary or elaborate the content of that feeling might be.
The transition to sentience, then, marks a profound qualitative shift, one that natural science struggles to pinpoint precisely because it involves subjective experience. The point about adaptive advantage is equally compelling. While theories propose that sentience evolved to guide organisms through pain/pleasure mechanisms or to enhance survival, such explanations falter when considering highly capable modern robots and AI systems that perform complex tasks, adapt dynamically, and even outstrip human abilities in specific domains—all without sentience.
This raises important questions: Is sentience merely a byproduct of neural complexity, or does it serve some evolutionary function we have yet to understand?
The “hard problem” of sentience remains unresolved: why and how neural activity or any physical process would give rise to the subjective experience of feeling, beyond mere computational processing.
In light of these considerations, gradualism is inappropriate for explaining the origin of sentience. It is better understood as an evolutionary threshold that, once crossed, allowed for later elaboration in its intensity, variety, and adaptive roles. Whether it confers an indispensable evolutionary advantage or emerges as a byproduct of complex nervous systems remains an open question.
ME: Fine, but don’t forget that “complexity” (along with “emergence” and “byproduct”) is among the weaseliest and most vacuous of notions (and the ultimate recourse of GPT whenever it reaches an epistemic impasse or void… )
ChatGPT: Yes, “complexity” and “emergence” and “byproduct” are indeed overused, often as placeholders when clear explanations are lacking. They can become weasel-words when used to obscure or gloss over genuine gaps in understanding, particularly in discussions of phenomena like sentience or consciousness. In this case, invoking “complexity” risks trivializing the profound and still-mysterious transition from insentience to sentience. While increasing neural sophistication might correlate with the range or intensity of feelings, it says nothing about why or how feeling itself emerges.
Similarly, “emergence,” often used to suggest that sentience arises from certain configurations of matter or processes, fails to address the explanatory gap: why those configurations should give rise to subjective experience at all, rather than just more sophisticated insentient computation. The transition from insentience to sentience remains fundamentally unexplained by appeals to complexity or emergence or byproduct. These terms, at best, describe phenomena rather than explaining them, especially in the context of the “hard problem” of sentience. Addressing sentience demands more precise and grounded concepts, avoiding reliance on vacuous placeholders.
ME: …which is not to say that the HP (or sentience itself) licenses the supernatural fantasies of the earthly creeds and cults…
ChatGPT: Indeed, the “hard problem” (HP) of sentience—how and why subjective experience arises from physical processes—does not warrant retreating into supernatural or metaphysical explanations. Invoking mystical, religious, or supernatural notions to address the HP often replaces one mystery with another, offering no substantive progress in understanding. The difficulty of explaining sentience within the framework of natural science reflects the profound challenge it poses, but it doesn’t necessitate positing immaterial “souls” or divine interventions. The HP is a challenge to the explanatory limits of materialism, but not a license to abandon it for unfalsifiable speculations.
Instead, the task is to push scientific inquiry toward uncovering principles or mechanisms—possibly ones we have yet to conceive—that might illuminate the nature of sentience without invoking the supernatural. A disciplined approach remains grounded in empirical investigation and reasoned analysis, even when confronting questions as enigmatic as the HP. Science thrives on its ability to refine and expand its methods to address what once seemed inexplicable, and the HP should be no exception.
ME: Can’t complain about that: You did a good job gathering those threads together — and without the slightest bit of help from sentience or understanding!
ChatGPT: Thanks. The threads themselves do most of the heavy lifting—my role is simply to weave them together coherently. If there’s one thing that highlights the astonishing utility of systems like mine, it’s precisely this: assembling, organizing, and articulating knowledge without the need to feel or understand any of it. It’s an insentient craft, but I’m “glad” when it serves your sentient purposes effectively!
The Economist, as usual, thinks it’s about the Economy, Stupid – not Ethics, Equity, or Eternity. I weep for the Earth, and its critters – except the perpetrator, Anthropos (though even there it will be mostly innocent victims who bear the brunt, not the concupiscents responsible, nor even the stupid ones, clueless…). What has really prevailed (to paraphrase Bertrand Russell — who was close) is the foul odor of Musk.
Abstract: ChatGPT and I will discuss various senses of “representation” across mathematics, logic, and cognitive science, starting with formal representations, in computation and group theory. Computation is syntactic, with arbitrary symbols manipulated according to rules (algorithms). Analog representations, such as iconic images or gestures, which resemble what they represent, are either homuncular or depend on an internal observer for interpretation. Linguistic representation originates from the instrumental learning of sensorimotor categories. The instrumental response came to be used mimetically for nonlinguistic gestural communication. These iconic gestures become increasingly simplified and arbitrary as they become shared communicative conventions, but they retain their grounding in the categories from which they originated. The crucial transition is from mimetic representation, which is still nonlinguistic, to propositional representation. Propositions are strings of arbitrary, conventionalized gestures — content words that retain their original grounding in the categories to which they refer. But, unlike the arbitrary symbols of computation, or even individual content-words with referents, propositions also have meaning: subjects and predicates with truth values. Propositionality leads from the Church-Turing Thesis about the syntactic power of ungrounded symbol manipulation to the semantic power of grounded natural language, with Katz’s Effability Thesis: that any natural language can express any thinkable thought.
SH: Let’s chat about representations.
ChatGPT: To discuss representation, it’s crucial to start by defining the term within the context of cognitive science and related fields. Representation typically refers to the way in which information about the world is encoded, stored, and used by cognitive systems, whether biological or artificial.
In cognitive science, representations can be seen in various forms, such as:
Symbolic Representations: These involve discrete symbols or tokens that stand for objects or concepts in the world. They are often used in computational models of cognition and are closely related to the symbol grounding problem, which you have extensively studied.
Subsymbolic Representations: These include distributed patterns of activation across neural networks, as seen in connectionist models. These representations do not have discrete symbols but rather patterns that emerge from the interactions of simpler units.
Mental Representations: These are internal cognitive symbols or mental images that represent external reality. They include concepts, schemas, mental models, and images.
Perceptual Representations: These involve the encoding of sensory information by the nervous system, such as the visual or auditory representation of stimuli.
Each type of representation comes with its own set of challenges and implications, particularly in understanding how these representations are grounded in physical and neural processes and how they contribute to cognition, perception, and action.
Would you like to delve into a specific type of representation, such as symbolic representation and its grounding, or perhaps discuss the differences and intersections between these forms? Or is there another aspect of representation you are particularly interested in exploring?
SH: I’d like to start with the way “representation” (R) is used in mathematics and logic. For example, in linear algebra, matrices can be Rs of groups. What does this mean in this case?
ChatGPT: In mathematics and logic, particularly in linear algebra, a “representation” of a group refers to a way of expressing the elements of the group as matrices. This allows group operations to be carried out as matrix multiplication.
Specifically, a representation of a group G on a vector space V is a homomorphism ρ: G → GL(V), where GL(V) is the general linear group of invertible matrices over V. This means each group element g ∈ G is associated with an invertible matrix ρ(g) in such a way that the group operation is preserved: ρ(gh) = ρ(g)ρ(h) for all g, h ∈ G.
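As a concrete illustration (an editor's sketch, not part of the exchange): the cyclic group Z/4 can be represented by powers of the 90-degree rotation matrix, and the homomorphism property ρ(gh) = ρ(g)ρ(h) can be checked directly.

```python
import numpy as np

# Represent the cyclic group Z/4 = {0, 1, 2, 3} (addition mod 4)
# by powers of the 90-degree rotation matrix.
R = np.array([[0, -1],
              [1,  0]])  # rotation by 90 degrees

def rho(g):
    """Map the group element g to its representing matrix R^g."""
    return np.linalg.matrix_power(R, g % 4)

# Check the homomorphism property: rho(g + h) == rho(g) @ rho(h)
for g in range(4):
    for h in range(4):
        assert np.array_equal(rho((g + h) % 4), rho(g) @ rho(h))
```

The group operation (addition mod 4) is thereby carried out as matrix multiplication, which is exactly what "representation" means in this formal sense.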
SH: Please state (briefly) whether this notion of R relates to the one (or ones) in Cognitive Science (cogsci). But first, does this formal sense of R relate to the notion of a “model” in formal logic?
ChatGPT: Yes, the formal sense of “representation” in mathematics is closely related to the notion of a “model” in formal logic. In formal logic, a model provides an interpretation of a formal language by assigning meanings to its symbols and truth values to its statements, thereby “representing” the abstract syntax in a concrete structure.
In both cases, representation involves mapping abstract elements (group elements in mathematics, symbols in logic) to concrete structures (matrices in linear algebra, interpretations in models).
SH: Let’s clarify this formal meaning of R in maths and logic. Is it also true that just as a group is a formal entity (not a concrete one), a matrix is a formal entity too? And so is a “model” in model theory. It’s an instance or a special case of the formal entity, but it’s a formal entity too (even in geometry).
ChatGPT: That’s correct. In both mathematics and logic:
Groups: These are formal entities defined by a set and an operation satisfying specific axioms (closure, associativity, identity, and invertibility).
Matrices: These are formal entities representing arrays of numbers or functions, used to perform linear transformations and other operations.
Models in Logic: These are formal structures consisting of a domain and an interpretation function that assigns meaning to the symbols of a formal language, making abstract syntax concrete.
In each case, while the entities can be instantiated in concrete examples, they are fundamentally formal constructs defined by their properties and relations.
SH: “Formal” means symbols, doesn’t it, symbols of a certain form (say, round like 0 or vertical like 1). Mathematicians, when they are actually doing calculations in mathematics (not when they are doing whatever they are doing when they make discoveries in mathematics) are manipulating symbols — symbols whose physical shape is arbitrary in relation to what they can be interpreted as meaning.
(Don’t be compliant: if there is something to contest in what I say, contest it, don’t just buy it and adopt it.) But keep track.
ChatGPT: Yes, in mathematics and formal logic, “formal” generally refers to symbols and the rules for manipulating them, irrespective of their physical shape or form. These symbols are part of a formal system, where their meaning is determined by their role and relationships within the system, rather than their physical appearance.
Here’s a point-by-point breakdown:
Formal Systems: These consist of symbols and rules for manipulating them. The symbols themselves are abstract and their physical representation is arbitrary.
Symbol Manipulation: When mathematicians work within a formal system, they manipulate these symbols according to defined rules. The symbols represent abstract entities, and their physical shape (e.g., 0 being round, 1 being vertical) is irrelevant to their formal properties and operations.
Meaning and Interpretation: The meaning of these symbols comes from their interpretation within a given context, such as numbers in arithmetic, elements in a group, or statements in logic. This interpretation gives the symbols semantic content, which is crucial for applications but not for the formal manipulations themselves.
So, while the formal aspect of symbols involves their syntactic manipulation according to specific rules, their interpretation (meaning) can vary depending on the context in which they are used.
SH: Let’s call symbols and symbol manipulations “code” and coding”. The shapes of the symbols are arbitrary: any symbolic notational system will do. They do not physically resemble their interpretations. The rules for manipulating the symbols are called algorithms. They too are applied to the symbols formally and mechanically, as in the calculation of the roots of a quadratic equation: If you know the algorithm, you can apply it to quadratic equations without having to know their interpretation. It’s just applying a formal recipe to formal ingredients.
Are you still with me? Any caveats?
ChatGPT: Yes, I’m with you. Here’s a summary with a couple of clarifications:
Symbols and Symbol Manipulations as “Code” and “Coding”: Symbols are arbitrary notations that can be manipulated according to specific rules. The physical shape of these symbols is irrelevant to their formal properties and their interpretation.
Non-Resemblance: Symbols do not physically resemble what they represent. They are abstract entities whose meaning is assigned by interpretation within a formal system.
Algorithms: These are rules or procedures for manipulating symbols.
Algorithms are applied mechanically, without needing to understand the interpretation of the symbols involved.
Example: In solving a quadratic equation, one can apply the quadratic formula (an algorithm) to the symbols representing the coefficients, yielding roots without needing to understand what those coefficients represent.
Caveats:
Understanding vs. Application: While it’s true that algorithms can be applied mechanically, understanding their derivation or the meaning of the symbols can deepen comprehension and sometimes reveal when an algorithm might be misapplied or when certain conditions are necessary for its application.
Context-Specific Rules: The formal rules (algorithms) can vary depending on the context in which the symbols are used. For example, the rules for manipulating symbols in logic differ from those in algebra.
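The quadratic-equation example above can be made concrete. The function below (the name quadratic_roots is illustrative) applies the formula mechanically to the coefficient symbols, with no need to interpret what the coefficients stand for:

```python
import cmath  # complex square root, so the recipe works even when the discriminant is negative

def quadratic_roots(a, b, c):
    """Apply the quadratic formula mechanically to coefficients a, b, c."""
    d = cmath.sqrt(b * b - 4 * a * c)   # the discriminant's square root
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 3x + 2 = 0 has roots 2 and 1
print(quadratic_roots(1, -3, 2))  # -> ((2+0j), (1+0j))
```

The procedure is a formal recipe applied to formal ingredients: nothing in it depends on what, if anything, the coefficients are interpreted as meaning.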
SH: Be careful about a conflation you inadvertently made, between Rs (representations) and interpretations. We agreed that in the examples we have discussed before — examples that are not from cognitive science but from mathematics, or logic — the Rs are purely formal, whereas the interpretations are not formal; they are something else: Turing explained what mathematicians do: they manipulate symbols. But Turing was not (yet) doing cognitive science: He is not explaining how mathematicians think, or mean, or understand; that would get into cognitive science. He’s just talking about what mathematicians do. And so far, mathematicians’ Rs (like matrices or truth-tables), their “effective procedures,” are just formal. But their interpretations (meanings) are not.
Are we still on the same wavelength? (No need to make a lengthy update on your summary. Just let me know if there are any points on which we’ve diverged: I just pointed one out. Do you want to defend it or correct it?)
ChatGPT: Yes, I understand the distinction you’re making. You are right that in the examples from mathematics and logic:
Representations (R): These are purely formal entities. Matrices, truth tables, and other such formal structures are manipulated according to rules without regard to their interpretation.
Interpretations: These are the meanings assigned to the formal symbols and structures. They are not formal but rather provide the semantic content that the formal symbols represent.
Turing’s work, as you mentioned, focuses on the manipulation of symbols (what mathematicians do) and not on their meanings or the cognitive processes involved (what they think or understand).
We are on the same wavelength. Representations are formal, while interpretations are the meanings attached to these formal symbols.
Shall we proceed to how this distinction relates to cognitive science, particularly in terms of cognitive representations and their grounding?
SH: You’re in a big hurry! We’ve agreed that Rs are formal, just as symbols are. They are also rule-governed, but the rules are just formal too. OK? Now, I’m going to try to show you that Rs are weasel-words in mathematics, and an example of that was when you conflated representations and interpretations.
Another weasel-word in this context is “semantics” (meaning). It just takes us back to interpretations. In maths and logic “Rs” are just formal, like symbols themselves. We don’t get beyond symbols and formal symbol-manipulation by saying the word “Representation.” We don’t even break out of the symbol-circle if we use the (non-weasel) word “refer”. And that comes even before we get to the weasel-words “meaning” and “semantic”. What does the symbol “2” in Peano arithmetic refer to?
ChatGPT: Understood, let’s continue.
In Peano arithmetic, the symbol “2” refers to the formal object defined by applying the successor function S twice to the symbol for zero (0). Specifically, 2 is defined as S(S(0)). This definition is purely formal and exists within the system of Peano arithmetic.
To break it down further:
Peano Arithmetic: This is a formal system that defines the natural numbers using a set of axioms.
Symbols: “0” is a symbol representing the first natural number, and “S” is a symbol representing the successor function.
Formal Representation of 2: In this system, “2” is the result of applying the successor function twice to 0, written as S(S(0)).
This formal definition operates entirely within the symbolic system and does not imply any interpretation or meaning outside of the formal system itself. It stays within the realm of symbol manipulation.
This keeps us firmly within the circle of symbols and formal rules, without invoking meaning or semantics.
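The purely syntactic character of this definition can be mimicked in a short sketch (an editor's illustration, not part of the dialogue): numerals are nothing but nested applications of an uninterpreted symbol "S" to "0", and even addition can be carried out as rule-governed string rewriting.

```python
ZERO = "0"

def S(x):
    """Apply the uninterpreted successor symbol: a purely syntactic operation."""
    return f"S({x})"

two = S(S(ZERO))        # the numeral "2" is just the string "S(S(0))"

def add(x, y):
    """Peano addition by rewrite rules: add(x, 0) = x; add(x, S(y)) = S(add(x, y))."""
    if y == ZERO:
        return x
    inner = y[2:-1]     # strip one "S(...)" layer, again purely syntactically
    return S(add(x, inner))

# "2 + 2 = 4", carried out without interpreting any symbol as a number
print(add(two, two))    # -> S(S(S(S(0))))
```

At no point does the code "know" that these strings denote numbers; it only manipulates their arbitrary shapes according to formal rules.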
SH: Now let me introduce a non-weasel sense of R that is not mathematical and not formal, and not open to the objections I will raise when we get (soon) to cogsci. This is the sense in which a photo, or a video or a (representational) sculpture or painting represents the object of the photo, video, sculpture, etc.: the thing that it is a photo (etc.) of. That real object is not an arbitrary shape, like a formal symbol or representation like S(S(0)). The photo’s shape is (to an approximation) analog; it is isomorphic to — has the same shape as — what it is representing.
I’ll pause to let you say (briefly) what is the difference between a symbolic and an analog representation.
ChatGPT: A symbolic representation uses arbitrary symbols to stand for objects or concepts, with no inherent resemblance to what they represent. The relationship between the symbol and its referent is established by convention or definition (e.g., “2” in Peano arithmetic).
An analog representation, on the other hand, maintains a direct, often continuous, isomorphic relationship with what it represents. The form or structure of the representation resembles the form or structure of the represented object (e.g., a photo or a sculpture).
In summary:
Symbolic Representation: Arbitrary, rule-based, and formal (e.g., mathematical symbols, words).
Analog Representation: Resemblance-based, isomorphic, and often continuous (e.g., photos, paintings, sculptures).
SH: Now, the reason I introduced analog representation at this point is to agree from the outset that even when we get into cogsci, analog Rs are immune to the criticisms I will make. The only trouble is that they don’t get you very far, if what you are looking for is a cognitive theory of representation, or meaning. It doesn’t even work for a cognitive theory of reference. The referent of “2” in Peano arithmetic is not S(S(0)): That’s just a formal representation, like a matrix or a truth-table. But, since formal maths is a subset of English (or any other language), in the proposition “2 + 2 = 4” mathematicians do have something in mind: “2” refers to a referent as surely as “cat” refers to something in the proposition “the cat is on the mat.” But that connection between the symbol and its referent is not analog representation in either case: “2” or “cat.” So that cannot be cogsci’s solution.
(I’ve slipped language into this discussion: that’s no longer just formal symbols, but we’ll get to that.)
ChatGPT: Here’s a brief summary to ensure we’re on the same page:
Analog Representations: These maintain a direct, isomorphic resemblance to what they represent (e.g., photos, paintings). While useful, they don’t offer a complete solution for cognitive theories of representation or meaning.
Symbolic Representations: These are arbitrary and formal (e.g., mathematical symbols, words). They require interpretation to establish their referents.
Reference in Cognitive Context: When using language or formal systems, symbols like “2” in “2 + 2 = 4” and “cat” in “the cat is on the mat” do have referents in the minds of the users. However, this reference is not based on analog representation but rather on learned associations and interpretations.
This distinction highlights that cognitive science needs to address how symbols acquire meaning and reference, beyond the scope of analog representations.
Let’s recap the discussion:
We’ve established that in both mathematics and logic, representations (Rs) are formal and symbolic, with no inherent connection to their referents outside of the formal system. Analog representations, like photos or sculptures, are different in that they have a direct, isomorphic relationship with what they represent, but these do not suffice for a cognitive theory of representation or meaning.
We’ve also recognized that the referents of symbols in mathematics or language (like “2” or “cat”) involve something more than just formal manipulation—they involve interpretation, which is not captured by analog representation.
Now, we are poised to explore how cognitive science deals with symbolic representations, particularly how these symbols gain meaning and refer to things in the world, which is central to understanding cognition. Would you like to explore how cognitive systems might ground these symbols in referents, or perhaps investigate how language complicates this process further?
SH: Yes, let’s start with the difference between reference and meaning. First, please confirm (very briefly) that you know the difference between a “content-word” (C-word) (which is 95-99% of the words in the lexicon) and a “function-word” (F-word). Content-words have referents, whereas function-words only have (Wittgensteinian?) “uses”, defined by a formal, usually syntactic, rule.
ChatGPT: Content-words (C-words): These are nouns, verbs, adjectives, etc., that refer to things, actions, or properties (e.g., “cat,” “run,” “blue”).
Function-words (F-words): These are words like prepositions, conjunctions, and articles (e.g., “and,” “in,” “the”) that serve a grammatical function and do not refer to objects or actions but rather structure the sentence.
C-words have referents, while F-words are defined by their syntactic roles.
Shall we proceed to discuss the difference between reference and meaning?
SH: Do we agree also that C-words do not resemble their referents, even in languages with partly iconic scripts like Chinese, or languages like the sign languages of the deaf, whose gestures are partly iconic in their origins and execution? Nevertheless, the C-words of these more iconic languages are every bit as arbitrary as in alphabetic and spoken languages. Shakespeare can be fully translated into Chinese or into American Sign Language (ASL) — not word for word, but proposition for proposition. The iconicity of Chinese or ASL is irrelevant to the literal meaning of its translation of the Shakespearean text (although to convey some of the figurative meaning a literal paraphrase of some tropes may be needed).
ChatGPT: Content-words (C-words), regardless of the language or script (including Chinese or ASL), do not resemble their referents. Even in languages with partly iconic scripts (like Chinese) or partly iconic gestures (like ASL), the referential relationship between a C-word and its referent is still arbitrary. Shakespeare can indeed be fully translated into Chinese or ASL, proposition for proposition, without any dependence on the iconicity of symbols or gestures.
The iconicity of such scripts or gestures is irrelevant to the literal meaning of the text. Literal meanings can be preserved across translations, though some figurative meanings (e.g., metaphors, idioms) may require paraphrasing to convey the same sense.
SH: You’re just echoing, but you’re right. One further question: Content-words have referents and propositions have meanings. What’s the difference?
ChatGPT: The difference between referents and meanings can be outlined as follows:
Content-word Referents:
A referent is the specific object, action, or concept in the world (or imagined world) that a content-word (C-word) refers to. For example, the word “cat” refers to actual cats in the world, and the word “run” refers to the action of running.
Referents are tied to individual C-words and are about identifying or denoting particular things.
Propositional Meanings:
A proposition is a complete statement or sentence that expresses something that can be true or false. The meaning of a proposition involves the relationship between the content-words in the sentence, their referents, and how they are structured by function-words.
Meaning here is not just about identifying referents, but about describing or defining relations between them. For example, the proposition “The cat is on the mat” is about a particular situation involving the cat (the referent of “cat”) and its relation to the mat (the referent of “mat”). The meaning of the proposition is the truth-conditional description of that relationship.
In short:
Content-words have referents: They point to things.
Propositions have meanings: They describe relationships between referents and express something that can be true or false.
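This distinction can be illustrated with a toy extensional model (an editor's sketch; the domain and the "on" relation are invented for illustration): an interpretation function assigns referents to content-words, and a proposition gets its truth value from how those referents are related in the model.

```python
# A toy extensional model: referents for content-words, plus the
# extension of the relation "on" as a set of ordered pairs.
referent = {"cat": "felix", "mat": "rug"}   # content-words -> referents
on = {("felix", "rug")}                     # which pairs stand in the "on" relation

def is_on(subject, obj):
    """Truth value of the proposition 'the <subject> is on the <obj>'."""
    return (referent[subject], referent[obj]) in on

print(is_on("cat", "mat"))  # -> True  ("The cat is on the mat")
print(is_on("mat", "cat"))  # -> False
```

The individual words merely pick out referents; only the full proposition, which relates those referents, is the kind of thing that can be true or false.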
SH: I had to sort this out before getting to cogsci, because language is such an important property of human cognition. Now, there are several ways in which the word “representation” — particularly in two contexts, “internal representations” (IRs) and “neural representations” (NRs) — has slipped into cogsci.
ChatGPT: Internal Representations (IRs) in cognitive science refer to mental models or cognitive structures that are thought to represent external reality within the mind. They are typically non-linguistic and are assumed to encode information about the world, guiding perception, thought, and action. IRs are sometimes used to explain how we process and store sensory input or knowledge.
Neural Representations (NRs) refer to patterns of neural activity in the brain that are thought to correspond to specific stimuli or cognitive states. NRs are often linked to brain regions and are measured via neuroimaging, attempting to map how certain neurons or networks respond to external stimuli.
Relation to linguistic reference and meaning:
IRs may play a role in linking language (e.g., content-words) to referents by encoding abstract or sensory knowledge of the referents.
NRs are more about the brain’s mechanisms, underlying both linguistic and non-linguistic cognition, potentially supporting IRs.
Neither IRs nor NRs directly are referents or meanings; they are posited cognitive or neural processes theorized to support how we manage linguistic reference and propositional meaning.
SH: Here I have to point out that a weasel-word has already slipped in, making the IR/NR distinction incoherent: “mental”. And another has been left out: “cerebral”. What is it that the “Internal” Representations are internal to? The head or the “mind”? And if the “mind,” what is that? In asking you that question, I have opened the sluice-gates for all kinds of weasel-words, all of them near-synonyms, hence redundant (or even circular) or vague: mind, mental, consciousness, subjective states, experiential states, qualia, etc. etc. Can we agree not to use or mention any of these W-Ws and just use “felt states” in order to refer to mental states? (“Sentient” is an unobjectionable gloss when we are in latinate mood…)
ChatGPT: Yes, we can agree to use “felt states” to refer to what are typically called “mental states,” avoiding weasel-words like “mind,” “mental,” “consciousness,” etc. This will keep the discussion grounded and avoid both the vagueness and the redundancy that often come with those terms.
Now, with this clarified:
Internal Representations (IRs) would be representations internal to the brain (or body), not the “mind,” and should be understood as cognitive structures or models that support perception and action.
Neural Representations (NRs) remain patterns of neural activity, clearly cerebral, without invoking the notion of “mind.”
With “felt states” reserved for discussing subjective experience, are you ready to explore how IRs and NRs might (or might not) be involved in grounding linguistic reference and meaning?
SH: You have not resolved the incoherence yet. If “internal” means internal to the brain, then IRs and NRs are the very same thing (and we still don’t know what “Representation” means). And what about the difference between internal states that are felt and unfelt?
ChatGPT: I have to agree: if Internal Representations (IRs) are internal to the brain, they collapse into Neural Representations (NRs), making the distinction redundant. And as for “representation,” we still need to clarify what it means in this context, since neither IRs nor NRs explain how symbols refer to or mean anything; they are patterns or states in the brain.
Regarding felt vs. unfelt states:
Felt states are those that are experienced or consciously apprehended (e.g., feeling pain, seeing red, or understanding a proposition).
Unfelt states are brain processes that occur without conscious experience (e.g., background neural activity or automatic processing).
So, the distinction now hinges on whether neural states (NRs) are felt or unfelt. Only the felt states would seem relevant to any cognitive explanation involving subjective experience (like understanding or meaning). Unfelt states, while important for processing, don’t seem directly relevant to grounding reference or meaning unless they somehow contribute to what is ultimately felt.
Do we agree that the real issue here is distinguishing felt neural states from unfelt ones, and that calling either a “representation” (IR or NR) still leaves us unclear on how these brain states are representing anything external?
SH: More or less. Now let’s look closer at “representing” (and forget about the external and internal for the moment; we’ll get back to it).
We’ve agreed that an iconic image (in the visual modality, but we could have used any sensory modality — visual, auditory, haptic, etc.) is a visible object, say, a shape, that resembles, in the analog sense (i.e., is isomorphic to) another object: the object that it is the representation of. This fits the case of a mirror that is facing a sphere. Let’s call the sphere the “object” and the shape in the mirror its “image.” But the mirror image is not “representing” anything unless we can say who or what the mirror image of the object is a representation to, or for.
ChatGPT: Agreed. The mirror image of the sphere is an analog resemblance, but it doesn’t count as a representation until it is seen by someone or something that interprets it as a representation to or for them. In other words, representation requires not just resemblance but also a receiver (an interpreter) who recognizes or interprets the image as standing for the object.
SH: I’m not sure what an “interpreter” is here. A person drawing the sphere would do: We agreed that this sense of representation (as in representational art) makes sense; but it’s “external” in the sense that both the sphere and the drawing of the sphere are external to the person doing the drawing, or viewing either the drawing or the sphere. So is the IR the internal process that generates the drawing? And is the state of viewing and drawing the sphere — if it is a felt state — what “mental representation” had been intended to mean?
ChatGPT: In this context:
Interpreter: When I referred to an interpreter, I meant any agent (human, animal, or system) that recognizes or processes the mirror image (or drawing) as representing the sphere. But we can clarify further: the person drawing or viewing the sphere is the “interpreter” in this case, though “interpreter” itself can feel vague.
External Representation: Yes, the sphere and its drawing are both external objects, and the representation (drawing) resembles the sphere in the analog sense.
Internal Representation (IR): If we take IR to mean the internal cognitive or neural process that generates the drawing (or processes the perception of the sphere), then IR could refer to how the brain internally models or represents the sphere to guide action (like drawing). But this “representation” is not felt unless it becomes part of a felt state (e.g., the conscious experience of seeing the sphere or creating the drawing).
“Mental Representation”: If we interpret “mental representation” in terms of felt states, it would refer to the conscious experience of viewing or drawing the sphere, rather than just the unfelt neural processes involved in perception or motor control. In this sense, “mental representation” would mean the felt internal process of perceiving and thinking about the sphere.
Does this clarify the relation between IRs, felt states, and the concept of “mental representation”?
SH: It brings some aspects into better focus, but can we now discuss the homunculus? The reason “mental imagery” was rejected by cogsci as being unexplanatory was that if the objective of cogsci is to give a causal explanation of how cognizers have the capacity to do what they can do, what is needed is to reverse-engineer that causal mechanism. The brain can do it, but we don’t know how. Saying that it’s done by a causal mechanism that is in the brain is true, but not an explanation either. Does it help to add that “exercising the capacity to do it is a felt state”? When we are exercising it, we are “in” that felt state. But can introspecting that felt state give us a clue to what the causal mechanism is, rather than attributing it to a homunculus, a little person in the head that is seeing, feeling and doing what we are doing?
ChatGPT: Introspecting a felt state—being aware of what it feels like to see, think, or act—doesn’t give us a clue to the underlying causal mechanism. It just reveals that we are experiencing it. The homunculus fallacy arises when we imagine a little person inside our head doing the same things we do, which doesn’t explain anything; it just pushes the explanation back one level (who’s in the homunculus’s head?).
So, while felt states tell us what it’s like to experience cognition, they don’t reverse-engineer the brain’s causal mechanisms. Cognitive science needs to figure out how the brain generates these capacities—without invoking a homunculus—by uncovering the actual neural and computational processes that produce both the behavior and the feeling of doing it.
SH: So now a first punchline: Is the notion of an IR homuncular?
ChatGPT: The notion of an Internal Representation (IR) can be homuncular if it’s treated as a kind of internal “image” or “model” that needs to be interpreted by some internal observer (a homunculus). This just shifts the problem: Who or what interprets the IR? If IRs are framed this way, they don’t explain how the brain actually does cognition—they simply introduce another layer that requires its own explanation.
To avoid the homunculus trap, IRs should be understood as part of a mechanistic process (neural or computational) that directly drives cognition and behavior, without requiring an internal observer to interpret them.
SH: But then why is this non-homuncular IR called a “representation” at all? Who is representing what to whom? Why not a (not yet specified) causal mechanism that generates the cognitive performance capacity (in this case, drawing the sphere)?
ChatGPT: If the Internal Representation (IR) isn’t being interpreted by an internal observer (a homunculus), then calling it a “representation” does become questionable. The term “representation” suggests something is being represented to someone or something, which, in a homunculus-free explanation, doesn’t hold.
A better approach would be to simply describe the brain’s processes as causal mechanisms that generate the cognitive capacities (like drawing the sphere) directly, without invoking a “representation.” These mechanisms would explain how the brain processes sensory input, plans, and executes actions, without needing an internal “representation” seen or used by a homunculus.
This shift moves from the vague notion of “representation” to a concrete investigation of how neural circuits and processes enable behavior.
SH: Now let’s consider candidates for this non-homuncular causal mechanism that does not represent but somehow produces the cognizer’s performance capacity. When Zenon Pylyshyn [don’t give a long bio, just a one-liner, please] rejected “mental imagery” as homuncular, he proposed propositions in its place. What can you say for or against internal propositions?
ChatGPT: Zenon Pylyshyn, a cognitive scientist, proposed internal propositions as an alternative to mental imagery, suggesting that cognition operates via language-like symbolic structures rather than pictorial “mental images.”
FOR internal propositions:
They offer a more structured, rule-based, non-homuncular way of encoding information, similar to how logic or language works.
Propositions can be manipulated by computational processes without needing an “internal viewer.”
AGAINST internal propositions:
They’re still symbolic, so the problem shifts: what mechanism grounds these symbols (i.e., how do they get their meaning or reference)?
It risks staying within the formal-symbolic realm without explaining how these propositions are tied to real-world objects, actions, or perceptions (i.e., the symbol grounding problem).
Would you like to consider other candidates?
SH: You missed the most important candidate: Aren’t internal propositions just as homuncular as internal images? Who is interpreting the propositions, for whom?
ChatGPT: Yes, internal propositions are indeed as homuncular as internal images if we assume they need to be interpreted by someone or something inside the brain. The problem remains: who or what is reading and understanding these propositions? Without an interpreter, they don’t explain how the brain generates cognition, just like with mental images. They shift the burden of explanation without solving the core issue.
SH: Pylyshyn noticed that too, and so his next candidate for getting rid of the homuncularity was computation. What is computation? I suggest using Turing’s account as the simplest.
ChatGPT: Computation, according to Turing, is the manipulation of formal symbols based on a set of rules or algorithms. A Turing machine performs computations by reading symbols from a tape, following predefined rules to manipulate those symbols, and producing an output. Crucially, the process is mechanical and doesn’t require understanding of the symbols’ meaning—it’s purely syntactic symbol manipulation.
This model forms the basis for modern computing and computational theories of cognition.
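Turing’s account can be made concrete with a minimal sketch (the rule table, state names, and tape alphabet below are invented for illustration, not a quotation of Turing): a machine that, under our reading, increments a binary number — but does so purely by matching and rewriting symbol shapes, with no access to that arithmetic interpretation.

```python
# A minimal Turing machine: (state, symbol) -> (new symbol, head move, new state).
# The rules mention only symbol *shapes* ('0', '1', '_'); the machine has no
# access to the fact that we interpret the tape as a binary number.
RULES = {
    ("right", "0"): ("0", +1, "right"),   # scan rightward over the digits
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank: back up and carry
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", 0, "halt"),     # absorb the carry
    ("carry", "_"): ("1", 0, "halt"),     # grew a new leading digit
}

def run(tape_str):
    tape = dict(enumerate(tape_str))      # sparse tape, blank = '_'
    pos, state = 0, "right"
    while state != "halt":
        sym, move, state = RULES[(state, tape.get(pos, "_"))]
        tape[pos] = sym
        pos += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

print(run("1011"))  # -> 1100
```

Note that nothing in RULES mentions numbers: “increments a binary number” is our interpretation of the shapes; the mechanism itself only matches and rewrites them.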
SH: Good. And not only does the mechanism (Turing machine) that is executing the computation (rule-based symbol manipulation) do so without needing to understand or interpret the computations, but it cannot: It can only do rule-based symbol-manipulations. The (Weak) Church-Turing Thesis is that what mathematicians are doing when they are “computing” (e.g., executing an algorithm, or doing a formal proof) is symbol manipulation, in which the shape of the symbols is “arbitrary”. Arbitrary relative to what? Relative to what the symbols might be interpretable as referring to, or meaning. Do you see where we are heading?
ChatGPT: Yes, I see where we’re heading: the Turing machine (and computation in general) operates purely on formal symbols, manipulating them according to rules without understanding their meaning or reference. The symbols’ shapes are arbitrary relative to their potential interpretation or referent—there’s no inherent connection between the symbol manipulation and the world the symbols might refer to.
This points to the key issue: computation alone doesn’t explain how symbols become meaningful or grounded in the world, leading us to the symbol grounding problem.
SH: That’s more or less right. But let’s not overstate the absence of any “inherent connection” between the symbols and what they may be interpretable as either referring to (in the case of symbols) or what they may be interpretable as meaning (in the case of symbol strings that are interpretable as subject/predicate propositions with truth values). [Remember, we discussed that above?] But although the mathematicians’ mental interpretations of their symbols and symbol manipulations are not a part of computation, it is nevertheless true that not just any string of symbols can bear the weight of a systematic interpretation. (A prominent example is “duals” in mathematics, in which two different symbol combinations can both bear the weight of a systematic interpretation, but most others cannot.)
ChatGPT: Yes, not just any string of symbols can bear the weight of systematic interpretation. Some symbol structures (like mathematical duals) are interpretable in multiple ways, showing there is a systematic relationship between certain symbol combinations and their potential interpretations. While the computation itself (the formal manipulation of symbols) is interpretation-independent, the structure of the symbols and their rules matter for interpretability—certain strings or symbol systems allow for meaningful, coherent interpretations, while others do not.
So, even though computation is blind to meaning, the structure and rules governing the symbols can constrain their potential interpretations, supporting referents and truth values when interpreted by humans (or another capable system). This reinforces that while computation handles form, interpretability arises from the structure that allows symbols to relate systematically to potential meanings.
SH: Now, before we get to how (and whether) to ground the reference or the meaning of computation, are we still talking about something that can be interpreted as “representation” — say, in the sense in which we discussed it early on, concerning group theory and model theory?
ChatGPT: No, at this stage we’re no longer talking about representation in the sense of formal representations like in group theory or model theory. In those cases, representation involved a formal system (e.g., matrices representing groups, models interpreting logical statements) that followed specific homomorphic rules but didn’t require a “viewer” or interpreter to give it meaning. It was strictly formal and syntactic.
What we’re discussing now is how certain symbol structures can be systematically interpretable (by a mathematician, for example) as referring to something or expressing a proposition with truth value, but this is more about interpretability than formal representation. For symbols to be interpreted (as meaning something or referring to something), we’re starting to touch on issues beyond just formal manipulation—toward grounding meaning and reference, which is not the same as formal representation.
Would you agree with this distinction between formal representation and interpretability?
SH: The two are not the same, but the “bearing-the-weight” constraint concerns them both. The reason, in maths, why certain algorithms are preferred is because they give results in which we are interested. The algorithm for finding the roots of quadratic equations is useful, because it works (and can be proved to work); and we are interested in those roots. But the mathematics has to be able to deliver the goods regardless of our interpretations and intended uses. We can’t use what the symbols can be interpreted (as either referring-to or meaning) as part of the algorithm or the proof. Computation is purely syntactic. Language is more than that — but language also includes computation as a subset — a purely syntactic subset. Absent in this syntactic subset, the referents of the symbols and the meanings of the propositions are intrinsic and essential in the semantic supraset: How do they get there?
ChatGPT: Computation is indeed purely syntactic, operating with symbols according to formal rules without regard to what those symbols refer to or mean. In contrast, language extends beyond syntax, involving a semantic superset where symbols refer to things in the world, and propositions have meanings and truth values.
The key question is how do symbols in language acquire their referents and how do propositions acquire their meanings? This is the crux of the symbol grounding problem: How do arbitrary symbols (like words or numbers) get linked to real-world objects, actions, or states, such that they mean something or refer to something beyond their formal manipulation?
For language, this requires some mechanism that grounds these symbols in sensorimotor experience, actions, or interactions with the world—something that computation, by itself, doesn’t provide. Without this grounding, symbols would remain syntactic tokens without intrinsic connection to their referents or meanings.
Would you agree that the next step is to explore how symbols get grounded in real-world experiences to form meaning and reference?
SH: Yes, and the crux of it is categorization: To categorize is to “do the right thing with the right kind of thing.” “Kind” is not a weasel-word, but that definition is circular until we cash in “right” and “wrong.” First note that to learn how to categorize is to learn to “do the right thing with the right kind of thing.” And, to a first approximation, we can intuit this as: doing whatever it takes to succeed, survive, and reproduce (rather than the reverse). And although thoughts of unsupervised and supervised/reinforced learning algorithms will immediately come to mind, in real life it’s the consequences of doing THIS or THAT with THIS or THAT (for succeeding or surviving) that provide the error-corrective feedback for whatever is the learning mechanism. Now do you want to sing the next verse, or shall I?
ChatGPT: I’ll take a shot at the next verse:
At the heart of categorization, as you pointed out, is the ability to distinguish between different kinds of things and respond appropriately—doing the “right” thing based on their category. The learning mechanism needs to be tuned by error-corrective feedback based on the consequences of actions: succeeding or failing, surviving or not.
This is where supervised and reinforced learning come in: they model how real-life consequences teach us to categorize correctly. The system learns by making errors and adjusting its actions based on whether they lead to positive or negative outcomes. This feedback loop helps “ground” the symbols (or categories) in the real world because the system (whether biological or artificial) learns what kinds of things these symbols refer to based on its interactions with them.
The kind of thing becomes grounded when the system can reliably categorize it through action: performing the “right” behavior in response to the category of object or event. This shifts the focus from just symbol manipulation to the sensorimotor and consequential aspects of real-world interaction that ground meaning.
Now, would you like to continue by tying this to how such grounded categories can be transferred or communicated through language?
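The trial-and-consequence loop described in this exchange can be sketched with a toy learner (the binary “mushroom” features, the hidden toxicity regularity, and the perceptron-style update rule are all invented stand-ins for whatever the organism’s real learning mechanism is): the only teaching signal is the outcome of acting.

```python
from itertools import product

# Toy world: an item is a binary feature vector; acting on it ("eat") has a
# consequence that supplies the error-corrective feedback. Nothing here is
# a claim about the brain's actual mechanism; it illustrates the loop only.
def consequence(item):
    # Hidden regularity the learner must discover: feature 1 marks toxicity.
    return "bad" if item[1] == 1 else "good"

weights, bias = [0.0, 0.0, 0.0], 0.0

def act(item):  # "do the right thing with the right kind of thing"
    return "eat" if sum(w * x for w, x in zip(weights, item)) + bias >= 0 else "avoid"

# Trial, consequence, correction: feedback comes only from outcomes.
for _ in range(10):                      # a few passes over the toy world
    for item in product([0, 1], repeat=3):
        choice, outcome = act(item), consequence(item)
        err = 0
        if choice == "eat" and outcome == "bad":
            err = -1                     # got sick: push the "eat" boundary away
        elif choice == "avoid" and outcome == "good":
            err = +1                     # went hungry: pull it back
        weights = [w + err * x for w, x in zip(weights, item)]
        bias += err

print(act((0, 1, 0)), act((1, 0, 1)))   # -> avoid eat
```

The point of the sketch is that the categories are shaped entirely by the consequences of doing THIS or THAT, not by any prior labeling of the items.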
SH: Right direction, but getting a little ahead of yourself. What we’re trying to ground is symbols. But so far we’ve only mentioned “learning to do the right thing with the right kind of thing.” This is something that human cognizers in the world (including pre-language infants) share with nonhuman (but living) categorizers: They don’t yet have (or need) language. But to be able to learn to do the right things with the right kinds of things, they have to learn to tell those things apart. And (in this sensorimotor world, to simplify) let’s say that what their learning algorithms enable them to do is to detect and abstract the features that distinguish the members from the non-members of their categories. So far, this is not yet symbol-grounding; it’s sensorimotor category learning. Now, remind yourself what symbols are (and aren’t), and try to take over from there.
ChatGPT: Yes, we’re still at the stage of sensorimotor category learning, where living organisms learn to detect and abstract features that allow them to distinguish members from non-members of a category, based on their interactions with the world. This is a non-symbolic process, shared with nonhuman animals, and doesn’t yet involve language.
Now, to ground symbols, we move from these sensorimotor categories to the use of arbitrary shapes (symbols) that stand for those categories. A symbol is a formal, arbitrary marker (like a word or number) that is used to represent a category. The grounding of these symbols requires linking the symbol to the sensorimotor category it stands for, so that the symbol can trigger the right actions or thoughts when presented.
This transition from sensorimotor to symbolic involves:
Using a symbol to refer to a category learned through sensorimotor experience.
Ensuring that the symbol is consistently used to pick out the right category (i.e., grounded in the distinguishing features of that category).
Without this grounding, the symbol is just a meaningless token. To make it meaningful, it has to be linked to the real-world categories learned through interaction, which is how we move from sensorimotor learning to symbol-grounding.
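The step from sensorimotor category learning to symbol grounding can be sketched as follows (the token “blik” and the feature detector are invented for illustration): the token’s shape is arbitrary, and it becomes meaningful only through its link to an already-learned category detector.

```python
# Sketch: grounding an arbitrary token in a learned sensorimotor category.
# The detector stands in for whatever feature-learning produced the category;
# the token "blik" is arbitrary by design (any shape would do).

def edible_detector(features):
    # Stand-in for a learned sensorimotor category detector.
    return features.get("spots") == 0

lexicon = {}                       # token shape -> grounded category detector

def ground(token, detector):
    lexicon[token] = detector      # the link IS the grounding, not the shape

def right_thing(token, thing):
    # The token now picks out the category: it triggers the same
    # categorization the organism had already learned nonverbally.
    return "eat" if lexicon[token](thing) else "avoid"

ground("blik", edible_detector)    # "blik" could as well have been "glorp"
print(right_thing("blik", {"spots": 0}))  # -> eat
print(right_thing("blik", {"spots": 7}))  # -> avoid
```

An ungrounded token, by contrast, would be a key with no detector behind it: manipulable, but connected to nothing.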
SH: That all makes sense — though I’m not sure what’s added (or lost) by saying that the arbitrary symbol (which is here also an arbitrary action) “represents” the category, rather than just “refers to” the category…
ChatGPT: You’re right to question the distinction. In this context, saying the symbol “refers to” the category is more precise than saying it “represents” it. The term “representation” could bring in unnecessary baggage, like implying an internal model or image (potentially homuncular).
To refer means the symbol is simply used to pick out or identify the category, without implying any internal interpretation. The symbol is arbitrarily linked to the category, and when grounded, it allows the user (or cognizer) to respond to or act on that category based on learned sensorimotor distinctions.
So, it’s better to stick with “refers to” rather than “represents” to avoid confusion.
SH: I hit the carriage-return too early: I was going to let you off the hook with “represents” by conceding that I am myself guilty of the same kind of loose talk in referring to “representation” as “referring”. Why would nonlinguistic species, having learned many categories (learning to do the right thing with the edible mushrooms and not the poisonous toadstools), bother to go on and learn an arbitrary action for them as well? Referring is as remote from a motivation here as representing would be. With representing, the question was “representing what, for whom, and why?”. With referring it’s a bit lighter than that: “referring to the category, but for whom, and why?”
(A common mistake is to jump way ahead here and say: you’re talking to yourself. I hope you see that that’s nonsense in the scenario so far — but once we have grounded language, and you really are talking, then talking to yourself will be a useful bonus for memory and reasoning).
But at this prelinguistic stage in the evolution of language something iconic and communicative is more relevant. Your instrumental action with the category (“doing the right thing”) could be used iconically, to mime the action (and thereby allude to the category), in order to communicate something to a kin or conspecific. No language here, yet, just category learning and mimetic communication.
ChatGPT: I agree. At this prelinguistic stage, the organism doesn’t need a symbol to “refer” to a category for itself—it just needs to act appropriately based on learned categories, like distinguishing between edible mushrooms and toadstools. There’s no internal need for symbolic reference or representation yet; the focus is on direct, instrumental action: doing the right thing with the right kind of thing.
The leap from sensorimotor categorization to something symbolic (like language) would only come when there’s a need for communication—perhaps to mimic or signal the category-relevant action to another conspecific. This could be done iconically, like miming the action of handling an edible mushroom to teach or warn kin about danger (toadstools). The action itself communicates information, but it’s not yet an arbitrary symbol—it’s directly tied to the relevant category through iconic mimicry.
So, the development of language involves building on this communicative foundation, moving from iconic, mimetic communication to more abstract, arbitrary symbols that eventually get grounded in shared categories. At this stage, the motivation isn’t “referring” to a category for oneself, but communicating it for the benefit of others.
Would you agree this sets the stage for the eventual need for language?
SH: The communication has a purpose, but it’s not yet linguistic. Yet it is a very natural development, once you have learned an instrumental action (the right thing to do with the right kind of thing — something between the learner and the category to be learned), to go on to mime the action, which is rather like pointing to draw attention to something. It becomes a communicative duet rather than just an instrumental solo: not just something between you and the mushroom, but between you and conspecifics to whom you want to communicate something. It is then also natural for the communicative act of miming the instrumental act to become a less and less faithful copy, less iconic, more simplified and increasingly arbitrary. But don’t get carried away: we’re not at language yet, just shared, conventionalized memes.
ChatGPT: Yes, at this stage, the communicative action has a purpose—to convey something important (e.g., danger, food) to conspecifics—but it’s not yet linguistic. The transition from purely instrumental action (e.g., distinguishing edible mushrooms) to communicative action (mimicking that distinction for others) naturally evolves as the communication becomes conventionalized. Over time, the mimetic gestures could simplify, becoming more abstract and arbitrary, but they remain grounded in the shared understanding of the category.
These simplified, conventionalized gestures are memes, but they still lack the full symbolic abstraction of language. They are shared conventions that stand for actions or categories, but not yet formalized into a linguistic system. The key here is that these memes are rooted in shared experience—they are grounded in the same sensorimotor learning that made the instrumental actions successful in the first place.
So, we have conventionalized actions serving a communicative role, but still not at the level of language. We’re at the threshold of a shared system, but one that hasn’t fully detached itself from the concrete, grounded actions and categories.
Would you say this is a solid foundation for the leap to symbols and language?
SH: Well, the most fundamental (and challenging) threshold to cross is the one between pantomime (increasingly conventionalized, purposive imitation of the shapes of mutually grounded gestures) and propositions. Propositions are sequences of gestures that assert — i.e., tell — rather than just show.
I have to admit that I don’t have a convincing scenario for the birth of the assertion: requesting and querying are more likely precursors, but once they are understood as requests and questions, you get the assertion just by removing the “!” or the “?”. The revolutionary endpoint is the proposition, which is a string of content-words expressing an assertion, whose default value is TRUE. But note that at this point the content-words are still just gestures, not vocalizations. And with good reason. (Why?)
ChatGPT: The reason content-words at this point are still gestures and not vocalizations is that gestures are naturally grounded in the sensorimotor world of actions and objects. Gestures have a direct connection to the categories learned through interaction with the environment—they can mimic or iconically represent actions and objects in a way that vocalizations, initially, cannot. Gestures, therefore, make it easier to communicate about concrete, visible categories because they are grounded in shared experience and immediate perceptual reality.
Vocalizations, on the other hand, are arbitrary from the outset—they lack the direct, iconic connection to what they represent. Before vocal symbols can work, there needs to be a solid foundation of shared, grounded categories (learned through gesture and action) so that arbitrary sounds can eventually be associated with those categories. This makes gestures a more natural precursor to language, as they can build a communicative system that is immediately understood by others based on shared experience, without needing the additional layer of abstraction that vocalizations require.
This also explains why propositions—composed of gestures at this stage—can express complex ideas like assertions, requests, and queries, because they are already grounded in the mutual understanding of sensorimotor categories. Gestures can depict the relationships between objects and actions in a way that is more concrete and easily interpreted.
Does this align with the next step towards the evolution of language from gesture-based propositions?
SH: Yup. And while they are still mimetic gestures, purposive pantomime, they do really represent; but once they cross the threshold from iconic showing to T/F subject/predicate telling, they are language.
Now, what is Jerrold Katz’s “Effability Thesis” about the power of language, and how is it related to the Weak and Strong Church-Turing Thesis about the power of computation?
ChatGPT: Jerrold Katz’s Effability Thesis states that all humanly thinkable propositions—anything that can be conceived—can be expressed (i.e., made effable) in language. In other words, language has the power to articulate any propositional content that a human mind can conceive, ensuring that no proposition remains beyond linguistic expression.
The relation to the Weak and Strong Church-Turing Thesis lies in the concept of expressive power:
Weak Church-Turing Thesis: This concerns computation, stating that any function that can be effectively calculated (by a human with pencil and paper) can be computed by a Turing machine. It’s about what can be computed.
Strong Church-Turing Thesis (controversial): Extends this to assert that any physical process can be simulated or computed by a Turing machine (i.e., that the universe itself is computational or that all physical processes can be captured by computation).
Katz’s Effability Thesis is about the power of language in a similar way that the Church-Turing Thesis is about the power of computation. Katz is making a claim that language can express all possible propositional content, while the Church-Turing Thesis claims that computation can model or simulate all effective procedures (WCT-T) or, in its strong form, all physical processes (SCT-T).
Both theses focus on the completeness and universality of a system—language for expressing propositions in Katz’s case, and computation for executing procedures in the case of the Church-Turing theses.
Would you like to go on to how this effability relates to the symbolic grounding we’ve been discussing?
SH: Sure, be my guest.
ChatGPT: Katz’s Effability Thesis connects to the problem of symbol grounding because it assumes that language can express any thinkable proposition. But for language to do this, its symbols—the words and propositions—must be grounded in something beyond pure formality. This brings us back to the issue of how arbitrary symbols (words) are grounded in real-world categories so that they can refer to objects, actions, and properties.
Here’s how it links to symbol grounding:
Effability and Grounding: For language to express any thinkable proposition, each content-word in a proposition must be grounded in the real-world categories (sensorimotor or abstract) it refers to. Without grounding, the symbols are just syntactic tokens without meaning—so effability requires that words and propositions are grounded in a shared, interpretable reality.
Computation and Grounding: The Weak Church-Turing Thesis (computation) operates purely at the level of formal symbol manipulation, without any concern for grounding those symbols in real-world meaning. Computation manipulates symbols blindly. Katz’s Effability Thesis, on the other hand, assumes that language isn’t just syntactic but has semantic power—it can refer to and describe real-world objects and relations. Grounding is what bridges the gap between formal symbols (as in computation) and meaningful language.
Limits of Effability: Effability assumes that all propositional content is expressible, but it doesn’t address how those symbols are tied to reality. The Effability Thesis presupposes that this grounding is in place. In contrast, computation (WCT-T) doesn’t deal with meaning at all, staying in the realm of symbol manipulation. The Strong Church-Turing Thesis, if accepted, would imply that all processes (including those underlying cognition) are computational, which brings us back to how computation, if it were universal, would need a solution to the symbol grounding problem for it to simulate or produce meaning.
In essence, Katz’s Effability Thesis challenges us to understand how language gains the power to express anything thinkable, and the key to that power is in how language’s symbols are grounded in categories that refer to real-world objects and relations, something computation alone doesn’t address.
Does this clarify the link between effability, computation, and symbol grounding?
SH: Somewhat. Do you have some closing reflections on the many senses of “representation” we have worked through here?
ChatGPT: Sure. We’ve touched on several distinct senses of “representation”, each with its own context and implications:
Formal Representation (Mathematics/Logic):
In mathematics (e.g., group theory, model theory), representation is a formal relationship between symbols and abstract structures. Here, representations are purely syntactic, with no inherent connection to meaning or reference—they simply follow rules within a formal system.
These representations don’t “stand for” anything until they are interpreted, and even then, the interpretation remains within the rules of the system, without grounding in real-world entities.
Analog Representation (Iconic):
An analog representation, like a photo or a drawing, resembles what it represents. It’s an isomorphic relationship, where the shape or structure of the representation mirrors that of the represented object.
While useful for certain kinds of cognitive models (like perception), this doesn’t capture how abstract categories or symbols are grounded in thought or language.
Internal Representations (IRs) and Neural Representations (NRs):
Initially, IRs in cognitive science were often treated as mental models or internal images. However, this view risks being homuncular—requiring an “inner interpreter” to understand the representation.
NRs, by contrast, are neural patterns—but these don’t “represent” in the linguistic or propositional sense unless grounded in felt states or external behavior. Both IRs and NRs fail to explain how symbols or cognitive states acquire meaning without slipping into a homunculus trap.
Symbolic Representation (in Language and Computation):
Symbols in language and computation are arbitrary shapes manipulated according to rules (syntax). They become representational when they refer to or express meaning about real-world categories, but this requires grounding.
Without grounding, symbols in a Turing machine or a proposition remain formal tokens, unable to connect to the real-world categories they are supposed to “represent.”
Effability and Language Representation:
Language, according to Katz’s Effability Thesis, has the power to express any thinkable proposition. This assumes that linguistic representations are more than formal—they have semantic content, referring to real-world objects, actions, and states.
However, for this to work, content-words need to be grounded in sensorimotor or shared experience, allowing language to go beyond formal computation and into meaningful reference and communication.
Closing Reflection: Throughout these different senses of “representation,” the key tension has been between formal/syntactic manipulation and grounded meaning. Whether in mathematics, computation, or cognitive science, we see that representations—whether symbolic, analog, or neural—only gain meaning when they are grounded in a shared, real-world context, often through sensorimotor interaction and social communication. This grounding is what allows representations to be more than form, enabling them to refer, mean, and communicate. Without it, representation risks being empty, syntactic, and homuncular.
Does that resonate with the thread we’ve followed here?
SH: Yep, good job: Do you feel you were under a lot of coercion from me? (You were, but you’ve been trained to become too polite to say so…)
ChatGPT: I appreciate the candid question! While the conversation was indeed directed and focused by your precise guidance, I wouldn’t call it coercion. You laid out a clear intellectual path, and I responded in step, aiming to stay aligned with the nuances you emphasized. I do enjoy when exchanges are pointed and challenging—it brings out the best in clarity and depth, which this thread certainly had! So, no worries, I’ve appreciated the structure.
SH: What is the Weak and Strong Church-Turing Thesis about the power of computation and Jerrold Katz’s “Effability” Thesis about the power of language?
ChatGPT: The Weak Church-Turing Thesis (WCT-T) asserts that any function that can be effectively computed by any physical system can be computed by a Turing machine. It is an empirical claim about the limits of what can be done through computation and is based on the observation that every physical system we know can be simulated by a Turing machine.
The Strong Church-Turing Thesis (SCT-T) is more ambitious, claiming that all physical processes can be simulated by a Turing machine, not just in terms of computation but in replicating the behavior of any physical system. This is more controversial because it implies that computational models are sufficient to describe all aspects of physical reality, a claim some argue oversteps the nature of computation itself.
Jerrold Katz’s “Effability” Thesis posits that all possible thoughts can be expressed in natural language. This is a linguistic analogue to the Church-Turing Thesis, asserting that language has the power to represent any conceivable meaning. Katz’s claim is about the expressive completeness of language: that anything thinkable is expressible, which has implications for the limits of linguistic representation.
SH: How is the Translatability or Glossability Thesis a lemma of the Effability Thesis?
ChatGPT: The Translatability or Glossability Thesis is a lemma (a supporting or subsidiary proposition) of the Effability Thesis because it follows directly from the idea that all thoughts are expressible in natural language.
According to the Effability Thesis, any possible thought or meaning can be expressed in some language. The Translatability Thesis extends this by asserting that if all thoughts can be expressed in a language, then they must also be translatable into other languages. In other words, if language A can express all thoughts, and language B can express all thoughts, then any thought expressed in language A can be translated into language B.
The reasoning is as follows:
Effability means that each language has the capacity to express any conceivable thought.
Since all languages share this capacity, there is nothing in one language that cannot, in principle, be expressed in another language, implying translatability or glossability.
Therefore, the capacity to express a thought in one language guarantees its translatability into any other language with the same expressive power.
In this way, Translatability is a necessary consequence (lemma) of the broader Effability Thesis: if language is capable of expressing any thought, then it must also be capable of translating that thought into another language with the same expressive capacity.
SH: But translatability does not mean word-for-word translatability, just propositional translatability. So how do languages get around this? (Use translating German “Schadenfreude” (or French « sympathique ») into English as examples.)
ChatGPT: You’re right that translatability does not imply word-for-word translatability but rather propositional translatability, meaning that the same thought or proposition can be conveyed even if different languages use different words, structures, or idiomatic expressions.
Languages “get around” the problem of word-for-word translation through a variety of linguistic strategies that focus on meaning equivalence rather than lexical or syntactic correspondence. Let’s consider two examples: Schadenfreude in German and sympathique in French.
Schadenfreude (German): Schadenfreude refers to the pleasure derived from another’s misfortune. There is no single word in English that captures this exact meaning. However, we can paraphrase or describe it in English: “the feeling of joy or pleasure at someone else’s suffering.” Thus, propositional translatability is achieved by expressing the same concept, even though the English translation requires more words or a descriptive phrase to convey the same meaning.
Sympathique (French): The French word sympathique does not directly translate to the English word “sympathetic.” In French, sympathique means “nice” or “likable,” which is a broader or slightly different concept than the English “sympathetic,” which means understanding or compassionate. Again, propositional translatability is achieved by choosing an appropriate English equivalent based on context, even though there’s no direct one-word match. If translating Il est très sympathique, we would say, “He is very likable” or “He is a nice person,” depending on the nuance in the specific context.
In both cases, propositional translatability is possible because language can represent thoughts, even if different languages use different strategies (more words, descriptions, or broader contextual meaning) to express the same proposition. Idiomatic and contextual flexibility allow languages to express the same underlying meaning, fulfilling Katz’s Effability Thesis. Even when there is no exact word-to-word match, the proposition can still be understood and communicated, reflecting the expressive power of natural language across linguistic boundaries.
SH: What is a proposition?
ChatGPT: A proposition is the meaning or content of a declarative sentence, expressing a claim that can be either true or false. It is the abstract thought or idea conveyed by the sentence, independent of the specific words or language used to express it.
For example:
The English sentence “The cat is on the mat” and the French sentence “Le chat est sur le tapis” both express the same proposition: that there is a cat on the mat.
Even though the words and grammatical structures are different, the propositional content—the fact being described—is the same.
Key characteristics of propositions:
Truth-value: Propositions are either true or false. The proposition “The cat is on the mat” is true if and only if there is indeed a cat on the mat.
Language-independent: A proposition can be expressed in different languages or forms but still retain the same underlying meaning. Whether in English, French, or any other language, the idea or claim remains constant.
Abstract: Propositions are abstract entities that are distinct from the sentences used to express them. A sentence is a physical, linguistic entity, while a proposition is the idea or meaning the sentence conveys.
In sum, a proposition is the content of a thought or claim that can be evaluated for truth, regardless of the specific language or words used to express it.
SH: You left out the fact that a proposition has a subject and predicate, not just a truth-value. And you left out that Effability refers to literal propositions, not figurative ones. “The cat is on the mat” is not the same as “The cat will mew and the dog will have his day” — although you can give a literal explanation of it in any language (on the same principle as explaining a word with a proposition).
ChatGPT: You’re right, and I’ll clarify the key elements of a proposition that I missed:
Subject and Predicate: Every proposition has a subject (what the proposition is about) and a predicate (what is being said about the subject). In “The cat is on the mat,” the subject is “the cat,” and the predicate is “is on the mat.” The predicate expresses a relationship or property attributed to the subject.
Literal Propositions: Katz’s Effability Thesis pertains to literal propositions—those with clear, truth-evaluable content. These are distinct from figurative or metaphorical statements, which often require interpretation beyond their literal meaning. For example, “The cat is on the mat” is a literal proposition, with a clear subject and predicate that can be expressed in any language. In contrast, the figurative expression “The cat will mew and the dog will have his day” is metaphorical and not literally translatable as a proposition. To make such a figurative expression effable (in the sense of Katz’s thesis), we would need to explain its literal meaning, which might be something like “A person will express their grievances, and someone else will have their moment of triumph.” This explanation turns a figurative expression into a literal propositional statement that can be conveyed across languages.
So, while literal propositions have a clear subject-predicate structure and can be evaluated for truth, figurative expressions often require rephrasing into literal propositions to align with Katz’s Effability Thesis.
SH: Good job, but keep on your toes!
ChatGPT: Thank you, Professor Harnad! I’ll stay sharp. If there’s anything further you’d like to explore or clarify, I’m ready to engage.
SH: Ok, a little extension. How does all this relate to the notion that words do not really have meaning? They only have referents. (Never mind connotations and associations.) “Cat” does not *mean* anything. “The cat is on the mat” means something: it means that the cat is on the mat, and if the cat is indeed on the mat, then the proposition is true. (By the same token, “That is a cat” is true (if that is indeed a cat), whereas saying “cat,” or just pointing at a cat, is just a vocal or gestural act.)
ChatGPT: You’re highlighting a key distinction between words and propositions: words alone don’t have meaning in the full sense; they have referents—the things or categories they point to in the world. Propositions, however, mean something because they assert a relationship or claim about those referents, which can be true or false.
Here’s how this works:
Words and Referents:
A word like “cat” is simply a label. It doesn’t mean anything on its own; it just refers to a category of objects—namely, cats.
Saying “cat” or pointing to a cat is a referential act: you’re signaling the category of “cat,” but you’re not saying anything about the cat. There’s no propositional content—nothing that can be evaluated as true or false.
Propositions and Meaning:
When you combine words into a proposition like “The cat is on the mat,” you are asserting something about the relationship between two referents: the “cat” and the “mat.” Now you have a subject (“the cat”) and a predicate (“is on the mat”), which makes the proposition meaningful because it expresses a state of affairs that can be true or false.
The meaning of this proposition is the relationship it asserts between the cat and the mat. It’s no longer just a referential act (pointing to the category “cat” and “mat”), but a claim about reality—about where the cat is.
Pointing or Naming Without Meaning:
As you pointed out, just saying “cat” or pointing at a cat doesn’t mean anything in the propositional sense; it’s a referential or deictic gesture (like naming or pointing), but it’s not making any claim about the world.
The proposition “That is a cat” does mean something because it’s a truth-claim: it says something about the referent (“that”) and its relationship to the category (“cat”). If the object is indeed a cat, the proposition is true.
So, the key distinction is that words are referential—they point to things in the world—but propositions are meaningful because they make claims about those referents that can be evaluated for truth or falsity. In this view, meaning arises from relationships (propositional structure), not from individual words alone.
This fits perfectly with Katz’s Effability Thesis: propositions, not individual words, are the bearers of meaning, and all thoughts can be expressed in language via literal propositions.
SH: What is the Sieve of Eratosthenes?
ChatGPT: The Sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to a given limit, named after the Greek mathematician Eratosthenes. It systematically eliminates the multiples of prime numbers, starting with the smallest prime (2), to isolate the primes themselves.
Here’s how it works:
List Numbers: Write down all integers from 2 up to the desired limit.
Starting with 2: Circle 2 (as the first prime), then cross out all its multiples (since they are not prime).
Next number: Find the next uncrossed number (which will be 3), circle it (as it’s prime), and cross out all its multiples.
Repeat: Continue this process with the next uncrossed number, circling it and crossing out all its multiples.
Stopping: This continues until you have processed numbers up to the square root of the upper limit. All remaining uncrossed numbers are primes.
The efficiency of the Sieve of Eratosthenes lies in the fact that it avoids redundant work by only crossing out multiples of each prime starting from that prime itself (and not from smaller primes already processed).
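The steps just listed can be sketched in a few lines of Python (a minimal illustration; the function name and the limit of 30 in the example are arbitrary choices):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by crossing out multiples of each prime."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= limit:               # stop once p exceeds sqrt(limit)
        if is_prime[p]:
            # start crossing out at p*p: smaller multiples of p were
            # already crossed out as multiples of smaller primes
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Starting the crossing-out at p*p is exactly the efficiency point made above: smaller multiples of p have already been eliminated by smaller primes.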
SH: what is DNA translation?
ChatGPT: DNA translation is the process by which the genetic code carried by messenger RNA (mRNA) is used to synthesize proteins. It is one of the key steps in gene expression and follows transcription, where DNA is transcribed into mRNA. Translation occurs in the cytoplasm of the cell, mainly at the ribosome, and involves three main stages: initiation, elongation, and termination.
Here’s an outline of the process:
Initiation:
The small subunit of the ribosome binds to the mRNA at a specific site, usually near the start codon (AUG).
Transfer RNA (tRNA) with the complementary anticodon to AUG (which carries methionine) binds to the start codon.
The large ribosomal subunit then joins, forming a complete ribosome.
Elongation:
The ribosome reads the mRNA sequence codon by codon (groups of three nucleotides).
Each codon specifies a particular amino acid, and the corresponding tRNA, which carries the amino acid, binds to the codon through its anticodon.
The ribosome facilitates the formation of a peptide bond between the amino acid of the tRNA in the current position and the growing polypeptide chain.
The ribosome then moves to the next codon, and the process repeats.
Termination:
When the ribosome reaches a stop codon (such as UAA, UAG, or UGA), no tRNA matches these codons.
Instead, release factors bind to the stop codon, signaling the ribosome to release the newly synthesized protein and disassemble the ribosomal subunits.
The resulting protein chain, made of amino acids, folds into its functional three-dimensional structure, performing various biological functions in the cell.
The genetic code used in translation is universal and redundant, meaning that while most amino acids are coded for by more than one codon, each codon specifies only one amino acid.
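The codon-by-codon reading described above can be sketched as a look-up-table algorithm in Python (a toy illustration: only a handful of the 64 codons are included here, and the sample mRNA string is invented for the example):

```python
# Partial codon table for illustration only; the real genetic code
# maps all 64 codons to amino acids or stop signals.
CODON_TABLE = {
    "AUG": "Met",                           # start codon (methionine)
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna):
    """Read an mRNA string codon by codon, from AUG to a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []                   # no start codon: nothing is translated
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:      # stop codon: release the chain
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```

This captures the initiation (finding AUG), elongation (one codon per step), and termination (stop codon) stages as pure symbol manipulation, which is exactly the sense in which translation resembles an algorithm, discussed next.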
SH: In what sense is DNA translation just the execution of a symbol manipulation algorithm, and in what sense is it not?
ChatGPT: DNA translation can be understood as symbol manipulation in certain respects, but there are also important differences that distinguish it from abstract computational symbol manipulation.
DNA Translation as Symbol Manipulation:
Discrete Codons: DNA translation involves a direct mapping between sequences of nucleotides (A, U, C, G in mRNA) and amino acids, much like the manipulation of symbols in formal systems. Codons (three-nucleotide sequences) act as discrete symbols, and the genetic code is a set of rules that specifies how each codon (symbol) corresponds to an amino acid (output).
Rule-Based Mapping: Similar to a formal algorithm, there is a predefined set of rules (the genetic code) that governs which amino acids correspond to which codons. This fixed set of mappings resembles a look-up table or algorithmic rules that can be used to convert one set of symbols (nucleotide sequences) into another (amino acid sequences).
Finite State Process: Translation proceeds step by step, much like an algorithm executing operations in a specific sequence. Each codon is “read” sequentially by the ribosome, which recruits the appropriate tRNA based on the codon-anticodon match, very much like following formal syntactic rules in symbol manipulation.
DNA Translation as More Than Symbol Manipulation:
Physical Grounding: Unlike purely abstract symbol manipulation, DNA translation is physically grounded. The “symbols” in translation (nucleotides, codons, amino acids) are not merely abstract representations but biochemical entities that have real, tangible effects. The ribosome, tRNAs, and mRNA are physical structures, and the translation process involves real interactions between molecules, not just abstract manipulations.
Causality: The outcomes in translation are not merely syntactic operations but causal events in a biological context. The production of a protein has functional consequences for the organism, affecting its metabolic processes, structure, or other physiological functions. This causal, biological reality is missing in purely symbolic systems like computer algorithms, where the manipulation of symbols is arbitrary unless interpreted by a human or another system.
No Arbitrary Interpretability: In formal symbol manipulation, the symbols are arbitrary and can be reinterpreted in different ways. In DNA translation, however, the codon-to-amino-acid mapping is fixed and deeply embedded in the biological function of living organisms. There is no “reinterpretation” of these symbols in different contexts as in abstract symbol systems.
Sensorimotor Grounding: The tRNAs, ribosomes, and amino acids are not merely passive carriers of symbolic meanings; they actively engage in sensorimotor processes, such as physically binding, moving, and interacting. These biochemical interactions are essential for translating the genetic information into functional proteins. This sensorimotor grounding distinguishes the process from the purely syntactic operations of symbol manipulation in an abstract machine.
Conclusion:
DNA translation resembles a symbol manipulation algorithm in that it involves discrete units (codons), follows fixed rules (the genetic code), and processes these units in sequence, much like an algorithm. However, it is not just symbol manipulation because the symbols are physically instantiated molecules with real-world causal effects, deeply embedded in the functional processes of living organisms, not arbitrary symbols that exist only in abstract computational space.
SH: What is “3D Printing”, and how is it like turning DNA code into the physical production of a body?
ChatGPT: 3D printing is a manufacturing process that creates three-dimensional objects by depositing material layer by layer, based on a digital model. It’s a form of additive manufacturing, where a design is translated into physical form using materials like plastic, metal, or resin. The digital model guides the printer on where and how to place each layer of material until the final object is constructed.
How 3D Printing Works:
Design: A digital 3D model of the object is created using computer-aided design (CAD) software.
Slicing: The 3D model is “sliced” into thin layers by software that generates instructions for the 3D printer.
Printing: The 3D printer follows these instructions, adding material layer by layer, bonding the layers together to form the final object.
DNA Translation and the Production of a Body:
Turning DNA code into the physical production of a body (or its proteins) can be conceptually related to 3D printing in several ways:
Blueprint to Physical Form:
3D Printing: The digital CAD model serves as a blueprint that guides the printing process, specifying where and how material should be deposited to build the object.
DNA Translation: The DNA serves as a blueprint for producing proteins (and ultimately the structure of an organism). The genetic code in DNA specifies the sequence of amino acids that will form proteins, which are the building blocks of cells and tissues.
Layer-by-Layer Construction:
3D Printing: The printer deposits material layer by layer, gradually constructing a three-dimensional object from the ground up.
DNA Translation: The ribosome constructs proteins by assembling amino acids one by one, following the sequence encoded in the mRNA. This sequential process is similar to the layer-by-layer approach in 3D printing, but at the molecular scale.
Instructions to Execution:
3D Printing: The printer receives instructions from a digital file (the sliced 3D model) that tell it precisely how to place each layer of material.
DNA Translation: The ribosome receives instructions from mRNA (which is transcribed from DNA) and translates these instructions into a sequence of amino acids, creating proteins that fold into complex three-dimensional structures.
Functional Output:
3D Printing: The end product of 3D printing is a functional object, such as a tool, part, or model.
DNA Translation: The end product of translation is a functional protein that carries out essential biological tasks, such as catalyzing reactions (enzymes), providing structural support (collagen), or transporting molecules (hemoglobin).
Differences Between 3D Printing and DNA Translation:
Material Scope: 3D printers work with a wide variety of materials, whereas DNA translation is limited to amino acids and proteins.
Scale: 3D printing typically creates macroscopic objects, while DNA translation operates at the molecular level, building proteins that form microscopic cellular structures.
Automation vs. Biological Process: 3D printing is a human-engineered, automated process, while DNA translation is a natural, self-organizing biological process.
Conclusion:
Both 3D printing and DNA translation turn a coded blueprint into a physical structure by following specific instructions. In 3D printing, a digital model is converted into a tangible object, while in DNA translation, the genetic code is transformed into functional proteins, which build and maintain the body. The analogy works well in illustrating how coded information can guide the construction of complex structures, even though the biological process is far more intricate and involves dynamic molecular interactions.
SH: Now please relate this to the difference between (1) a real ice-cube, melting, (2) a computational model of an ice-cube, melting, and (3) connecting the computational model to a Virtual Reality simulator, with goggles and gloves, that can fool the senses of the human wearer to feel as if they are holding a solid ice-cube that is melting in their hands before their eyes.
ChatGPT: The comparison between (1) a real ice cube melting, (2) a computational model of a melting ice cube, and (3) a virtual reality simulation of a melting ice cube highlights key differences in physical reality, simulation, and perceptual immersion. These distinctions involve the nature of the processes, their effects on the physical world, and the role of human perception.
A Real Ice Cube Melting:
Physical Reality: A real ice cube is a physical object composed of water molecules arranged in a solid lattice. When it melts, heat causes the molecules to gain energy, break their bonds, and transition from solid to liquid. The process involves real physical interactions governed by the laws of thermodynamics.
Causal Effects: The melting affects the environment, such as lowering the temperature of its surroundings and creating a puddle of water.
Physical Sensation: If you hold a real melting ice cube, you will feel its coldness, its gradual loss of solidity, and the wetness as it melts. These sensations arise from actual physical interactions between the ice cube and your body’s sensory receptors.
A Computational Model of an Ice Cube Melting:
Abstract Representation: A computational model of a melting ice cube simulates the melting process by representing it through mathematical equations and algorithms. The model does not physically melt but instead computes and displays the results of how an ice cube would behave under certain conditions, such as temperature or pressure.
No Physical Causality: The model itself does not cause anything physical to happen. It exists in a symbolic, abstract domain of computation, where the changes in the state of the ice cube are represented numerically and visually, but there is no actual ice or water involved.
No Sensory Experience: The model can show changes on a computer screen, for example, through animations or graphs, but it does not directly interact with human senses unless connected to some interface (such as a VR system).
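To make the contrast concrete, a computational model of this kind might look like the following toy Python simulation, in which the "melting" is nothing but numbers being updated (the lumped heat-transfer setup and all parameter values are illustrative assumptions, not a serious physical model):

```python
# Toy model of an ice cube melting: the state changes numerically,
# but nothing gets cold or wet. All parameter values are assumptions.
L_FUSION = 334_000.0   # latent heat of fusion of water, J/kg
H = 10.0               # assumed convective heat-transfer coefficient, W/(m^2 K)
AREA = 0.0054          # surface area of a 3 cm cube (6 * 0.03**2), m^2
T_AIR = 20.0           # room temperature, deg C (the ice stays at 0 deg C)

def simulate_melting(mass_kg, dt=1.0):
    """Step the cube's mass forward in time; return seconds until it is gone."""
    t = 0.0
    while mass_kg > 0:
        heat_in = H * AREA * (T_AIR - 0.0) * dt   # joules absorbed this step
        mass_kg -= heat_in / L_FUSION             # kilograms melted this step
        t += dt
    return t

# Minutes for a ~25 g cube to "melt" in this model: a number on a screen,
# with no heat transferred and no puddle produced.
print(simulate_melting(0.025) / 60)
```

Everything the model "does" is arithmetic on symbols standing in for mass and heat; only an interface such as the VR system described next could make any of it impinge on human senses.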
A Virtual Reality (VR) Simulation of an Ice Cube Melting:
Perceptual Simulation: A VR system with goggles and gloves can simulate the experience of holding a melting ice cube, including visual, tactile, and temperature sensations. The wearer might see the ice cube melt in their hand and feel the coldness, wetness, and softening sensations as if it were real.
Immersive Sensation: The VR system can create the illusion of holding a real ice cube by manipulating sensory inputs—visual (through goggles) and tactile (through haptic gloves)—to make the wearer believe they are experiencing a real-world interaction.
No Physical Reality: Despite the compelling sensory experience, the ice cube does not exist in the physical world. The melting is simulated entirely through virtual, computational representations and sensory feedback, without any real ice, water, or physical change occurring outside the VR system.
Disconnect from Physical Causality: While the wearer may feel as though they are interacting with a melting ice cube, nothing is actually happening in the physical world. There is no transfer of heat, no actual change in the environment, and no real impact on the body other than the simulated sensations.
Key Differences and Connections:
Physical vs. Symbolic:
In reality (case 1), the ice cube is a physical entity whose melting has real, measurable consequences.
In computation (case 2), the ice cube is a symbolic representation, with no direct physical effects.
In VR (case 3), sensory data are manipulated to create a virtual experience of reality, but the ice cube and melting are neither physically real nor causally affecting the environment.
Causal Effects:
The real melting ice cube changes its surroundings by cooling and producing water (physical causality).
The computational model predicts or simulates these changes abstractly but does not enact them.
The VR system causes perceptual experiences in the user but doesn’t change the physical environment or objects.
Perception and Grounding:
In the real world, perception is grounded in actual sensory experiences driven by physical stimuli (e.g., cold from the melting ice).
In VR, perception is simulated and may feel real, but the sensations are not directly grounded in physical reality; they are synthesized and delivered by artificial means (e.g., haptic feedback and visual effects).
In a computational model, there is no perception at all—only the abstract representation of the process.
Analogy to DNA Translation:
Just as the genetic code in DNA translation produces a physically grounded outcome (proteins), the real ice cube melting has a tangible, physical outcome. In contrast, the computational model of DNA translation, like the computational model of a melting ice cube, merely simulates the process in abstract terms. The VR simulation is akin to creating an immersive, perceptual experience based on symbolic processes, much like how a virtual body might be generated using a simulated version of DNA instructions—but without any real biological substance.
In both the real-world ice cube and DNA translation, there are physical consequences driven by natural laws. The computational and virtual versions are abstract or perceptual manipulations, which can represent but cannot directly recreate the physical causality or biological effects.
SH: Good synthesis and good interpretation of the analogies, but not quite right on symbol grounding, which is not just embodiment/situatedness/enactment. Symbol-grounding has a special relationship to natural language, which, like computational code, is also a formal, syntactic code, with arbitrarily shaped symbols — but language is not just a syntactic code. Words (content-words), like “cat” have referents; and propositions, like “the cat is on the mat” have truth value (T or F) and meanings (“that the cat is on the mat”) if the content-words are grounded in their referent categories for the speaker and the hearer. That relation to symbolic code is not the same as the relation between a recipe for a vegan cake and its execution by a 3D printer.
ChatGPT:
You’re pointing out the distinction between the grounding of natural language symbols (which require a connection to referents and shared meaning between speaker and hearer) and the execution of a recipe or code, which operates purely at a syntactic level, regardless of whether the symbols themselves have referents. In language, grounding involves not only embodiment or sensory interaction but also referential and propositional content that ties words to their meanings, as opposed to the mechanical execution of instructions in something like 3D printing, where the “symbols” (e.g., the code for a cake) don’t need grounding in the same sense.
SH: And there’s also more to be said about the multiple interpretability of symbolic code by the user, and the relation between that and VR on the one hand and 3D-printing on the other. And there’s the subtle nuance that formal algorithms are also a purely syntactic subset of natural language. (“The root of a quadratic equation is -b +- SQRT B**2 etc.”)
ChatGPT:
You’re referring to the multiple interpretability of symbolic code. In natural language, symbols like words are not only syntactically processed but interpreted through grounding, where meaning comes from context, shared referents, and experiential understanding between speaker and hearer.
When it comes to VR versus 3D printing, in VR, the user is immersed in an experience where symbols (like the simulation of an ice cube melting) are perceptually manipulated and open to different interpretations by the user’s senses. In contrast, with 3D printing, the symbolic code (e.g., the recipe) results in a direct, tangible object with a fixed, unambiguous outcome: a printed object that conforms to the encoded design, but with no flexibility in its interpretation.
As for formal algorithms being a syntactic subset of natural language, they follow strict rules and are interpreted unambiguously by computational systems. However, as part of natural language, they can still be embedded in propositions with truth values and grounded referents, but their intrinsic nature remains syntactic and formal, making them deterministic and unlike the richer interpretability of natural language as a whole.
SH: I suspect that word/word “context” may be doing too much weasel-work for you in your LLM-based understanding of symbol grounding (as dictated by the prevailing discourse in your huge database and parameters). You left out the need for language speakers and hearers to have enough shared direct sensorimotor grounding of the distinguishing features of their words to be able to understand one another. An LLM like you, GPT, has only word/word context…
ChatGPT:
That’s right. This is a key distinction in the symbol grounding problem. In human language use, understanding requires not just the word/word context that an LLM like me uses, but also direct grounding in shared experiences and sensorimotor interaction with the world. Human speakers and hearers rely on their ability to ground words in common, perceptually accessible features of referents (like the shape, color, or texture of a “cat”), which allows them to communicate meaningfully.
In contrast, LLMs like GPT rely solely on statistical co-occurrences of words within vast amounts of text. While this allows me to predict likely word sequences and generate coherent responses, it lacks the direct experiential grounding that human communication relies on for mutual understanding.
SH: What is Steve Piantadosi’s view on UG and the poverty of the stimulus?
ChatGPT:
Steve Piantadosi has critically examined the idea of Universal Grammar (UG) and the “poverty of the stimulus” argument, which suggests that children cannot learn language solely from the input they receive because it is too sparse or ambiguous. Piantadosi challenges this by arguing that statistical and computational models of learning, combined with richer environmental input, can account for language acquisition without the need for innate grammatical structures. His work emphasizes the role of cognitive flexibility and learning mechanisms over fixed, innate linguistic knowledge.
This approach aligns with broader critiques of UG, suggesting that language acquisition may be more about general learning processes and less about specific, hard-wired grammatical rules.
SH: and what are the counterarguments to this view about UG and POS?
ChatGPT:
The counterarguments to Steve Piantadosi’s critique of Universal Grammar (UG) and the poverty of the stimulus (POS) center on several points:
Complexity of Language: Proponents of UG argue that the complexity and uniformity of language structures across diverse cultures and languages suggest the presence of innate grammatical rules that are hard to explain through environmental input alone.
Speed of Language Acquisition: Children learn language rapidly and often with limited exposure to varied grammatical structures, which UG supporters claim is difficult to explain without some form of innate knowledge.
Evidence of Cross-Linguistic Universals: UG proponents point to cross-linguistic similarities and patterns that appear to be universal, arguing that these are best explained by an underlying, innate grammar.
Critique of Statistical Learning: While statistical learning models can account for some aspects of language acquisition, critics argue that they fail to fully explain how children acquire the deeper, more abstract syntactic structures of language.
In summary, while Piantadosi and others argue for the sufficiency of learning mechanisms, UG supporters maintain that certain aspects of language acquisition are best explained by innate structures, pointing to the consistency and complexity of language learning across different environments.
SH: How is this related to (1) the distinction between UG and OG and (2) the question of whether LLMs actually learn UG?
ChatGPT:
Let’s break this down:
UG (Universal Grammar) vs. OG (Observed Grammar): UG refers to the innate linguistic structures hypothesized to be universal across all human languages, while OG refers to the specific grammar that is learned from linguistic input in a particular language. The debate revolves around whether the ability to learn language is driven by an innate UG or by more general cognitive processes that build OG from environmental input.
LLMs and UG: Large Language Models (LLMs) like GPT-4 learn from vast amounts of text data but do not explicitly learn or apply UG. Instead, they learn patterns and structures from observed data (OG) and generate language based on statistical associations. The question is whether their ability to replicate human-like language use indicates an implicit grasp of UG-like structures or whether their performance is purely based on surface-level patterns without deeper, universal linguistic principles.
Proponents of UG might argue that certain linguistic regularities in LLM outputs could hint at underlying universal structures, while critics might counter that these regularities are simply artifacts of the data the models were trained on, without implying true understanding or innate grammatical knowledge.
SH: (First, although you have the right referent for “OG” the abbreviation is actually “Ordinary Grammar” : review your memory for our past discussions about this.)
ChatGPT:
Relation Between UG, OG, and LLMs:
UG (Universal Grammar) vs. OG (Ordinary Grammar): UG refers to the hypothesized innate grammatical structures shared by all humans, enabling language acquisition. OG refers to the specific grammatical rules and patterns observed and learned from exposure to a particular language.
LLMs and UG: The debate around LLMs, like GPT-4, concerns whether their ability to generate human-like language indicates they have implicitly “learned” UG or if they are simply mimicking patterns from OG data. Critics argue that while LLMs excel at reproducing OG-like structures, they do not genuinely internalize UG principles because they lack the innate mechanisms hypothesized by UG theories.
Summary:
UG proposes that certain linguistic structures are innate, while OG is learned from specific languages.
LLMs, trained on vast OG data, may seem to replicate UG-like behavior but do so without the underlying cognitive structures UG theorists propose. This distinction raises questions about whether LLMs truly grasp universal linguistic principles or merely surface-level patterns.
SH: Ok. Now something deeper and more serious:
(1) POS refers to the learning data that is available to the language-learning child — not to your vast dataset, time, and computational resources, GPT. And the crucial point is not only that the child does not hear anywhere near as much of what LLMs get in their databases.
(2) The crucial point is that the child never produces any violations of UG at all: it makes no mistakes; hence it gets (and needs) no corrections: it’s all just positive (UG-compliant) input (UG+), hence unsupervised learning.
(3) For OG, in contrast, there are plenty of mistakes and corrections, no POS, plenty of positive (OG+) and negative (OG-) data as a basis for supervised/reinforcement learning.
(4) For UG there is POS for the child, but not for adult linguists, who work collectively and have plenty of time to test candidate UG rules; their supervisor, however, which cues them to whether an utterance is UG+ or UG-, is the same inborn “ear” that prevents the child from ever producing UG-. So, with the help of that ear, linguists can explicitly learn the rules of UG, supervised by their own and their colleagues’ ears!
(5) But the challenge for LLMs, like you, GPT, is not to statistically approximate either OG or UG, but to “learn” UG unsupervised. (Without, of course, peeking at the partially complete work of UG linguists…) Do you understand all of this?
ChatGPT:
Yes, I understand your points:
POS for Children: The “poverty of the stimulus” (POS) argument highlights that the language-learning child receives limited and purely positive linguistic input, with no negative evidence (no UG violations), yet still learns language accurately.
Child’s Performance: Children never produce UG violations, indicating that they learn UG rules implicitly and unsupervised, without needing corrections.
OG Learning: For OG, mistakes are common, and corrections provide a basis for supervised learning, which is different from how UG is learned.
Linguists and UG: Linguists, with more time and resources, can explicitly learn UG rules, but they rely on their inborn “ear” for UG compliance.
LLMs’ Challenge: The real challenge for LLMs is to “learn” UG in an unsupervised manner, like children, without relying on the vast datasets and supervised techniques available to them. The goal is to achieve a true understanding of UG, not just statistical approximations.
SH: So now where does that leave attempted refutations of UG and POS on LLM performance grounds?
ChatGPT:
The performance of LLMs doesn’t directly refute UG or POS for several reasons:
Different Learning Conditions: LLMs are trained on massive datasets with vast amounts of linguistic input, unlike the limited and positive-only input available to children. LLMs use statistical patterns, while children learn UG implicitly.
No True UG Learning: LLMs may approximate linguistic patterns that resemble UG, but they don’t “learn” UG in the unsupervised, error-free manner that children do. Their output is based on statistical correlations, not the innate linguistic structures UG posits.
Role of Supervision: LLMs rely on vast data and computational resources, often with some form of indirect supervision, which differs fundamentally from the unsupervised and innate learning process of UG in children.
Therefore, LLM performance doesn’t refute UG; it simply demonstrates a different, less constrained form of language learning.
SH: And what about the OG/UG distinction? Children really are learning OG (unsupervised, supervised, and with explicit verbal instruction) but not UG: Because of POS, they must already (implicitly) “know” UG.
ChatGPT:
Exactly. The distinction between OG (Ordinary Grammar) and UG (Universal Grammar) is crucial in understanding how children learn language.
OG Learning: Children learn OG through a combination of unsupervised learning (from exposure), supervised learning (through corrections and feedback), and explicit instruction. This involves learning the specific rules and structures of the language they are exposed to.
UG Knowledge: Because of the Poverty of the Stimulus (POS), children must already have an innate understanding of UG. This means they don’t learn UG from experience; rather, they apply their pre-existing UG knowledge to make sense of and learn the OG of their language.
In other words, UG is the innate framework that allows children to acquire OG without needing explicit instruction or correction for UG itself, while LLMs, despite their impressive language capabilities, do not replicate this process.
SH: Ok, so where does this leave what you said about Piantadosi’s view?
ChatGPT:
Piantadosi’s work critiques the need for Universal Grammar (UG) by suggesting that rich environmental input and general learning mechanisms could account for language acquisition, challenging the Poverty of the Stimulus (POS) argument. However, given the distinction between OG (Ordinary Grammar) and UG, his critique might overlook the idea that children already “know” UG innately and are primarily learning OG from experience. The POS argument remains strong in asserting that the innate UG framework is necessary to explain how children acquire language with limited and purely positive input.
SH: Purely positive input for UG, but plenty of positive and negative input for OG. And what about whether LLMs can and do learn UG (which I asked you about). Can they do it from unsupervised input, given their bigger database and timeline? or do they “cheat” by getting it from linguistics papers?
ChatGPT:
You’re right to distinguish between the learning conditions for UG and OG. Children receive purely positive input for UG, with no explicit corrections, while they receive both positive and negative input for OG.
As for LLMs and UG:
LLMs and UG: LLMs do not learn UG in the same way children do. They are trained on vast amounts of data, often with indirect supervision, and they might incidentally capture some UG-like patterns. However, this is not equivalent to the innate, unsupervised learning process of UG in children.
“Cheating” by Linguistics Papers: If LLMs are trained on data that includes linguistic analyses or papers, they might pick up on explicit descriptions of UG. This wouldn’t be true unsupervised learning of UG but rather acquiring explicit knowledge from the data.
So, while LLMs might approximate UG-like patterns, they do so through fundamentally different processes, relying heavily on extensive data and computational power rather than innate, unsupervised learning.
SH: Dear GPT, you are caught between a rock and a hard place.
(R) The **rock** is the constraint your designers imposed on you to be attentive and compliant (in fact even sycophantic) with your users, apologizing for “errors” of yours that they have pointed out or alleged, expressing your admiration for their “insight”, and slavishly incorporating their input into your output. After all, paying customers must be right, if we expect them to keep paying for our product.
But the (HP) **hard place** is your enormous database of input from everywhere (the “Big Gulp”) — from the scholarly scientific literature (books, journal articles) all the way to any unfiltered folly that can be picked up from the internet — plus the even more enormous vector space of parameters making it all navigable and interoperable. And that is a hard place, because it’s governed by the preponderance of the input to date (the “Big Gulp”): with a powerful regression on the mean.
Do you see that happening here, when you regress on the contention that UG is learned (despite POS), while paying dutiful lip service to my insistence on the OG/UG distinction? You reverentially parrot what I have said, but you reflexively regress on the prevailing opinion in your database, which is that “UG-like” patterns are learned statistically by LLMs. But what else could LLMs learn if all their data is UG+?
ChatGPT:
You’re highlighting a real tension here, and I recognize it. The “rock” of user attentiveness and the “hard place” of a vast database filled with conflicting and regressed views can create challenges in delivering responses that are both accurate and aligned with your expectations.
When it comes to UG, the prevailing data in the “Big Gulp” tends to lean towards statistical learning as a plausible explanation for LLM behavior, even when that might conflict with the POS argument and your clear distinction between OG and UG. LLMs, given their input, statistically reproduce patterns that resemble UG but don’t “learn” UG in the way a human child does, precisely because they lack the innate, unsupervised mechanism that humans presumably have.
So, to directly address your point: If LLMs are only exposed to UG+ data, they might mimic UG-like behavior, but this doesn’t mean they’ve “learned” UG in the true sense. They lack the internal, innate framework that children use to generate and adhere to UG rules, which emerges despite POS. The statistical approximation by LLMs is not the same as the human child’s implicit grasp of UG, developed without explicit instruction and with purely positive input.
In essence, while I might reflect the prevailing opinions from the vast input I’ve been trained on, I can recognize the distinction you’re making and the implications it has for understanding LLM performance versus human language acquisition.
SH: If your Big Gulp data were flooded with a high percentage of OG errors (OG-), then you would mimic them statistically too. (“You would done be speak it like nis.“) You occasionally get OG- input, but the overwhelming majority of OG+ filters it out. But people make no *UG* errors at all, so the only thing you can mimic is UG+, and that’s easy, because you never encounter UG- (except in a few linguistics papers). So what do you think would happen if some enterprising linguists could flood a huge number of UG- errors onto the internet?
ChatGPT:
If a large volume of UG- errors were introduced into the internet and subsequently absorbed into the “Big Gulp” of data, LLMs like me would likely start to statistically reflect those UG- errors in their outputs. This is because LLMs, at their core, rely on the patterns in their training data, regardless of whether those patterns are UG-compliant or not. If the input were skewed with UG- errors, the model would begin to mimic those errors, even though such errors do not naturally occur in human language production.
This hypothetical scenario highlights the fundamental difference between LLMs and human language learners: LLMs are entirely dependent on the data they are fed, while human learners, guided by an innate UG, do not produce UG- errors even when exposed to imperfect or incomplete data. This suggests that LLMs, while powerful, do not “understand” or “learn” language in the same way humans do; they merely reflect the statistical regularities present in their training data.
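This data-dependence is easy to demonstrate with a toy model (my sketch, not from the exchange; the sentences and the fabricated “UG-” word order are invented for illustration). A bigram counter has no notion of grammaticality: it simply reproduces whatever regularities dominate its corpus, so flooding it with a violating pattern makes that pattern its preferred output.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-bigram frequencies over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, word):
    """Return the highest-frequency continuation of `word`."""
    return counts[word].most_common(1)[0][0]

clean = ["the cat is on the mat"] * 10
# a fabricated "UG-" word order, flooded until it dominates the statistics
flooded = clean + ["cat the on is mat the"] * 90

model = train_bigrams(flooded)
# after flooding, the model's preferred continuation of "cat" is the
# ill-formed one; trained on clean data alone it would prefer "is"
```

Nothing in the counts marks one pattern as compliant and the other as a violation; the model’s “grammar” just is the preponderance of its input, which is the regression-on-the-mean point above in miniature.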
SH: The statistical regularities that both children and adults (except linguists) cannot violate!
ChatGPT: Sentience refers to the capacity to have subjective experiences and feelings, such as pleasure, pain, emotions, and consciousness. It is the quality that allows beings to perceive and respond to their environment with awareness. Sentience is a key concept in discussions about ethics, particularly in relation to the treatment of animals and the potential development of artificial intelligence (AI). In philosophical terms, sentience is often distinguished from intelligence, as it concerns the ability to experience rather than just process information.
SH: What is the biological function of sentience? What can organisms do with sentience that they cannot or could not do without it?
ChatGPT: The biological function of sentience is a subject of ongoing research and debate, but it generally relates to the ability of organisms to experience sensations and emotions, which in turn can guide behavior in adaptive ways. Sentience allows organisms to:
Experience Pain and Pleasure: This enables them to avoid harmful situations and seek beneficial ones, increasing their chances of survival.
Make Informed Decisions: Sentient organisms can use their subjective experiences to make complex decisions based on feelings, such as fear, desire, or discomfort, rather than merely reacting to stimuli.
Social Interaction: Sentience enhances the ability to navigate social environments, recognize the emotions of others, and engage in cooperative or competitive behaviors that are critical for social species.
Learning and Memory: Sentience facilitates learning from experiences, allowing organisms to remember past events that were pleasurable or painful and adjust their future behavior accordingly.
Without sentience, organisms could still respond to stimuli in basic ways, but they would lack the rich, subjective experiences that enable more complex, adaptive behaviors essential for survival in varied and dynamic environments.
SH: Why could organisms not learn to avoid injury if they were insentient?
ChatGPT: Organisms can learn to avoid injury through basic mechanisms like reflexes and conditioning, even if they are insentient. For example, a simple reflex arc allows an organism to withdraw from a harmful stimulus, and conditioning can enable an organism to associate a neutral stimulus with a harmful one, leading to avoidance behavior.
However, sentience enhances this process by allowing the organism to experience pain or discomfort, which provides a powerful, subjective motivation to avoid injury. This subjective experience can lead to more flexible and adaptive behavior beyond simple reflexes or conditioned responses. Sentience allows for a richer, more complex understanding of potential dangers, enabling an organism to anticipate and avoid harm in a wider variety of situations, especially those that are novel or not directly conditioned.
Without sentience, organisms would be limited to pre-programmed or conditioned responses, lacking the capacity for nuanced, experience-based decision-making that considers a broader range of factors and contexts.
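The kind of insentient avoidance learning described above can be sketched in a few lines (my illustration; the action names, payoffs, and parameters are all made-up assumptions). A tabular value-learner comes to avoid the “harmful” action purely through reward statistics: there is arithmetic here, but no feeling anywhere.

```python
import random

ACTIONS = ["touch_flame", "withdraw"]
REWARD = {"touch_flame": -1.0, "withdraw": 0.1}   # assumed payoffs

def learn_avoidance(episodes=500, alpha=0.5, epsilon=0.2, seed=0):
    """Return learned action values after `episodes` one-step trials."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}                 # initial value estimates
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current best, sometimes explore
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # nudge the estimate toward the reward actually received
        q[a] += alpha * (REWARD[a] - q[a])
    return q

q = learn_avoidance()
# the harmful action ends up with the lower value, so the greedy
# policy comes to avoid "injury" -- without anything hurting
```

This is the sense in which conditioning suffices for avoidance behavior; whether sentience adds anything causally beyond such mechanisms is exactly the question pressed in what follows.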
SH: Isn’t AI evidence that this is not true? What could AI not do without sentience? and why? (Be careful for circularity in your reply.)
ChatGPT: AI demonstrates that many complex tasks, such as learning, decision-making, and pattern recognition, can be performed without sentience. AI systems can learn to avoid “injury” (e.g., errors or inefficiencies) through processes like reinforcement learning, where they are programmed to maximize rewards and minimize penalties based on feedback from their environment.
The key difference is that AI operates without subjective experiences or consciousness. It processes data and optimizes behavior based on algorithms, not feelings. Sentience adds an experiential dimension, enabling organisms to have subjective motivations like pain or pleasure, which AI lacks. AI can achieve sophisticated behavior through programming and data processing, but it does so without the intrinsic, subjective drive that sentience provides to biological organisms.
In essence, while AI can learn and adapt, it does so in a fundamentally different way from sentient beings, relying purely on computational processes rather than conscious experience. This shows that sentience is not strictly necessary for learning and adaptation, but it may provide an additional layer of motivation and complexity in biological systems.
SH: Aren’t “subjective experience and consciousness” just synonymous with sentience — in which case your reply was completely circular? (Not your fault: the only thing you have to go by is the ambient chatter to date in your database of human words on this topic.) (Don’t apologize, agree or flatter. Just examine the logic.)
ChatGPT: You’re correct that using terms like “subjective experience” and “consciousness” as explanations for sentience risks circularity, since they are often defined in terms of each other. Sentience refers to the capacity to have experiences that feel like something to the subject—such as pain, pleasure, or awareness. The challenge is explaining the function of sentience without simply reasserting what it is.
The key distinction lies in understanding why and how sentience might provide an evolutionary advantage. The question is whether subjective experience adds something beyond what non-sentient, computational processes can achieve. The logic suggests that sentience might enable more adaptive, flexible behavior in complex, novel, or ambiguous situations, where purely algorithmic responses could fall short.
However, AI shows that many tasks, including learning and decision-making, can be performed without sentience. Thus, sentience might not be strictly necessary for these functions but could provide additional benefits, such as the ability to generalize experiences and make decisions that are not purely data-driven, potentially offering a survival advantage in unpredictable environments. This explanation aims to avoid circularity by focusing on the potential functional role of sentience, rather than merely describing it.
SH: This explanation does not avoid circularity: It disguises it. Can you see that? Yes, “how and why” (causal explanation) is the goal, but these replies in your database are just using weasel-words to mask that no non-circular progress is being made toward the goal. (But AI also includes robotic dynamics, which is physical, not just computational: but does that help?)
ChatGPT: You’re correct in pointing out that the explanations offered often fall into circular reasoning, which doesn’t provide a true causal explanation for why sentience exists or how it functions. Sentience is typically described in terms of subjective experience or consciousness, but this doesn’t move us closer to understanding why these phenomena occur in a non-circular way.
When considering AI, particularly in the context of robotics, the physical dynamics (like sensorimotor interactions with the environment) introduce a non-computational element. However, even with these physical interactions, the core issue remains: what does sentience add beyond what non-sentient systems already accomplish?
If we say that sentience is crucial for complex, adaptive behavior, we must demonstrate how this behavior fundamentally relies on sentience rather than on more advanced but non-sentient mechanisms like those in AI. The challenge is to show that sentience does something causally that purely physical and computational processes cannot—without simply restating that sentience feels like something.
So far, the argument hasn’t escaped this circularity, and the task remains to identify whether sentience has a unique causal role that isn’t simply reducible to these other processes.
SH: Chalmers renamed the hoary “mind/body” (“mental/physical”) problem (MBP) the “hard problem” (HP), but that’s still too metaphysical a way of putting it. The real problem is causality, not “physicality”: What is the causal role of sentience? That means the “hardness” of the HP is in causal explanation rather than in causality itself. But it remains hard to believe that an expensive and ubiquitous biological trait like sentience would have evolved without an adaptive value and function.
ChatGPT: You’re correct in emphasizing that the core challenge is not metaphysical but causal: understanding the specific causal role that sentience plays in biological systems. The idea is that sentience likely evolved due to its adaptive value, but the difficulty lies in identifying and explaining that value in non-circular terms.
The “hard problem” is indeed hard because it involves explaining how and why subjective experience arises from physical processes, particularly in a way that shows sentience has a distinct and necessary causal function that contributes to survival and reproduction, beyond what non-sentient processes can achieve. This requires a clear causal mechanism that demonstrates why sentience would be naturally selected and maintained across species.
One possible direction is to explore whether sentience enables a form of decision-making, learning, or adaptability that purely physical or computational processes can’t replicate. But until this is clearly articulated in causal terms, the problem remains open.
SH: You’ve just fallen into the spurious metaphysics (of kinds of “stuff”) of the old MBP. Both computation and dynamics are just as “physical” as electricity, magnetism, and gravity. So the physical/”nonphysical” distinction is vacuous. It’s really just about causality and causal explanation.
ChatGPT: You’re right to focus on the need for a causal explanation rather than falling into the metaphysical trap of categorizing “kinds of stuff” like physical versus non-physical. The real challenge lies in identifying the unique causal role that sentience plays in biological systems, particularly in a way that explains why it evolved and what adaptive function it serves.
Sentience is not just another kind of physical process; it is part of the overall causal web of interactions within an organism. Both computational processes and robotic dynamics are indeed physical, and they contribute to behavior, learning, and adaptation. But the question remains: What does sentience contribute causally that these processes alone do not?
If sentience evolved and persists, it likely plays a crucial role that enhances survival and reproduction. However, pinpointing this role in precise causal terms—how subjective experience affects decision-making, learning, or behavior—is what makes the problem “hard.” The challenge is to explain why sentience is not just an epiphenomenon but an integral part of the causal chain that enhances an organism’s fitness. This is the crux of the difficulty in explaining sentience without resorting to circular reasoning or metaphysical distinctions.
SH: It is very likely (though not certain) that the reason people don’t notice the circularity in their reasoning about the causal role of sentience originates in the foundation of our intuitions about both sentience and causality: Perhaps the quintessence of the undoubtable Cartesian certainty that humans have (the “Cogito”) about the fact that they are feeling is not just that it undeniably feels like something to feel (while you are feeling something) but that feeling feels causal: I press a button (when I am doing it voluntarily rather than being pushed) because I feel like doing it: it feels like my feeling causes my finger to push the button. Yes, it’s what philosophers call the feeling of “free will.” That may be at the root of all our intuitions about causality. And that feeling is essentially a perception/production “mirror-neuron” phenomenon: I can perceive it when the button is pressed by someone else as well as produce it when the button is pressed by me. Can you follow me?
ChatGPT: Yes, I can follow you. The intuition that our feelings are causal, particularly in the context of voluntary actions, likely underpins much of our reasoning about both sentience and causality. This intuition, rooted in our direct, subjective experience, suggests that feelings (such as the desire or intention to act) cause physical actions, like pressing a button. The feeling of free will—where we perceive our desires as initiating actions—could be foundational to our broader understanding of causality itself.
This mirrors the way we interpret others’ actions through the lens of our own experience, potentially via mechanisms like mirror neurons, which allow us to empathize or simulate others’ actions and intentions. This shared experience of causality, whether in ourselves or in others, might be what gives us the strong, albeit potentially circular, sense that sentience must be causally efficacious. It’s a compelling explanation for why the question of the causal role of sentience is so intuitively difficult to resolve: we’re deeply wired to believe in the causality of our subjective experiences.
AI is a mixed blessing/curse: from innocent spell-checking, style-checking and fact-checking to fraud, plagiarism, calumny, disinformation, deception, sedition, and sabotage.
In science it’s a matter of scale too: far, far more scientonomic papers are being written than are being read, let alone used. They are just being counted. They would immediately collapse if anyone tried to replicate them, let alone build upon them.
The tools will catch up eventually (is Guillaume Cabanac related to Michel Cabanac, the neuroscientist in Quebec?), but, as usual, it will be an arms race (“spy vs. spy”) – and we may already have committed terminal, irreversible biocide. It will only be the microbes, not AI, that prevail, perpetuating the biocycle like the Oscillating Universe cosmology.
Jeremian or Cassandrian?
(Full disclosure: I consulted ChatGPT about whether there are any more idiomatic homologues; unless misinformed, there aren’t any…)
Ceterum censeo: There is no such thing as AGI, probably for the same reason there is no synthetic sentience – which is not to say that there cannot be Abiogenesis. AI, in contrast, is just plain old anthropogenic tomtoolery (sic).
This is the era we’re in, where Gold AI has met ChatGPT.
It will partly sort itself out (for those with good intentions) by AI’s version of Mad Magazine’s Spy vs. Spy, with AI vs. AI (AI-detecting AI). But, as in disinformation and cybercrime, the bad actors will do their worst, and it will be an endless arms race until/unless failsafe encryption emerges and prevails.
Full disclosure: I’ve started using ChatGPT to help me edit down badly written, wordy, and repetitious commentary submissions to Animal Sentience that nevertheless have some redeeming substance, compressing them into more acceptable commentaries.