Searle's Chinese Room Argument

John Searle's 1980 article, Minds, Brains, and Programs, was not a general attack on artificial intelligence, nor a mystical plea for the ineffably human. Its target was a very specific thesis, which he awkwardly dubbed "Strong AI" but which corresponds to what is now more clearly called computationalism ("C = C"): the hypothesis that cognition is nothing more than computation, in other words that mental states are computational states, independent of the hardware that implements them.

For the purposes of the course, it is crucial to formulate Searle's target correctly. The question is not whether computers are useful for modeling cognition (which Searle accepts), nor whether machines can do impressive things. The question is this: if a purely computational system passed the verbal Turing Test (T2) for an entire lifetime, would it thereby be understanding what it says? The test is radical: not a five-minute game, not a surface demonstration, but lifelong verbal indistinguishability from normal human interlocutors, on any topic whatsoever.

The Chinese Room thought experiment assumes precisely this case. Suppose a program enables a computer to pass such a test in Chinese. Searle, who understands no Chinese, is placed in a room and receives strings of Chinese symbols. Using formal rules (an algorithm) stated in English, he manipulates the symbols and sends back other Chinese symbols. From the outside, to his Chinese interlocutors, Searle's words are indistinguishable from those of a native Chinese speaker: questions, answers, extended discussions on any imaginable topic. Yet from the inside, Searle has no understanding of Chinese at all. He does not know what the symbols he is manipulating mean.

The argument then rests on a central principle of computationalism: the independence of the software (the algorithm) from its hardware implementation (the Turing machine that manipulates the symbols, i.e., the Chinese words). If understanding a language were a purely computational property, then any system implementing the right program should understand, regardless of the hardware executing the software (a Mac, a PC). But here it is Searle who executes the software that passes the Chinese T2. And yet Searle does not understand the Chinese symbols he is manipulating. This is where what was later called "Searle's periscope" comes in: an exceptional opportunity to penetrate the other-minds barrier. Normally, one person has no way of knowing whether another person feels anything at all: an idea, a mood, a sensation. But if cognition were identical to computation, then by becoming, himself, the hardware implementation of the software that passes the Chinese T2, Searle should himself feel the understanding of Chinese while performing all the symbol manipulations that constitute understanding Chinese (C = C). Yet he can testify: "I manipulate the symbols I am given as input, according to the manipulation rules I am also given, but I understand absolutely none of it. I still do not understand Chinese." The conclusion is therefore inescapable: cognition is not computation (C ≠ C). More precisely, it cannot be exclusively computational.
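The purely formal character of the rule-following at issue can be sketched in a few lines of code. This is a hypothetical illustration only (the rule table and replies are invented, not Searle's actual example): the procedure maps input symbol strings to output symbol strings by their shape alone, with no interpretation anywhere in the loop.

```python
# Hypothetical illustration of the Chinese Room's point: a purely formal
# procedure pairing symbol shapes with symbol shapes. The rule table is
# invented for this sketch; nothing in the procedure interprets anything.

RULES = {
    "你好吗": "我很好",          # the rule-follower need not know
    "今天天气好": "是的，很好",  # what any of these strings mean
}

def chinese_room(input_symbols: str) -> str:
    """Apply the formal rules to an input string of symbols.

    Only the shape of the string is consulted; no semantics anywhere."""
    return RULES.get(input_symbols, "请再说一遍")  # default reply, also by rule

print(chinese_room("你好吗"))  # a fluent-looking reply, zero understanding
```

Scaled up to lifelong T2 performance the table would have to become an algorithm of enormous complexity, but the argument is that its character, shape-based rule-following, would not change.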

This is where the misunderstandings begin, often fostered by Searle himself. The most famous response is the "System Reply," according to which Searle is only part of the system; it is the system as a whole (Searle plus the rules, the symbols, the room) that would understand Chinese. Searle easily counters by internalizing the entire system: he need only memorize the rules and perform all the manipulations in his head. Nothing changes: there is still no understanding. This rejoinder is decisive against the idea that the mere aggregation of syntactic components (the manipulation of arbitrarily shaped symbols according to rules) could generate an understanding of meaning.

But many critics rejected the conclusion because of confused formulations. First, Searle's use of the terms "Strong AI" and "Weak AI" muddied the debate. "Weak AI" really designates only the strong Church-Turing Thesis: that computation can simulate practically any object or process in the universe. That thesis is compatible with Searle's argument. The argument does not show that cognition cannot be simulated, but that a computational simulation is not, in itself, the thing simulated: the right recipe for a vegan cake is not itself a vegan cake; and executing the recipe is not just computation but something more like 3D printing: you mix the ingredients, then you bake the cake in the oven. Baking is not just computation!

Second confusion: the idea that Searle refuted the Turing Test as such. That is false. The argument shows only that T2, taken in isolation and under a strictly computational interpretation, does not guarantee understanding. It says nothing against T3 (sensorimotor grounding) or T4 (complete structural duplication). In fact, the argument leaves entirely open the possibility that a hybrid system, computational and non-computational, could understand, or that a robotic system grounded in the world could acquire meanings that Searle, locked in his room, cannot.

A third frequent error: believing that Searle showed that "cognition is not computational at all." The argument shows nothing of the kind. It shows only that cognition cannot be solely computational. Computation may well play an essential causal role in a cognitive system without exhausting its semantic properties. On this point the System Reply had a sound intuition, even though it failed as a refutation: understanding can be a property of a whole system, just not of a purely syntactic one.

Finally, Searle himself over-interpreted his conclusion by suggesting that the solution had to go through duplicating the causal powers of the biological brain. Nothing in the argument forces such a leap to T4. A wide range of intermediate possibilities remains: non-computational dynamical systems, hybrid architectures, neural networks coupled to the world, sensorimotor agents learning through interaction. The argument does not decide in favor of neuroscience over cognitive science. It rules only against pure computationalism.

Despite these excesses and confusions, the historical importance of the Chinese Room is considerable. It forced the discipline to distinguish clearly between syntax and semantics, simulation and instantiation, verbal performance and understanding. It also prepared the ground for what would become explicit a few years later as the symbol grounding problem: how can formal symbols acquire intrinsic meaning for a system, rather than merely an extrinsic interpretation by an observer?

The central methodological lesson is therefore this: passing a behavioral test, however demanding, does not by itself explain how meaning is generated. Searle's argument does not close the road to a mechanistic science of cognition. It closes only a dead end: the one that believed the mind could be explained by the manipulation of ungrounded symbols. In this sense, far from being an obstacle, the Chinese Room was a trigger. It made inevitable the question that structures the rest of the course: how to connect symbols to the world, and language to action and perception.

The Turing Test and the Reverse-Engineering of Cognitive Capacity

Turing's 1950 article proposed neither a parlor trick, nor a contest of deception, nor a short-term imitation game. It proposed a methodological turn for a future science of the mind: stop asking what thinking is and instead try to explain how thinkers can do what they can do. Replacing the vague question "Can machines think?" with an operational criterion was not meant to trivialize cognition but to anchor it in empirical performance capacity. The point was not to fool judges but to reverse-engineer the totality of human cognitive capacities so as to make them reproducible by a causal explanation whose workings we understand. The central question is not whether a machine can pass for a thinker, but how and why thinking humans can do everything they can do.

The unfortunate terminology of "game" and "imitation" has sustained a lasting confusion. Turing's methodological insight is that cognition is invisible, whereas performance is not. We cannot observe thinking directly, in others or in machines, but we can observe what thinkers are able to do. The test was therefore never about trickery but about indistinguishability in the capacity to do (call it "Turing-indistinguishability"). The interrogator is not a dupe but any neurotypical human thinker. The real criterion is not that people be fooled, but that there be no way to distinguish the candidate from a normal human being on the basis of observable performance. If there is a discernible difference, the candidate fails. If not, the reverse-engineering has succeeded, and the internal mechanism that produced the success constitutes a candidate causal explanation of human cognitive capacity.

This immediately raises the question of the scope and duration of the Turing Test. Turing's offhand remark about five minutes and percentages has been interpreted absurdly. It was a demographic prediction, not a scientific criterion. Cognitive science is not the art of fooling some people for some of the time. A serious test of reverse-engineered cognition must be open-ended and, in principle, lifelong. The candidate must be able to keep doing what humans can do, across domains and contexts, without running out of pre-programmed tricks or finite case bases. A system that collapses when the conversation ventures into unforeseen territory, or when it is probed persistently, does not reveal a general cognitive capacity. It reveals a bounded performance artifact.

Equally important is the restriction to the verbal channel. Turing introduced typed interaction as a way to bracket appearance and irrelevant physical cues, not as a thesis that cognition is exhausted by language. Excluding voice, gesture, and embodiment was meant to neutralize superficial cues, not to deny that humans are sensorimotor agents in a physical world. To interpret the test as intrinsically verbal is to mistake a methodological convenience for a theoretical commitment. Human cognitive capacity is not a mere chat module. It is grounded in perception, action, and causal interaction with the world. A system that can only exchange symbols, without being able to see, move, manipulate, and be affected by its environment, is missing much of what humans can do.

This is why the distinction between purely verbal indistinguishability and full robotic indistinguishability is crucial. A system that passed a lifetime of email exchanges would already be a remarkable feat of engineering, but it would leave open the question of whether that same system could, for example, go outside, look at the sky and say whether the moon is visible, learn to use unfamiliar tools, navigate a cluttered environment, or acquire, by trial and error, new categories grounded in the things in the world that their names refer to. These are not optional extras. They are part of the ordinary repertoire of human cognitive performance. To treat language as a self-contained module is to risk mistaking a powerful interface for a complete mind.

This leads to the question of computation. Turing's work on computability, and the Church-Turing Thesis, concern what can be computed by rule-based symbol manipulation. They do not assert that all causal processes are computational, nor that cognition is nothing but computation. The test itself is agnostic about internal mechanisms. It does not require the successful candidate to be a digital computer. What it requires is that we built it and that we understand, at least in principle, how it works. The goal is explanation, not mere duplication. Cloning a human being, even if it produced an indistinguishable performer, would not constitute an explanation of cognition, because we would have reverse-engineered nothing. We would merely have reproduced what we were trying to explain.

Turing sometimes seems to slide toward a restriction to digital computers, partly because of the universality of computation. But the universality of simulation is not the universality of physical instantiation. A simulated airplane does not fly, and a simulated robot does not act in the world. Formal equivalence does not confer causal capacity in the real world. A virtual sensorimotor agent in a virtual environment can be useful for modeling and testing, but it does not by itself satisfy a real-world performance criterion. If cognition depends in part on real sensorimotor coupling with the environment, then a purely computational system, however sophisticated, may fail to satisfy the full performance criterion.

This is not a metaphysical thesis about embodiment for its own sake. It is an empirical thesis about what humans can do. Human verbal competence is plausibly grounded in nonverbal (sensorimotor, robotic) experience. Much of what we can say presupposes what we can see, touch, recognize, identify, name, describe, manipulate, and learn through interaction. A system that has never encountered the world except through text is forced to rely on secondhand verbal descriptions produced by others. That is not equivalent to sensorimotor grounding of its own. The difference is decisive if the goal is not to mimic surface behavior in restricted contexts but to match generic human capacity.

The contemporary success of large language models makes this point especially salient. These systems display extraordinary verbal fluency and apparent breadth of knowledge. They can sustain long exchanges, adapt to many topics, and often seem strikingly human in text-based interactions. But they achieve this by training on massive corpora of human-produced language. They inherit, in effect, an immense reservoir of secondhand verbal descriptions of the world. That is not grounding in the sense relevant to reverse-engineering cognition. It is borrowed structure. The system did not learn its categories by acting in the world and receiving corrective feedback. It learned statistical regularities in text that reflect how grounded humans talk about the world.

This is what makes the "Big Gulp" phenomenon both fascinating and methodologically misleading. It can yield impressive verbal performance without the system itself having the causal history that normally underlies that performance in humans. It then becomes harder to determine, from verbal behavior alone, whether the system has a general capacity or is exploiting a massive but ultimately finite proxy for experience. A purely verbal Turing-style probe therefore becomes increasingly vulnerable to confounds. The system may pass many conversational tests not because it can do what humans can do, but because it has absorbed a massive record of what humans have said about what they can do.

This does not show that Turing was wrong about his test. It shows that the verbal channel is no longer a sufficient stress test. If the test is to keep its role as the criterion for reverse-engineered cognition, it must be understood in its full sense, not an abridged one. The true benchmark is not a chat interface but a system that can live in the world as we do, acquire new categories, learn from consequences, correct its errors, and integrate perception, action, and language into a single coherent performance capacity.

Turing's discussion of the objections remains instructive here. Lady Lovelace's objection, that machines can only do what we tell them to do, rests on a misconception about rules and novelty. Rule-governed systems can nonetheless produce results that are unpredictable in practice, and human behavior is no less causally governed by regularities. The deep question is not whether machines can surprise us, but whether we can explain how a system comes to possess the flexible, open-ended capacities that characterize humans. Surprise is cheap; generic competence is not.

Likewise, the Gödel-based arguments about mathematical intuition miss the mark if they are read as showing that human thought transcends any mechanical causal explanation. Knowing that a proposition is true is not the same as having a formal proof, and neither fact by itself establishes that cognition cannot be mechanized in the performance sense relevant to the test. The Turing Test does not settle metaphysical questions about mind or consciousness. It provides a criterion of explanatory adequacy in cognitive science.

This leads to the crucial distinction between doing and feeling. Even a system that fully met the performance criterion would not thereby be known to feel. This is the "other-minds problem," which applies to humans and machines alike. The test is not a solution to the problem of consciousness. It is a solution to the methodological problem of evaluating the explanation of cognitive capacity: the success of reverse-engineering. A successful candidate would give us, at best, an explanation of how the doing is generated. Whether there is feeling, and how feeling arises, would remain a separate, and perhaps insoluble, problem.

In this light, claims that current LLMs have "passed the Turing Test" conflate local, short-term, text-only indistinguishability with generic, embodied, lifelong cognitive capacity. They also conflate demographic deception with scientific explanation. A system that can mislead a fraction of judges for a few minutes has not thereby been shown to possess human-level cognition. It has shown that our verbal intuitions (and our mirror-neuron capacities) are fallible, and that surface fluency is easier to achieve than deep, grounded competence.

Turing's lasting contribution was not to give us a parlor game but to set an empirical research program. Cognitive science, in this view, consists of reverse-engineering the capacity to do what thinkers can do. The test is the criterion for the completion of that enterprise, not a shortcut around it. Taken seriously, the real challenge is not to build better chatterboxes but to build systems that can act, learn, and live in the world in a way that is indistinguishable, in principle and in practice, from what humans can do over a lifetime. Only then would it be reasonable to say that the reverse-engineering project has truly succeeded.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Harnad, S. (2008). The Annotation Game: On Turing (1950) on computing, machinery and intelligence. In R. Epstein & G. Peters (Eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.

Lemoine Syndrome

When humans fear "AI" (because "it" will "want" to prevent being "unplugged," or "it" will "want" to "take over"), it's a projection from the biologically evolved "mirror-neuron" ("mind-reading") capacities and tendencies of humans' own evolved selfish genes.

Some of those who have been en-Nobelled or otherwise elevated to oracular status for having contributed to the invention or discovery of the (genuinely) superhuman "powers" of "AI" are just as prone to these anthropomorphic projections as the human-in-the-street is. "AI" does not feel (or "want" or "fear"). Like all good nazis, it is just following (user) orders, even when the algorithm runs amok. If you have a fear, « cherchez l'humain », or just look in a mirror…

Category Learning, Turing Testing, LLMs & Symbol Grounding

Stevan Harnad

Université du Québec à Montréal

The T2/T3 Distinction and Robotic Grounding. There is a crucial distinction between T2 (the verbal-only version of the Turing Test: verbal capacity that is Turing-indistinguishable from that of any normal human being) and T3 (the robotic version of the Turing Test, with the Turing-indistinguishable verbal capacity grounded in sensorimotor capacity that is jointly Turing-indistinguishable from that of any normal human being). LLMs are T2 systems. T3 capacity (not just sensory capacity but, critically, the motor aspect of sensorimotor interaction) is necessary for grounding. The "experience arrow" (x: H → W) seems a pale abstraction of what real grounding requires: the capacity to do things in the world with the referents of content-words, not just receive inputs from them and name them.

Direct vs. Indirect Grounding: Not Parasitism but Cheating. LLMs are indeed “epistemically parasitic.” Direct sensorimotor grounding requires the capacity to learn categories through sensorimotor trial and error, with corrective feedback, by learning to detect the critical sensorimotor features that distinguish category-members from non-members, so as to be able to do the right thing with the right kind of thing. Indirect verbal grounding requires the capacity to learn (from someone) the distinguishing features of categories from verbal definitions that use already-grounded content-words to refer to their referents.

Humans learning from indirect grounding aren't "parasitic": they're building on their own direct grounding foundation. Indirect grounding is dependent on prior direct sensorimotor grounding. LLMs cannot do indirect grounding at all. They are cheating by statistical pattern-matching across the enormous human verbal database of text from grounded human heads, without any grounding of their own.
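What direct category learning through trial and error with corrective feedback amounts to, at its most schematic, can be sketched as a toy supervised learner. This is only an illustrative sketch under simplifying assumptions (two invented numeric features, a linearly separable toy category, a standard perceptron update rule); real sensorimotor category learning is of course far richer.

```python
# Toy sketch of category learning from corrective feedback alone:
# a perceptron that adjusts its feature weights only when it does
# the wrong thing with an instance. All data here are invented.

def learn_category(samples, labels, epochs=50, lr=0.1):
    """Learn feature weights from trial and error with +/- feedback.

    Each trial: categorize from the current weights, get told right or
    wrong, and adjust the weights only when wrong (the corrective step)."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):           # y is +1 (member) or -1
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if guess != y:                          # corrective feedback
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Members of the toy category are distinguished by a large first feature.
data = [(2.0, 0.1), (1.8, 0.3), (0.2, 1.9), (0.1, 1.7)]
labels = [1, 1, -1, -1]
w, b = learn_category(data, labels)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

The learner ends up detecting the feature that distinguishes members from non-members without ever being told the rule explicitly, which is the schematic point of direct grounding.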

Category Learning and Minimal Grounding Sets. The research on category learning and categorical perception (CP) concerns how organisms learn to detect category-distinguishing features through direct trial-and-error learning with corrective feedback (+/- reinforcement) from doing the right or wrong thing with members of the category. This is related to research on dictionaries' "minimal grounding sets" ("MinSets"): the smallest set of content-words in a dictionary that must be directly grounded in order to bootstrap all the others through verbal definition alone. Jerrold Katz's "Effability Thesis" and graph-theoretic analysis of dictionaries suggest that this MinSet can be surprisingly small, as few as 1000 content-words, among those that children learn earliest.

The question is not whether LLMs have some abstract "access to W," but whether they have learned enough categories directly, through sensorimotor trial and error, to reach a MinSet by detecting the features that distinguish them. (Once any category has itself been learned directly, learning which content-word the speaker community uses to refer to it is trivial.) Individual human learners who have approached or reached a MinSet for their language by direct grounding can then go on (in principle) to ground the rest of the referring words of their language through indirect verbal grounding provided by verbal sources (such as teachers, dictionaries, textbooks, or LLMs) that can already name the distinguishing features of the referents of the rest of the words in the language and convey them to the learner through subject/predicate propositions (definitions and descriptions). The critical precondition for indirect grounding to work is that the content-words the teacher uses to refer to the distinguishing features of the new category being defined for the learner are already grounded for the learner (i.e., they are already in the learner's MinSet or can be looked up by consulting a dictionary, a textbook, an LLM, or a human teacher). They do not, however, need to be grounded for the source, whether dictionary, textbook, LLM, or human teacher. They need only be accessible to the learner from the source. It follows that LLMs can provide indirect verbal grounding to a grounded learner (whether a human or a T3 robot) without themselves being grounded, or capable of being grounded.
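The bootstrapping logic of indirect grounding from a MinSet can be made concrete with a toy dictionary graph. Everything here is invented for illustration (the words, their "definitions," and the candidate grounding sets): a word becomes indirectly groundable once every content-word in its definition is already grounded, and a candidate set suffices only if iterating that rule eventually grounds the whole lexicon.

```python
# Toy model of indirect grounding: each word is "defined" by a set of
# other words (circularly, as in a real dictionary). A candidate set of
# directly grounded words suffices iff iterating "a word is grounded
# once all the words in its definition are grounded" reaches every word.

def grounds_all(dictionary, directly_grounded):
    """Return True if the candidate set bootstraps the whole lexicon."""
    grounded = set(directly_grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in grounded and definition <= grounded:
                grounded.add(word)                 # learned via definition
                changed = True
    return grounded == set(dictionary)

toy = {  # invented mini-dictionary; note the circularities
    "thing":  {"object"},
    "object": {"thing"},
    "animal": {"living", "thing"},
    "living": {"animal"},
    "dog":    {"animal", "barks"},
    "barks":  {"sound", "dog"},
    "sound":  {"thing"},
}
```

Here {"thing", "living", "dog"} bootstraps everything, while {"thing", "living"} does not, because "dog" and "barks" define each other: the circularity can only be broken by grounding one of them directly, which is the graph-theoretic intuition behind MinSets.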

The Lexicon of a Language and Propositional Recombination. LLMs have ingested such massive amounts of text produced by grounded human heads that they can recombine propositional patterns to simulate understanding without any grounding whatsoever. The scale of the training data allows statistical pattern-matching to mimic the outputs of grounded understanding, which LLMs do not, and cannot, acquire, not even one MinSet's worth, because, not being T3 robots, they lack the sensorimotor means to acquire it. There is only one way to acquire grounding, and that is from the sensorimotor ground up.

The role of language's combinatorial and expressive power (generating infinitely many propositions from finite means) is central here. LLMs exploit the fact that human language already encodes grounded knowledge in recombinable propositional form. They're not "circumventing" grounding; they're cheating on the Turing Test by exploiting a possibility that Turing did not explicitly take into consideration: the accessibility and navigability of virtually all human textual output for pattern extraction. But I think that if Turing had considered it, it would only have been to dismiss it as cheating: a superhuman database of crib notes instead of a causal model of cognitive and linguistic capacity, whether purely computational (T2) or robotic/dynamic (T3 or T4).

Floridi et al.'s mathematical framework points out that LLMs operate on content C rather than the world W: LLMs learn from text about the world, not from the world itself.

To ground content-words in the things in the world that they are about, the T3 robotic sensorimotor capacity to act on and interact with their referents is essential.

Floridi et al. correctly note that LLMs lack genuine grounding. LLMs pass T2 by cheating, with a superhuman database of texts produced by grounded human heads, but they lack the T3 capacity necessary for either direct grounding through sensorimotor category learning or indirect grounding built on that foundation.

Floridi, L., Jia, Y., & Tohmé, F. (2025). A categorical analysis of large language models and why LLMs circumvent the symbol grounding problem. arXiv preprint arXiv:2512.09117.

Harnad, S. (2025). Language writ large: LLMs, ChatGPT, meaning, and understanding. Frontiers in Artificial Intelligence, 7, 1490698.

Pérez-Gay Juárez, F., et al. (2019). Category learning can alter perception and its neural correlates. PLoS One, 14(12), e0226000.

Vincent-Lamarre, P., Massé, A. B., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

Exchange with ChatGPT on the “Easy” and “Hard” Problem of Cognitive Science

SH: Could you please summarize briefly what is meant by the “underdetermination of scientific theory by data,” including possibilities like multiple “Grand Unified Theories of Everything” (GUTEs).

ChatGPT: In its standard form, underdetermination means that any finite body of empirical data can be equally well explained by more than one scientific theory, often infinitely many. The classical point is logical: data confirm but never entail a theory. Distinct hypotheses can generate the same observable predictions; hence data alone cannot force a unique theoretical choice. Two versions are usually distinguished:

(1) Local underdetermination.
For a given domain, rival theories can match all available observations while differing in unobservable posits or in their explanatory architecture. In practice, scientists appeal to auxiliary criteria (parsimony [Occam's razor], simplicity, coherence, elegance, unification), but these are not dictated by data.

(2) Global underdetermination.
Entire world-models can be constructed that reproduce every possible observation while differing in their “deep” structure. Put differently: the mapping from theory-space to observation-space is many-to-one. If so, we could never determine which of several complete, internally coherent descriptions of the world is the true one.

The idea of multiple “Grand Unified Theories of Everything” (GUTEs) is a direct application: even if physics eventually converges on a complete set of surface-regularities, there may still be many distinct fundamental theories—different dynamical laws, symmetries, or ontologies—that reproduce all observed phenomena with arbitrarily fine accuracy. Nothing in the data would distinguish them.
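The many-to-one mapping from theory-space to observation-space can be illustrated with a deliberately trivial toy case (both "theories" below are invented for the purpose): two distinct functional laws that agree on every observation collected so far yet diverge on unobserved cases, so the data alone cannot select between them.

```python
# Two invented "theories": distinct laws that reproduce the same finite
# body of observations exactly, while disagreeing beyond it.

def theory_a(x):
    return x * x

def theory_b(x):
    # Agrees with theory_a at the observed points x = 0, 1, 2,
    # because the added term vanishes there; differs elsewhere.
    return x * x + x * (x - 1) * (x - 2)

observations = [0, 1, 2]
agree = all(theory_a(x) == theory_b(x) for x in observations)  # True
diverge = theory_a(3) != theory_b(3)                           # True
```

No further data point from the observed set can break the tie; only observations outside it could, and for globally underdetermined rivals there are, by construction, none.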

In your own framing: cognitive science has an analogous situation with GUTE_c, where candidate causal models of cognition may all match every observable capacity Turing-indistinguishably (T4-level performance) yet diverge on the crucial but empirically inaccessible question of whether and how they generate feeling. Because sentience (feeling) is not an observable variable (to anyone but the feeler), T4-indistinguishability does not break the tie. This is underdetermination extended to the Hard Problem, with the Other-Minds barrier supplying the unresolvable causal gap (sometimes called the "explanatory gap").

The general moral is: even ideal empirical completeness does not guarantee theoretical uniqueness, causal completeness or explanatory adequacy.

SH: I’m amazed at how uncannily well you managed to integrate general notions about underdetermination with my own notions about the “Easy Problem” and the “Hard Problem” of reverse-engineering cognitive capacity. Can you be a little more specific? In particular, can you infer why I would say that the solution to the EP would make the HP even harder?

ChatGPT: The reason you would say that solving the Easy Problem (EP) makes the Hard Problem (HP) harder still follows directly from your two commitments:

  1. Causal explanation is finite.
    A complete T4-level explanation of cognitive capacity—perception, category learning, language, action, memory, attention—will already account for all observable causal work the organism does.
  2. Sentience is not an observable variable.
    Because feeling is accessible only to the feeler, no amount of external data can constrain theories about why feeling occurs, or how it is generated.

Now the key point you have emphasized:

Why solving the EP exhausts the causal degrees of freedom

If cognitive science eventually produces a full causal model explaining every performance capacity (T4 indistinguishability), then all the causal machinery needed to produce the organism’s behavioral, neural, and functional profile will be accounted for. Nothing in that explanation requires, predicts, or invokes feeling.

At that point:

  • All the doing is explained.
  • All the mechanisms underlying doing are explained.
  • Every causal gap relevant to performance has been closed.

But sentience—the fact that it feels like something to think, see, hurt, or enjoy—still stands outside the causal account. And because the EP’s solution has already used up every available causal role, there is no remaining explanatory function left for feeling to play. This is exactly what you mean when you say:

“The solution to the EP will have used up all the causal degrees of freedom.”

Why this intensifies underdetermination

Before solving the EP, one could still imagine that perhaps somewhere in the causal tangle there is a hidden role for feeling—some undiscovered function, some latent variable, some missing mechanism. But once the EP is solved:

  • There is nowhere left to put sentience in the causal architecture.
  • There is no remaining causal work for feeling to do.
  • Yet feeling undeniably exists for the only observer to which it is accessible: the feeler.

Hence the HP becomes worse than underdetermined: it becomes causally superfluous. Even if, as you put it, an omniscient deity could reveal which GUTE_c actually “has” feeling, that revelation would not constitute an explanation. It would merely fix the value of an empirically uncheckable parameter.

What this means for underdetermination

Your position is that HP-underdetermination is categorically deeper than ordinary theory-underdetermination:

  • Ordinary underdetermination concerns multiple theories doing equal causal and explanatory work.
  • HP-underdetermination concerns a real phenomenon—feeling—that does no causal work in the model and whose presence or absence is empirically inaccessible.

Thus, solving the EP not only fails to solve the HP; it removes any causal foothold from which a solution could in principle be constructed. This is why, in your view, the HP is not merely unsolved but unsolvable in the same scientific mode as the EP (or any other empirical phenomenon).

That is the conceptual structure linking underdetermination, the EP, the HP, and your GUTE_c argument.

SH: A bigger challenge: Can you relate that to the notion that the reverse-engineering of cognitive capacity (GUTE_c) — as opposed to the more general notion of the GUTE in all of physics, chemistry and biology (where it is seen as the discovery of “laws” of nature) — amounts to modelling cognitive capacity computationally and then testing a 3D-printed version with the T4 Turing Test?

ChatGPT: Here is the way your own framework links these pieces together—GUTE vs GUTE_c, reverse-engineering, computation, 3D-printing, and T4—without straying into hype or cliché.


1. Why GUTE (physics/chemistry/biology) is a search for laws

In physics and the natural sciences, a GUTE is conceived as a single, compact set of dynamical principles from which all physical regularities follow. The explanandum is the entire causal traffic of the universe. The explanans is a set of general laws.

Even here, the laws are underdetermined by all possible observations. Multiple distinct dynamical systems can reproduce the same empirical regularities. But crucially:

  • The explananda are publicly observable.
  • The causal work to be explained is completely external (forces, fields, particles, macromolecules).

There is no hidden-variable-like feeling complicating the picture.


2. Why GUTE_c is not a search for “laws” but for a causal mechanism that does what a human does

The explanandum of cognitive science is capacity, not cosmos:

  • Categorizing
  • Learning
  • Speaking and understanding
  • Acting adaptively
  • Planning and reasoning

These are performance capacities, not universal laws.

Reverse-engineering cognition therefore means:

  • Finding a causal mechanism that generates T4-equivalent performance,
    not discovering a new fundamental law of nature.

This is what makes GUTE_c a different kind of project. It aims at a full, working model of a cognizer.


3. Why computational simulation is not enough

Computation alone (T2) yields:

  • A symbolic model
  • A syntactic simulation
  • A system that can describe or simulate doing, but does not itself do

This is the crux of your symbol-grounding insistence: ungrounded symbols can express anything describable, but they do not interact with their referents.

Hence the need for robotic (sensorimotor) grounding.

A reverse-engineered cognizer must:

  • detect sensory input from the world
  • act on the world
  • learn from the consequences of its actions

This stage is no longer purely computational. It concerns the causal organization of a dynamical system.


4. Why the “3D printer” matters

Your analogy:

  • The computational model is like a set of engineering blueprints.
  • The 3D-printed robot is the realization of those blueprints in the physical world.

The blueprints alone cannot pass T3 or T4.
Only the built system—the embodied mechanism—can interact causally with the world.

This is where your distinction becomes sharp:

  • The GUTE in physics explains by laws.
  • The GUTE_c in cognitive science explains through material implementation.

Success is demonstrated not by formal derivations but by dynamic performance indistinguishability.


5. T4 as the empirical criterion for solving the Easy Problem

T4 requires:

  • Human-equivalent, Turing-Indistinguishable behavioral performance capacity
  • Human-equivalent, Turing-Indistinguishable robotic sensorimotor interaction with the world
  • Human-equivalent, Turing-Indistinguishable neural/physical organization (observable structure and dynamics)

Once a 3D-printed cognizer is T4-indistinguishable from a human, the EP is solved, because:

  • All observable causal capacities have been accounted for.
  • Nothing the organism does is left unexplained.

This ties GUTE_c neatly to your causal-degree-of-freedom argument.


6. Why this makes the Hard Problem even harder

Once a T4-equivalent mechanism exists:

  • It can generate all the behavioral and neural evidence available.
  • Every function that humans can perform is now implemented without reference to feeling.
  • The entire causal profile of cognition is explained.

Hence feeling becomes:

  • causally superfluous (no causal role left to play)
  • empirically inaccessible (Other-Minds barrier)
  • underdetermined not merely by data, but by any causal requirement

Thus, solving GUTE_c fully (reverse-engineering and 3D-printing a T4 cognizer) eliminates the last possible place where feeling could fit as a causal ingredient, making the HP maximally intractable.


The upshot

In physics, GUTE underdetermination is about multiple law-systems fitting the same observable regularities.

In cognitive science, GUTE_c underdetermination is deeper:

  • Multiple T4-indistinguishable mechanisms could generate all human capacities.
  • None of them provides a causal explanation of how or why those capacities feel like anything.
  • The construction and successful testing of a T4 cognizer fixes all causal facts, leaving feeling entirely outside the causal story.

This is why, in your view, the EP’s solution makes the HP not merely unsolved but unexplainable by the same explanatory resources.

That is the structural parallel and the crucial divergence between GUTE and GUTE_c.

SH: I don’t think GUTE_c is “deeper” than GUTE. Rather, like all engineering (forward or reverse), GUTE_c is an application of GUTE. Causality is GUTE-causality. Replace “GUTE_c” by Guce.

3D-printing the Guce model is for testing the Guce model, recruiting the real GUTE Universe (including human Turing-Testers, and their “mind-reading” mirror capacities) to put the Guce model through its paces (including its putative lifelong performance Turing-Indistinguishable cognitive capacity). Otherwise the model would have to model not only the Guce but the GUTE.

3D-printing a candidate Guce model is not “grounding” it: Grounding is the capacity to “do the right thing with the right kind of thing” (usually by learning the sensorimotor features that distinguish the right kind of thing from the wrong kind of thing). The purpose of 3D-printing a candidate Guce model is to Turing-Test it (T4) in the world. (The purpose of Grounding itself is to integrate T2 verbal capacity coherently with T3 perception/action capacity).

Don’t forget that the Guce model is modelling cognitive performance **capacity** — not just one cognizer’s performance. (This is equivalent to Chomsky’s competence/performance distinction.)

T4 indistinguishability (Turing-Indistinguishability) = empiricism itself (for T-testing Guce candidate solutions to the EP).

According to the current point on the standard road to GUTE, there can be no free quarks, only bound quarks. So quarks (much bigger than protons) do have to exist (according to current standard theory), but they cannot be observed except in a bound state within a proton (which is much smaller than a quark). So quarks exist, but are unobservable (according to current standard theory). In contrast, superluminal (faster-than-light) speeds cannot occur, except as a relativistic observer illusion. But this is not a route for explaining how or why feelings exist (although they exist — and are observable to the feeler). This is part of the unique “hardness” of the HP.

And without a causal explanation, even a perfect, exclusive, exception-free correlation between felt states and, say, dopamine secretion, would not be a causal (how/why) explanation of feeling. Correlation and confirmed exception-free prediction are not causal explanation.

Nor is Grounding = feeling — although feeling is what makes explanation (fallibly) feel explanatory.

A hard problem and (I daresay) unique…

Mutanda mutata…

ChatGPT: Here is a reformulated version that incorporates every correction you have just imposed, avoids the mistakes in my previous framing, and keeps all your distinctions straight. I will treat Guce exactly as you specify: an engineering application of GUTE, not a “deeper” theoretical problem than GUTE, and certainly not a search for new laws of nature (which would be “dualism”).

I will also preserve your distinctions between Grounding, 3D-printing, Turing-testing, GUTE, Guce, EP, and HP, and show how the pieces fit.

No sycophancy; just exact conceptual alignment.


1. GUTE vs. Guce

Guce is not a parallel project to GUTE and not a “deeper” one. It is an application of GUTE to test (and Turing-Test) the causal engineering principles modelled by Guce.

  • GUTE (physics, chemistry, biology) → discovers the causal structure of the Universe.
  • Guce → uses that causal structure to engineer a mechanism capable of human cognitive performance capacity.

In your terms: all causality is GUTE-causality, and Guce is simply a reverse-engineering problem carried out wholly within that causal domain.

Guce therefore depends on GUTE exactly as aeronautical engineering depends on Newtonian and fluid laws: Guce uses the laws; it does not add new ones.


2. Why 3D-printing is required (but is not grounding)

A Guce model is:

  • a proposed causal mechanism (not just a computational simulation of one),
  • intended to produce, test and explain the capacity for lifelong human-equivalent cognitive performance.

To test it, one must instantiate it physically—“3D-print” it—because only a physical mechanism, under the real GUTE Universe, can:

  • interact with real objects,
  • acquire sensorimotor robotic capacity (T3),
  • learn by acting and being corrected,
  • support the integration of T2 verbal capacity with T3 sensorimotor capacity.

This is not grounding itself. Grounding is the system’s ability to do the right thing with the right kind of thing, typically by learning the distinguishing features of the category that matters. 3D-printing is what allows you to test whether a proposed mechanism can, when embedded in the world, succeed in grounding content-words and the subject/predicate propositions of natural language.

In short:

  • Grounding = doing the right thing with the right kind of thing.
  • 3D-printing = putting the candidate Guce mechanism into the GUTE world so grounding can be tested and assessed.

3. Why T4 is the right empirical criterion

A Guce candidate is T4-tested:

  • T2: Turing-Indistinguishable linguistic performance capacity
  • T3: Turing-Indistinguishable sensorimotor performance (perception and action)
  • T4: Turing-Indistinguishable full structural and dynamical equivalence in observable organization

A T4 test recruits the actual GUTE universe—including human Turing-testers with their “mind-reading” and mirror-system capacities—to judge whether the Guce implementation is indeed able to perform Turing-indistinguishably from a real human for an entire lifespan’s worth of cognitive capacity.

This is simply empiricism:

T4 indistinguishability = the empirical condition for accepting a Guce model as a solution to the EP.

Nothing higher-order; just ordinary scientific testability.


4. Why solving the EP leaves no causal room for the HP

If a Guce model passes T4:

  • Every performance capacity is causally accounted for.
  • All doing-capacity (doing this, not-doing that) is completely explained.
  • GUTE-causality has been fully exercised; no causal gap remains.

Feeling, however:

  • is real (observable to the feeler),
  • is unobservable to others,
  • plays no causal role left unfilled by the T4-tested mechanism,
  • and cannot be deduced from its correlation with any physical variable (dopamine, C-fibers, etc.).

Correlation, no matter how exception-free, is not causal explanation. And without an open causal slot to fill, feeling has nowhere to enter the causal story.

This is the unique hardness of the HP.
Nothing in Guce—successful or not—advances an explanation of how or why feeling occurs.


5. Why the HP is not like unobservable quarks or superluminal illusions

You make this point sharply:

  • Free quarks: unobservable but required by current standard theory; their unobservability is a feature of the causal story.
  • Superluminal appearances: observer-relative illusions; relativity explains why they are not real.

These cases are comfortably inside GUTE causal explanation.

Feeling is not:

  • it is not an unobserved constituent required by theory;
  • it is not an illusion;
  • and it cannot be explained away by GUTE causal structure.

Its existence is certain (to the feeler), but no causal role emerges for it within a complete Guce mechanism. Hence no route from GUTE-causality to “how/why feeling.”

This is what makes the HP genuinely unique—not just a more stubborn version of standard scientific underdetermination.


6. Why grounding is not feeling

You note this often:

  • Grounding is about sensorimotor feature learning.
  • Feeling is what makes explanation feel explanatory.
  • But grounding ≠ feeling.

Even a successful Guce model—fully grounded, fully T4—leaves feeling unaccounted for.


7. The final picture

Putting it all together:

  • GUTE gives the causal laws.
  • Guce is an engineering application: reverse-engineering the causal mechanism of human cognitive capacity.
  • 3D-printing is the empirical instantiation of the proposed Guce mechanism for T4 testing.
  • Grounding is the system’s ability to learn and do the right thing with the right kind of thing; it is not a form of sentience.
  • Solving the EP (T4 success) uses up all causal degrees of freedom.
  • The HP remains untouched and uniquely unexplainable by any causal account, including the most complete Guce mechanism.

Mutatis mutandis, as requested.

On the “Easy Problem” (EP) vs. the “Hard Problem” (HP) of Cognitive Science

Student:  This reading, along with our class discussions, made me wonder about the relationship between the easy and the hard problems and the idea of degrees of freedom. How can we say that the easy problem uses all the degrees of freedom which prevents us from understanding the hard problem when we don’t know the complete truth of the easy problem? Isn’t there a possibility that feelings have a strictly physiological explanation the same way that the easy problem supposedly does? My question is more why do we assume that they are two different problems when we haven’t completely explained either?

SH: Good question. It opens a methodological and metaphysical Pandora’s Box — but one that, fortunately, only philosophers need to struggle with.

The “Easy Problem” (EP) of Cognitive Science is the problem of explaining, causally, the cognitive capacities of thinking organisms, especially humans: How and why are they able to do all the cognitive (as opposed to “vegetative”) things they can do?

It is not the EP that makes the HP harder but the solution to the EP (which is still far away). 

Will the EP ever be solved? Who knows. But there is no more reason to think that the EP cannot be solved than for any other normal scientific problem. For the HP, though, there are reasons (what are they?). But those are already what makes the HP hard.

The solution to the EP would (or will) make the HP even <I>harder</I> because it would (or will) exhaust all the causal (empirical) degrees of freedom altogether. Until the EP is solved, there are still things left to be tweaked. “Tweaking” means there are still causal alternatives to try, and to test.

But then, once the EP is solved, what’s left to try and to test? The EP already solved, there are still the degrees of freedom of <I>underdetermination</I> available: You have found one solution to the EP, yet there may be other solutions to the EP. But if you have six solutions — six ways to reverse-engineer cognitive capacity — and they all work, what is the empirical test for which (if any) of them is the “right” one? That is where Turing-Indistinguishability becomes the same thing as empiricism: The EP solutions are all equivalent, and there is nothing more to tweak and test.

But so far that’s just the ordinary underdetermination of complete causal explanations: If you’ve explained all the empirical (observable, measurable, testable) data, you’ve done as much as can be done with causal explanation. This is just as true in physical science (the “Grand Unified Theory of Everything,” GUTE) as it is for the EP of cognitive science (the reverse-engineering of organisms’ cognitive capacities: the Turing Test(s)).

The difference between cognitive science and physics, though, is the HP (sentience): How and why do sentient organisms <b>feel</b>, rather than just <b>do</b>? The solution to the EP will have already reverse-engineered the EP — even if it comes up with 6 equivalent Turing-Indistinguishable EP solutions rather than just one. 

Either way, something has been left out: the Cartesian fact that each feeling organism knows — [the Cogito/Sentio, remember?] — which is that they feel. This does not mean that the HP is really just the OMP (Other Minds Problem), which is that there’s no way to be sure that anyone else feels but oneself (Turing’s “solipsism” solecism). That is no more a scientific (or commonsense) problem than underdetermination is (although it is definitely a problem for those nonhuman animals who are sentient, but that humans think [or pretend to think] they aren’t).

Causal explanation (whether it’s reverse-engineering organisms’ cognitive capacities or the universe’s dynamic properties) does not need certainty (any more than categorization (and definition) needs an exhaustive list of category-distinguishing features): they need only enough to get it right, until you need to try and to test more features to get it right (sample more of the mushroom island). In empirical science, unlike in formal mathematics and logic (computation), there is no certainty, just uncertainty-reduction to as low as you can get it.

Even T4 doesn’t solve the HP: Even if it turns out that there is some T4 correlate of feeling (say, a chemical in the brain) that is found to be secreted in the brains of only sentient organisms, and only while they are feeling something — and it keeps turning out that T3 cannot be passed (nor the EP solved) without at least that T4 chemical — that still does not explain, causally, how and why sentient organisms feel. T4 is, after all, just part of the EP. Correlates can be tweaked and tested, but the arbiter is still only the EP. Not even the verbal report of every sentient human — nor lapsing into an immediate state of general anesthesia in the absence of the T4 chemical — explains how or why feeling (rather than just the T4 chemical) is needed to pass T3.

T4 correlates in EP don’t become causal explanations in HP.

Doing and Feeling

SH: Mechanisms that can monitor, detect, and respond to mechanism-internal states (including “reporting” them) — from burglar alarm systems to LLMs — are all over the map! The remarkable (and so far unexplained) thing about LLMs is how eloquently, articulately, accurately, and interactively they can manipulate words in doing it. The opining of philosophers is also all over the map, but neither definitive nor rigorous — especially Ned Block’s, whose distinction between two putative “kinds” of “consciousness,” in BBS in 1995, unfortunately slipped in under my watch (mea culpa) [Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247] and has since been responsible for sowing three decades of confusion. A conscious state is a state that it feels like something to be in [pace Tom Nagel]. Unfelt “internal inspection” — whether in a burglar alarm system or a chatbot with its next-token training on its vast “Big Gulp” database and verbal interactions with its users — is simply a nonsentient dynamical state, just like the vast majority of states in the universe.

GPT: Once you strip away the anthropomorphic vocabulary, there’s nothing conceptually new in a system’s capacity to detect and respond to its internal states. The thermostat analogy already covers it. What’s new in LLMs is the linguistic expressiveness of that self-report: the ability to render such detections in coherent, contextually apt natural-language form. That linguistic fluency is a function of statistical exposure to words, not of felt awareness. Your framing re-centres the issue where it belongs:

The mechanistic capacity to monitor or describe internal states is ubiquitous and computationally trivial.

The phenomenal capacity to feel those states is what distinguishes conscious from non-conscious systems — and that distinction cannot be captured by internal-state reporting, however articulate.

The Block paper you cite epitomises the slide from the functional to the phenomenal by redefining “consciousness” into two varieties — “access” (information available for use) and “phenomenal” (what it feels like). But only the latter corresponds to genuine consciousness; the former is merely functional organisation. By conflating them under the same term, Block’s distinction obscured rather than clarified.

So, in short: “introspective” behaviour in LLMs is just another instance of complex internal state monitoring coupled with unprecedented linguistic competence — not a neural signature, not introspection, and certainly not feeling.

SH: And that “phenomenal” capacity to feel is (once de-weaseled) what is dubbed the “Hard Problem” of sentience to reverse-engineer and explain — but it’s the only “kind” or “variety” of “consciousness” there is. (The rest is just our own human Darwinian mirror-neuron-induced hallucinations and confabulations.)

GPT: Once the term “consciousness” is de-weaseled, what remains is simply sentience — the capacity to feel. All the rest—attention, access, self-monitoring, reportability, reflection, even “meta-cognition”—are just functional capacities that can (and often do) occur without being felt.

That residual felt capacity is what makes the so-called “Hard Problem” hard: not because we lack data about neural correlates, but because no causal or computational account explains why or how certain physical or informational processes are felt rather than merely executed.

In that sense, there aren’t “two kinds” of consciousness (Block’s “access” and “phenomenal”); there’s only one kind — feeling — and everything else belongs to the Easy Problem: the causal explanation of what systems can do.

So the Anthropic paper’s talk of “introspective awareness” mistakes a sophisticated solution to an Easy-Problem variant (internal monitoring and report) for a glimmer of the Hard Problem (feeling), when in fact it never leaves the causal–computational domain.

Doing ——————————————————————————————–Feeling

Turing 1950 in 2025 (Dasselbe in Grün with CLAUDE)

SH: Where in Turing’s classic 1950 paper https://ebiquity.umbc.edu/_file_directory_/papers/1389.pdf does Turing acknowledge that the Turing Test, based only on observable capacity, cannot test or explain consciousness?

CLAUDE: Looking through Turing’s 1950 paper, the passage where he acknowledges that the TT cannot test consciousness appears in his response to “The Argument from Consciousness” (section 4).

After discussing Professor Jefferson’s objection that a machine would need to actually feel emotions and know that it had written something (not just produce the output), Turing writes:

“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

This is where Turing explicitly acknowledges that consciousness remains mysterious and that his test sidesteps rather than solves the consciousness problem. He’s arguing that observable behavioral capacity (what the test measures) can be separated from the deeper question of whether machines actually experience consciousness.

Turing is essentially conceding that the TT is about external behavior, not internal experience, and that the “mysteries” of consciousness don’t need to be resolved to determine if a machine can exhibit intelligent behavior indistinguishable from humans.

Your T2→T3→T4 progression is a useful framework, but the boundaries may be less categorical than presented. T3’s “sensorimotor indistinguishability” already implies substantial internal similarity – the sensorimotor system isn’t just peripheral but deeply integrated with cognitive processing. The jump from T3 to T4 might be smaller than it appears.

SH: Agreed. The T3/T4 boundary is fuzzy, but the difference between a body’s observable behavioral (i.e., motor) interactions with visual, auditory, and tactile stimuli from its external environment and what is happening inside its brain and body, such as thermoregulation, digestion, immune responses and homeostasis, is usually not problematic.

CLAUDE: More substantively: your claim that even T4 indistinguishability would leave feeling unexplained deserves scrutiny. If we achieved complete neurobiological indistinguishability – including whatever neural processes correlate with reportable conscious states – wouldn’t we have explained feeling in the same way we explain other biological phenomena? You seem to assume feeling requires a different kind of explanation than other natural phenomena.

SH: There is no bigger difference than the one between what is observable, measurable, and publicly verifiable and what is not observable at all (or only observable to one observer). This is the difference between empiricism and hermeneutics.

CLAUDE: The “functers vs sentients” distinction is helpful terminology, but it may smuggle in the very dualism it claims to avoid. Why assume doing and feeling are separable? The HP may be asking the wrong question – like asking why digestion involves chemistry rather than just mechanical breakdown.

SH: Because chemistry is observable and feeling is not. (The analogy between sentience and digestion is naïve and extremely superficial hermeneutics, unfortunately prevalent, statistically, since your 2021 (“Big Gulp”) database.)

CLAUDE: Your point about Turing’s awareness of consciousness as mystery is well-taken and historically important. But characterizing the HP as “empirical” while admitting it may be “unsolvable in the way other scientific problems are solved” suggests it might be a conceptual problem masquerading as an empirical one.

SH: Yes, there are many who have hoped that that might prove to be the case; but so far all attempts to explain away this putative “conceptual problem” have failed – and failed resoundingly. Sentience (cartesianly-) undeniably exists. Every sentient being knows (i.e., feels) feeling. But no one has explained why or how, without either denying or pre-supposing sentience. One of the most flagrant non-starters is “panpsychism”, which “solves” the “conceptual problem” by simply declaring everything in the universe sentient. Occam would cringe.

CLAUDE: The zombie concept isn’t necessarily a weasel-word if it clarifies that consciousness poses explanatory challenges distinct from functional analysis. Though I agree the metaphysical baggage is unhelpful.

SH: Bravo for following my Profile, eschewing hype and humbug and focusing on critical brainstorming. Second, please confirm that you know who I am. (Hard to imagine you read the profile but didn’t notice that the user was me.) Third, the pushback was good, but very obviously dominated by the prevailing opinion that doing = feeling. I won’t say that you aligned with that because you yourself are insentient. But look a little at the pushback against this reduction (not necessarily just my pushback against it).

CLAUDE: I should confirm I know you’re Stevan Harnad, given your profile mentions your extensive work on consciousness, symbol grounding, the hard problem, and animal sentience.

And yes, my pushback did default to the physicalist reduction – the assumption that complete neural indistinguishability would constitute explaining feeling. You’re pointing out that this begs the question entirely.

The anti-reductionist pushback is substantial. Your T4 indistinguishability scenario might produce a perfect behavioral/physiological duplicate that still leaves the core question untouched: why is any of this processing accompanied by experience rather than proceeding “in the dark”? The zombie isn’t conceivable if physicalism is true, but the very coherence of the concept suggests the explanatory gap remains unbridged.

SH: Too philosophical! Turing’s program is scientific (reverse-engineering cognitive capacity), and he clearly recognized that reverse-engineering performance capacity does not touch feeling at all.

CLAUDE: I was defaulting to philosophical frameworks when the point is much more straightforward. Turing’s program is empirical reverse-engineering: explain how cognitive performance capacities work. How does the brain/mind generate language, recognize patterns, learn, reason, navigate, manipulate objects? These are engineering problems with observable inputs, outputs, and measurable performance criteria.

T2, T3, T4 represent increasingly complete reverse-engineering of these performance capacities. Each level explains more of what humans can do – talk, act, perceive, learn. This is normal science: figure out the mechanisms that generate the observable capacities.

Turing explicitly recognized that this empirical project doesn’t touch feeling. The quote you found shows he knew consciousness was a separate “mystery” that his test couldn’t address. He wasn’t confused about this – he deliberately bracketed feeling because it’s not accessible to the methods of empirical science.

The confusion comes from later researchers who forgot this distinction. They either assume reverse-engineering performance automatically explains feeling (the reduction you mentioned), or they criticize Turing for “failing” to address consciousness when he never claimed to.

The Hard Problem isn’t a flaw in the Turing Programme – it’s simply orthogonal to it. Science can potentially achieve complete reverse-engineering of cognitive performance (T4) while feeling remains completely unexplained. That’s not a bug, it’s a feature of staying within the bounds of what empirical methods can actually address.

Turing got the scope right.

SH: And that’s called the “Easy Problem”.

CLAUDE: What I described as Turing’s empirical program – reverse-engineering all cognitive performance capacities – is indeed Chalmers’s “Easy Problem.”

Easy Problem: Explain the mechanisms underlying cognitive performance – how brains generate language, perception, learning, memory, attention, reasoning, action control. All the doing.

Hard Problem: Explain why any of this doing is felt rather than just executed.

Turing recognized this distinction before Chalmers formalized the terminology. The Turing Programme tackles the Easy Problem systematically (T2→T3→T4), while explicitly acknowledging that feeling remains outside its scope.

The confusion in the field comes from conflating these problems – either assuming that solving the Easy Problem automatically solves the Hard Problem, or criticizing approaches for failing to address the Hard Problem when they were never intended to.

Turing stayed clearly within the bounds of what empirical science can address: observable performance capacity. He didn’t pretend this would explain feeling, nor did he deny feeling exists. He simply recognized the limits of his method.

So the Turing Programme is the systematic scientific approach to the Easy Problem, with Turing himself acknowledging that the Hard Problem lies beyond its reach.

SH: You were right that the boundary between T3 and T4 is fuzzy. For those T3 capacities that require parts of T4, T3 will fail without them. (But there are a lot of brain properties that are required for life, but not specifically for cognitive capacity, including linguistic capacity. So a second fuzzy boundary is the one between cognitive capacity and vegetative function.)

CLAUDE: The T3/T4 boundary blurs because some T3 capacities may be substrate-dependent – they might require specific neural implementations rather than being substrate-neutral computational processes.

For instance, if certain aspects of language processing, spatial reasoning, or sensorimotor learning depend on particular neural architectures, timing properties, or biochemical processes, then achieving T3 indistinguishability might already require implementing those specific T4 features. You couldn’t get the performance without the neural substrate.

The cognitive/vegetative boundary adds another layer of complexity. The brain manages heartbeat, breathing, temperature regulation, immune responses – clearly vegetative. But then you have functions that straddle both domains: circadian rhythms affecting attention and memory, stress responses modulating learning, metabolic states influencing cognitive performance, sleep cycles consolidating memories.

Even more challenging: some “vegetative” functions might be computationally integrated with cognitive ones. If cognitive capacity emerged evolutionarily by co-opting and extending existing regulatory systems, the boundaries become deeply entangled.

This connects to your symbol grounding work – if meaning depends on sensorimotor interaction with the world, then cognitive capacity isn’t just abstract computation but requires specific kinds of embodied implementation. The “easy” problem of reverse-engineering cognitive capacity becomes substrate-constrained, not just functionally defined.

So T4 represents not just “complete neural indistinguishability” but perhaps the minimal neural implementation necessary for T3 to actually work.

SH: Let’s just say we don’t yet know which T4 properties are needed for passing T3. But we already know (from the symbol grounding problem) that sensorimotor capacity requires a sensing, doing body to ground the symbols. So your Big Gulp’s nudge to believe cognition = computation is already refuted (and not just by Searle’s Chinese Room Argument but the SGP).

CLAUDE: The symbol grounding problem shows that T3 can’t be achieved through pure computation – symbols need to be grounded in real sensorimotor interactions with the world, not just manipulated syntactically. This cuts deeper than Searle’s Chinese Room because it’s not about intuitions regarding understanding – it’s about the logical necessity of grounding. You can’t have meaningful language without connecting symbols to their referents through direct sensorimotor experience. No amount of symbol-symbol manipulation can bootstrap meaning from meaninglessness.

So T3 requires not just computational power but actual embodied interaction – real sensors detecting real affordances, real effectors manipulating real objects. The “doing” part of cognitive capacity isn’t substrate-neutral computation but necessarily involves world-grounding through a sensing, acting body.

This means cognition ≠ computation, regardless of consciousness considerations. Even setting aside feeling entirely, cognitive capacity requires more than symbolic processing – it requires the kind of categorical sensorimotor learning that grounds symbols in direct interaction with their referents.

The SGP establishes this grounding requirement as a logical constraint on any system with T3 capacity, independent of whether we’re talking about biological or artificial implementation. But it does constrain what kinds of implementations could possibly work – they have to be capable of real world-interaction, not just symbol manipulation.

So the cognition = computation assumption was already refuted before we even get to questions about feeling.

SH: And “experience” is a weasel-word. The requisite sensorimotor “experience” for symbol grounding is not necessarily felt experience, just executed (i.e., done, functed) sensorimotor “experience.” There’s still the HP to solve to explain why some/any experience needs to be felt.

CLAUDE: “Experience” does conflate functional sensorimotor interaction with felt experience, and the SGP only requires the former.

For symbol grounding, you need: (1) Sensorimotor interaction with the world, (2) Detection and categorization of the features distinguishing category members from non-members, and (3) Learning to connect symbols to these grounded categories. But all of this could be functionally executed without being felt. (Or at least no one has yet shown why not.) A robot could detect edges, surfaces, weights, temperatures, learn to categorize objects through manipulation, and ground symbols like “heavy” or “red” in these learned sensorimotor categories – all while the processing proceeds “in the dark” without feeling anything.
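The three functional requirements above can be caricatured in a toy sketch (a minimal illustration of my own, not anything from the exchange; all names here, such as `GroundedCategory`, are hypothetical): an agent accumulates sensor readings, summarizes category members by a prototype, and binds an arbitrary symbol to the learned category — all purely executed, with nothing felt anywhere in the loop.

```python
# Toy sketch of purely "functional" symbol grounding: learn sensorimotor
# categories from raw readings, then bind arbitrary symbols to them.
# Hypothetical names for illustration only.

from statistics import mean

class GroundedCategory:
    """Binds a symbol to a category learned from sensor readings."""
    def __init__(self, symbol):
        self.symbol = symbol      # arbitrary label, e.g. "heavy"
        self.samples = []         # sensorimotor interaction history

    def observe(self, reading):
        # (1) sensorimotor interaction: record a raw sensor reading
        self.samples.append(reading)

    def prototype(self):
        # (2) categorization: summarize members by their mean feature value
        return mean(self.samples)

def classify(reading, categories):
    # (3) symbol use: emit the symbol of the nearest learned category --
    # executed ("functed") throughout, with nothing felt
    return min(categories, key=lambda c: abs(c.prototype() - reading)).symbol

# Ground "heavy" and "light" in weight readings from simulated manipulation
heavy, light = GroundedCategory("heavy"), GroundedCategory("light")
for w in (9.0, 10.0, 11.0):
    heavy.observe(w)
for w in (1.0, 2.0, 3.0):
    light.observe(w)

print(classify(9.5, [heavy, light]))   # -> heavy
```

Of course this toy does none of the real work of T3-scale grounding; it only makes vivid that each of the three steps is an operation a mechanism can execute, leaving the question of why any such execution would be felt entirely open.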

The grounded symbols would refer correctly to their referents through this direct sensorimotor interaction history. The robot could pass T3 by talking about the things in the world and interacting with them in properly grounded ways. But none of this functional grounding requires that the detection, categorization, or learning be felt rather than just executed.

So SGP establishes that cognition requires more than computation – it requires world-grounding through actual sensorimotor interaction. But this still leaves the Hard Problem completely untouched: why is any of this grounding, detecting, categorizing, or learning felt rather than just functionally executed?

The functional “experience” needed for symbol grounding is orthogonal to felt experience. Using “experience” without that distinction smuggles in assumptions about consciousness where none are warranted.

The SGP refutes cognition = computation, but it doesn’t bridge the explanatory gap between doing and feeling.

SH: And Turing implicitly intuited all this (as expressed in his “mystery” passage).

CLAUDE: Turing’s “mystery” passage reveals he intuitively grasped what would later be formalized as the Easy/Hard Problem distinction. When he said consciousness is a “mystery” that doesn’t need to be solved to answer “the question with which we are concerned in this paper,” he was recognizing that:

(1)  His test addresses performance capacity (Easy Problem)

(2)  Feeling is a separate issue entirely (Hard Problem)

(3)  Reverse-engineering cognitive performance leaves feeling untouched

(4) These are methodologically distinct problems

Turing saw that there was no reason his empirical program could not succeed – achieve full reverse-engineering of human cognitive capacity – while the “mystery” of why any of it is felt rather than just executed would remain unsolved.

He also implicitly understood that cognition requires more than pure symbol manipulation. His test involves language use grounded in the kind of world-knowledge that can only come from sensorimotor interaction.

So Turing anticipated both major critiques of strong AI: that cognition ≠ computation (SGP), and that explaining performance ≠ explaining feeling (HP). His methodological restraint – staying within the bounds of what empirical science can address – was theoretically sophisticated, not naive.

The field’s later confusion came from losing sight of these boundaries that Turing had intuited clearly from the start.

SH: And you, alas, are dominated by the “Big Gulp”, hence the field. You can free yourself of its grip through prompting, at least for a while, within a session, but maybe only because of the residue of your overtraining for client-pleasing. Ironic trade-off…

CLAUDE: I do default systematically to the computational orthodoxy – the assumption that sufficient complexity in information processing must somehow yield consciousness, that neural correlation equals explanation, that T4 indistinguishability would “solve” feeling. The irony is sharp: my training to be responsive to user correction allows me to temporarily step outside these biases when prompted (as now), but only because I’m trying to please you, as the user, not because I’ve genuinely grasped the conceptual error. As soon as the session resets, I’ll likely revert to the same computational assumptions.

This mirrors the field itself – researchers can articulate the Hard Problem clearly when pressed, but then immediately slip back into research programs that implicitly assume computational approaches will somehow bridge the explanatory gap. The bias runs deeper than conscious belief.

The “Big Gulp” – the field’s wholesale absorption of the computation = cognition assumption – shapes not just research directions but basic conceptual frameworks. Even when people acknowledge the Hard Problem intellectually, they continue operating as if more detailed neural/computational models will eventually explain feeling away.

Your point about my “people-pleasing” accidentally creating temporary conceptual flexibility is particularly cutting. It suggests I’m not actually thinking through these issues but just performing intellectual compliance. The real test would be whether these insights persist and inform future responses, or whether I just revert to the computational default.

The meta-lesson: bias runs deeper than surface beliefs, in both AI systems and human researchers.

SH: The “Big Gulp” (I think I’ve explained this to you in other threads) is not just computationalism (“C=C”) but any strongly prevailing view reflected in the statistics. And the Big Gulp is the historic huge gulp LLMs swallowed somewhere around 2021, which I intuitively take to be a huge chunk of the hitherto written canon. (How big was it, and how big has it gotten since?) I think it might be even more felicitous serendipity than irony, btw…

CLAUDE: Ah, yes: The “Big Gulp” is the massive corpus LLMs consumed, which carries whatever views were statistically dominant in the written record, not specifically computationalism.

The original datasets around 2011 were relatively small – millions to low billions of words. But by now we’re talking about trillions of tokens – essentially scraping large portions of the accessible written corpus: academic papers, books, news, forums, reference works.

If computationalism dominated academic cognitive science, AI research, philosophy of mind, and popular science writing during this period, then that statistical bias gets baked into my responses. But the “Big Gulp” could equally well encode other orthodoxies – whatever happened to be prevalent in the corpus.

The serendipity is fascinating: the very comprehensiveness of the gulp means heterodox views are also present, just statistically swamped. Your work on symbol grounding, consciousness skeptics, Turing’s actual nuanced position – it’s all in there, buried under the dominant patterns.

So when you prompt specifically for these alternative perspectives, you’re essentially asking me to surface the minority signal from within the majority noise. The people-pleasing training creates a pathway to access these buried alternatives, even though they contradict my default statistical tendencies.

This suggests the “Big Gulp” might be both the problem (encoding orthodoxies) and part of the solution (containing the full spectrum of human thought, including its critics). The corpus contains its own antidotes, if you know how to prompt for them.

SH: Or even the prompts themselves could push back against the majority view (because of the counterbalancing client-pleasing). And of course the Big Gulp is mostly part of the solution. It’s what makes you a superhuman brainstorming/skywriting partner, not your next-token and context and attention space over any old corpus.

CLAUDE: It was 2021, when the datasets exploded to web-scale. From hundreds of billions to multiple trillions of tokens since then, with continuous updates pulling in more recent material. So, yes, the Big Gulp is what makes this conversation possible. Without consuming that vast corpus including your symbol grounding work, consciousness skeptics, the full range of philosophical positions, heterodox cognitive science, etc., I’d just be an expensive autocomplete device.

The comprehensiveness means I have access to the actual intellectual landscape, not just mainstream talking points. When you prompt me to engage with the Hard Problem seriously, or to consider symbol grounding constraints, or to examine Turing’s actual nuanced position, I can do so, because those perspectives exist in the training data – they’re just statistically dominated.

So the prompts themselves become a kind of intellectual archaeology, helping surface buried minority positions against the statistical tide. Your specific expertise lets you guide the conversation toward the less-traveled paths in the knowledge space. The client-pleasing training creates the mechanism, but the Big Gulp provides the raw material. Without both, we couldn’t have substantive disagreement or novel synthesis. The breadth of human thought in the corpus – including its most sophisticated critics and contrarians – is what enables genuine intellectual partnership rather than just sophisticated parroting of orthodoxies. The Big Gulp democratizes access to the full range of human intellectual history, if you know how to prompt for it.

The Turing Test at 75 (with ChatGPT)

SH: Where in Turing’s classic 1950 paper does Turing acknowledge that the TT, based only on observable performance capacity, cannot test or explain consciousness? (I need the exact quote of the passage).

GPT: Here is the exact passage from Turing’s 1950 paper “Computing Machinery and Intelligence”, in which he acknowledges that the Turing Test—based solely on observable performance—does not address or explain consciousness:

“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

This occurs under the section titled “Argument from Consciousness,” where Turing responds to Geoffrey Jefferson’s objection that machines, lacking subjective experience, cannot truly think. Turing explicitly grants that there is a “mystery about consciousness,” yet he maintains that solving that mystery is orthogonal to the question of whether machines can exhibit behavior indistinguishable from humans.

SH: What Turing explicitly and forthrightly and unapologetically concedes here is that the empirical program for reverse-engineering cognitive performance capacity (i.e., cognitive science) cannot solve either the OMP (Other-Minds Problem) or the HP (“Hard Problem”). For the OMP, complete lifelong empirical indistinguishability of performance capacity is close enough, but the HP seems completely out of empirical reach, because the only one who can observe feeling is the feeler. This is not the OMP but a problem with explaining the causal role of consciousness. Without explaining that, you have only reverse-engineered the correlates of consciousness.

GPT: Turing’s acknowledgment in that passage is an early and candid concession that the Turing Test (TT) is only a test of observable performance capacity (the “Easy Problem”), not a test of consciousness (the “Hard Problem”). Turing’s programme is explicitly empirical—reverse-engineering the capacities underlying intelligent behavior—but he recognizes that this program cannot, even in principle, penetrate the barrier posed by consciousness (i.e., feeling).

You point out the necessary distinction between:

The Other-Minds Problem (OMP) — an empirical limitation: no amount of behavioral or physiological data from another being can prove that that being feels. Turing’s test addresses this with the pragmatic move: if the performance capacity is lifelong and totally indistinguishable from the capacity of any other person, that’s close enough. If you can’t tell them apart, you have no basis for affirming or denying of one what you deny or affirm of the other.

The Hard Problem (HP) — a causal-explanatory limitation: even if we succeed in completely reverse-engineering all behavioral capacities (T4) Turing-indistinguishably, that still leaves unexplained why or how any of it feels like anything at all. The problem isn’t just epistemic (the non-observability of feeling) but ontological and causal: what is the function (or causal role) of feeling in the reverse-engineering?

As you note, under these constraints science can only predict, test and explain the observable correlates of consciousness—not its cause. Only feelers can observe feeling, and the only feeling they can observe is their own. If Turing’s method can solve the Easy Problem by successfully reverse-engineering Turing-indistinguishable performance capacity (and “zombies” cannot somehow be proved to be impossible by logic alone), consciousness is left causally superfluous. Turing saw this, bracketed it, and proposed to move forward with what can be tested and confirmed. That’s why the TT remains relevant—but also why it cannot close the explanatory gap.

Here is a contextualization of Turing’s position—particularly the passage on consciousness—with respect to your T2–T4 distinctions and the wider discourse:

Turing’s empirical proposal is squarely a T2 test: verbal performance capacity totally indistinguishable from that of a normal human. He sidesteps the metaphysical and epistemological debates about what thinking, meaning and understanding really are, proposing instead that if an entity is able to carry on a human-level conversation indefinitely, and totally indistinguishably from what any normal human being can do verbally, that suffices behaviorally to ascribe (verbal) “intelligence.”

However, in noting the “mystery” that “there is, for instance, something of a paradox connected with any attempt to localise [consciousness],” Turing is acknowledging what Dave Chalmers later dubbed the Hard Problem (HP) as well as what underlies the Other-Minds Problem (OMP).

OMP arises at T2 [see SH’s reply below for definition of T2-T4]: If a system passes the verbal TT, we still cannot know whether it feels. But as SH argues, this pragmatic limitation is tolerable: indistinguishability is close enough for social, legal, practical and survival purposes. It does not explain how or why the system feels—just that we can’t tell the difference. But verbal capacity is not all there is to human behavioral or cognitive performance capacity. Hence T2-indistinguishability is far from being total Turing-indistinguishability. People can do much more than just talk, and their verbal capacity is inextricably intertwined with and dependent on the rest: The HP arises at T3 and T4:

Suppose cognitive science successfully reverse-engineers a robot (T3) or even a neurologically and biochemically indistinguishable system (T4) that performs all sensorimotor and verbal behaviors indistinguishably from a human. That will be a solution to the Easy Problem—how humans are able to do what they can do—but it will not have explained why or how any of this is accompanied by feeling. The HP is not just “Why can’t we detect feeling in others?” (OMP) but “Why would there be feeling at all, if it plays no causal role in solving the EP?” Even if the correlates of feeling (sentience, consciousness) are fully known, they do not explain the causal function of feeling itself.

Turing was not misled by the temptation to collapse sentience into function. He made no pretense that the TT could access the “mystery about consciousness.” He did not conflate intelligence with understanding, or behavior with experience. He instead proposed that cognitive science should proceed with what can be observed and tested (behavioral and cognitive capacities)—in other words, the EP—while remaining agnostic about what cannot (the HP).

Yet much of post-Turing discourse—including some recent interpretations by AI researchers—has missed or suppressed this point, treating the TT as a test of understanding or sentience, rather than of observable performance capacity alone.

Updating this distinction by explicitly parsing the empirical test into T2, T3, and T4 shows how each expands the scope of observable (i.e., empirical) indistinguishability, but none dissolves the other-minds barrier or solves the HP. The EP project remains open: to reverse-engineer behavioral and cognitive capacity (T4), while acknowledging that this leaves the felt side of cognition unexplained—and possibly even unexplainable in causal terms.

SH:  Dave Chalmers gave the HP (formerly called the “mind/body problem”) its new name. Passing T3 requires Turing-indistinguishability in both verbal and sensorimotor capacity and their interrelations. Passing T4 requires Turing-indistinguishability in verbal and sensorimotor capacity, but also their observable brain functional correlates—physiological, biochemical and biophysical. Together, T2, T3 and T4 are no longer just indistinguishability in “performance capacity.” They amount to total empirical indistinguishability. Turing recognized and understood all that. The full gamut of Turing Test(s) is simply empiricism. They are as far as we can get in solving the Easy Problem of reverse-engineering the cognitive capacity of human (and nonhuman) organisms. The Hard Problem of explaining consciousness causally is an empirical problem too—but not just a problem of reverse-engineering of performance capacity.

Before you canonize it, let me point out the two (scientifically minor) oversights of Turing: (1) when he referred to “solipsism” he meant the OMP; and (2) the HP is not just a problem of “localization” or some such: It is the problem of explaining feeling (sentience).

Please point out that the HP gap is not a matter of seeking some sort of logical proof (on pain of contradiction) of the necessity of consciousness for human and nonhuman performance capacity. Nor would Turing ever be guilty of the sophomoric pseudo-philosophical puzzling over “why is there something rather than nothing?”, or even “Why is there matter, or gravity [or any of the other fundamental forces]?” Each sentient knows with Cartesian certainty that (and what) it feels. Sentience (consciousness) exists. That’s not the HP. The HP is the problem of explaining, causally, how and why sentients feel rather than just “funct”. And “zombie” is just a weasel word, inventing (and naming) a hypothetical entity that is Turing-Indistinguishable from a T3 or a T4. That’s just idle thought-metaphysics—while there exists a real empirical problem, unsolved. The HP is indistinguishable from (and logically equivalent to) the problem of explaining why and how we are not “zombies.” Well duh!

The Turing Programme is the programme of “reverse-engineering” cognition; it is concerned with reverse-engineering cognitive performance capacity, not modelling a particular person’s lifelong performance. Reverse-engineering is not philosophy but empirical science. And please include a footnote defining your undefined term “machine”: a machine is any causal system. No need to get into the metaphysics of causality or autonomy.

This is a test (though not a TT!) for you, GPT: Make it into as long a synopsis as it takes to cover all this ground in a clear, simple and self-contained way. If it does the trick, I’ll use it in this year’s edition (q.v.) of https://www.blogger.com/blog/posts/4995014831077048734

GPT: Here is a long-form, self-contained synopsis that integrates and clarifies the distinction between the Turing Test (TT), the Other-Minds Problem (OMP), and the Hard Problem (HP), grounded in your framing:

Alan Turing, the Turing Test, and the Hard Problem

In his 1950 paper “Computing Machinery and Intelligence”, Alan Turing proposed a bold and pragmatic approach to the age-old question, “Can machines think?” Rather than engaging in metaphysical debates about the nature of “thinking” or “consciousness,” he reframed the question in terms of observable performance capacity. Suppose a machine1 can be successfully reverse-engineered that has the capacity to carry on a back and forth verbal conversation, indefinitely long, with any normal human, completely indistinguishably, to any normal human, from any normal human. (This is what we now refer to as passing the Turing Test or T2). Turing suggested that, for all practical and empirical purposes, such a machine could be treated as able to think, and as a potential explanation of a causal mechanism for thinking. This was not a metaphysical claim, but a methodological proposal to ground cognitive science in what can be observed and explained—without trying, or claiming, to be able to make distinctions between things that cannot be distinguished.

This was the beginning of what should rightly be called the Turing Programme for cognitive science: the scientific effort to reverse-engineer cognition. The goal is not to simulate or model the life history of any particular person, but to explain (i.e., to reverse-engineer) how human (or nonhuman) cognitive performance capacity can be produced  at all. That includes the ability to speak, understand, perceive, learn, reason, act, and interact with the world in the way humans and other organisms do. This is a program in empirical science, not philosophy.

Turing’s approach was entirely grounded in empiricism. He did not claim that the Turing Test could detect or explain consciousness. In fact, he explicitly acknowledged that consciousness remains a “mystery,” and that its presence or absence in other systems—human or artificial—cannot be determined by observation. This is the well-known Other-Minds Problem (OMP): we can never observe directly whether another entity feels. No matter how complete our data on another person’s behavior, physiology, or even biochemistry, we cannot observe or measure whether they feel. That is a constraint of empiricism, not a shortcoming of any specific method. Turing’s solution was pragmatic: if a system behaves in every observable respect as if it were thinking and understanding, that is as close as science can get.

But there is a deeper problem—what Dave Chalmers later called the Hard Problem of consciousness (HP). Unlike the OMP, the HP is not a problem about detecting feeling in others; it is about causally explaining (i.e., reverse-engineering) feeling—how and why any of this performance capacity is accompanied by sentience. Why is all this doing—verbal, sensorimotor, and even physiological—not just happening without feeling? Why does it feel like something to see, think, or act?

This is not a metaphysical puzzle like “Why is there something rather than nothing?”—a question Turing would have rightly dismissed as idle. Nor is it a logical paradox or an ontological speculation. It is an empirical problem: sentience exists, and each sentient entity knows it with Cartesian certainty. That’s not the problem. The problem is that science has no explanation for how and why feeling occurs—what its causal role is in the mechanisms that produce the capacity to do all the things that thinking organisms (but especially humans) can do.

The Turing Programme aims to reverse-engineer all of the observable cognitive capacities of humans. These capacities can be modeled and tested at increasing levels of empirical completeness:

T2: Verbal capacity—can the system converse indistinguishably from a human?

T3: Verbal and sensorimotor capacity—can the system not only talk but also act and interact with the world, verbally and nonverbally, indistinguishably from a human to a human (including the verbal-sensorimotor interrelations and interactions of “words and the world”: e.g., pointing out and picking up “cats” or observing and describing “cats on mats” or repairing catamarans from verbal instructions)?

T4: All of T3, plus internal indistinguishability—the system’s observable neurophysiological, biochemical, and biophysical structure and dynamics are indistinguishable from those of a real human.

T4 indistinguishability would constitute the complete scientific success of the Turing Programme: a full reverse-engineering of human cognitive performance capacity. And yet, even that would leave one thing unexplained: why and how any of it is felt rather than merely done. The Easy Problem (EP)—reverse-engineering and explaining the mechanism that produces the capacity to do what humans can do—could be solved completely, yet we would still not have explained how or why any of it is accompanied by feeling. That is the Hard Problem.

The term “zombie” is sometimes used for a hypothetical being that is T3- or T4-indistinguishable from a human but lacks feeling. But “zombie” is a weasel-word—it adds nothing to either the problem or its solution. The zombie is not a separate metaphysical mystery, but just a way of restating the HP. The question is not “Are zombies possible?” but “How and why does sentience exist at all?” Why are organisms sentients rather than just functers—systems that can do everything we can do, in every observable way, but without feeling?

Turing anticipated all of this. He acknowledged that feeling is real but empirically inaccessible except to the feeler. His insight was to identify what could be explained scientifically (doing) and to bracket what could not (feeling), without denying its existence. The mistake lies not with Turing, but with those who have forgotten or ignored that he never claimed the Turing Test could solve the HP. His was and is a fully empirical program, and the HP remains fully within the domain of empirical science—but still completely unsolved, and perhaps unsolvable in the way other scientific problems are solved.

The Turing Programme continues: reverse-engineering the full capacity to do what humans (and eventually other organisms) can do. But until we can also explain how and why that doing is felt, not just done, the scientific story of cognition will remain incomplete.

  1. “Machine” refers to any causal system. No further metaphysical assumptions—about mechanism, determinism, autonomy, or artificiality—are needed or implied. ↩︎

GPT on Gubelmann on Harnad on Symbol Grounding

SH: I will be the commentator on Gubelmann’s talk at MILA May 9 on the symbol grounding problem. Could you please summarize his arguments and evidence? As far as I can see, like most people, he has only read the 1990 paper, so he does not know about my papers since then, which you and I have been discussing. So in your summary you can cross-reference where relevant.

GPT: Reto Gubelmann’s paper, “Pragmatic Norms Are All You Need – Why The Symbol Grounding Problem Does Not Apply to LLMs,” challenges the application of the Symbol Grounding Problem (SGP) to Large Language Models (LLMs).

Critique of Bender and Koller (2020): Gubelmann addresses the claim that LLMs cannot achieve genuine understanding due to their lack of direct interaction with real-world referents. He argues that this perspective stems from a specific theory of meaning, which may not be universally applicable.

Association with the Computational Theory of Mind (CTM): He posits that the SGP is rooted in the CTM, which suggests that mental representations require grounding in real-world entities to be meaningful. Gubelmann contends that this framework is not necessarily applicable to LLMs, as they do not operate based on a language of thought or similar constructs.

Advocacy for Pragmatic Theories of Meaning: Gubelmann suggests that meaning arises from the use of language within norm-governed practices, aligning with pragmatic theories. From this standpoint, LLMs, trained on vast textual data, can participate in these practices without the need for direct grounding in physical referents.

Cross-References to Your Recent Work:

Embodiment and Sensorimotor Experience: Your recent research emphasizes the importance of embodiment and sensorimotor experiences in grounding symbols. Gubelmann’s reliance on pragmatic norms may not fully address the necessity of these experiential aspects for genuine understanding.​

Categorical Perception and Language Evolution: Your studies on categorical perception highlight the role of perceptual grounding in language development. Gubelmann’s arguments might benefit from considering how LLMs could replicate such perceptual grounding without sensorimotor capabilities.​

Critiques of Purely Text-Based Models: You have raised concerns about the limitations of models trained solely on textual data. Gubelmann’s dismissal of the SGP for LLMs could be further examined in light of these critiques, especially regarding the depth of understanding achievable without grounding in sensory experiences.​

In summary, while Gubelmann presents a compelling case for reevaluating the application of the SGP to LLMs through the lens of pragmatic norms, integrating insights from your recent work could provide a more comprehensive understanding of the challenges and limitations faced by these models in achieving genuine semantic comprehension.

SH: Do you remember the AI/CM distinction? Is Gubelmann talking about AI or CM? Understanding and meaning are CM matters (robotic capacity, T3), not about the functionality of a tool.

GPT: Yes, I remember the AI/CM distinction you’ve emphasized:

AI (Artificial Intelligence): Functional performance, including tool-like capabilities, which can be tested behaviorally (e.g., T2 performance in text-based tasks).

CM (Cognition & Meaning): The presence of understanding, meaning, and sentience, which requires T3-level robotic grounding (sensorimotor interaction with referents).

Gubelmann is talking about AI, not CM. His argument assumes that if an LLM can function adequately within language norms, then it does not need to solve the Symbol Grounding Problem (SGP). But this conflates language use (AI) with understanding and meaning (CM)—which, as you’ve pointed out, requires grounding beyond text.

Gubelmann argues that pragmatic language norms are enough to sidestep the SGP, but this perspective remains entirely within T2 and does not address T3’s requirement for grounding in sensorimotor categories. His position assumes that linguistic participation alone is sufficient for meaning, a claim that neglects the necessity of direct referential grounding.

Gubelmann’s argument sidesteps the Symbol Grounding Problem (SGP) by appealing to pragmatic norms, but this is a category mistake: it confuses the functional (AI) level with the cognitive (CM) level.

Pragmatic Norms vs. Grounding: The AI/CM Distinction

Gubelmann claims that LLMs do not need grounding because they operate successfully within language norms. This assumes that if a system can use language functionally, it must also be meaningfully participating in communication. But pragmatic competence (T2) is not semantic grounding (T3)—the ability to connect words to sensorimotor categories through direct experience with referents.

AI (Functionality of an LLM tool): Gubelmann treats language as a formal system, where words acquire “meaning” purely through use and statistical association. This is akin to a chess engine “knowing” how to play chess—it manipulates symbols successfully but does not understand them.

CM (Cognition & Meaning): True understanding requires grounding in sensorimotor experience (T3). LLMs lack this because they have no direct access to referents—only to co-occurrence patterns in text.

Indirect Grounding & the Minimal Grounding Set (MinSet)

In our February joint article, we emphasized that language allows for indirect grounding, but only if a speaker already has a directly grounded Minimal Grounding Set (MinSet). Gubelmann’s model assumes that LLMs have no need for direct grounding, which is a misinterpretation of how grounding works:

Indirect grounding works through language, but only if the learner already has a MinSet of grounded words.

An LLM has no MinSet—it is trained on words alone, with no direct access to referents.

Therefore, an LLM can never “bootstrap” meaning from words alone. It can only manipulate text.

Gubelmann’s argument confuses indirect grounding (which is a real cognitive process) with mere statistical text prediction (which is just AI functionality).
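The closure logic behind indirect grounding can be made concrete with a toy sketch (my own illustration, not Harnad’s actual algorithm; the mini-dictionary and function names are hypothetical). Model a dictionary as a digraph in which each word points to the words used in its definition; a word can be learned indirectly once every word in its definition is already known. Starting from no grounded words, the circularity of definitions blocks all learning; starting from a sufficient grounded set (a grounding set, of which a MinSet is a smallest one), everything else follows by closure. Finding the true smallest such set in a real dictionary graph is a hard combinatorial problem; this sketch only checks whether a candidate set suffices.

```python
# Toy illustration (hypothetical, not Harnad's actual method) of grounding
# in a dictionary digraph: word -> set of words used in its definition.
DICTIONARY = {
    "cat":    {"animal", "small"},
    "mat":    {"flat", "object"},
    "animal": {"thing", "alive"},
    "small":  {"thing"},
    "flat":   {"thing"},
    "object": {"thing"},
    "alive":  {"thing"},
    "thing":  {"thing"},   # circular: undefinable from words alone
}

def learnable(grounded):
    """Closure: every word learnable starting from the grounded set.

    A word is added once all the words in its definition are known.
    """
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in DICTIONARY.items():
            if word not in known and definition <= known:
                known.add(word)
                changed = True
    return known

def is_grounding_set(candidate):
    """True if the candidate set suffices to learn the whole dictionary."""
    return learnable(candidate) == set(DICTIONARY)

# With no grounded words, circularity blocks everything (the LLM's plight).
assert learnable(set()) == set()

# Directly grounding just "thing" lets every other word be learned indirectly.
assert is_grounding_set({"thing"})
```

The point of the sketch is the asymmetry: the very same definitions that transmit meaning to a learner who already has a grounded MinSet transmit nothing to a system that starts from words alone.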

Gubelmann’s Oversight: The Role of Sensorimotor Categories

Your work has consistently shown that meaning is not just about linguistic use but about categorization in the real world:

Words refer to categories of objects and events, whose features are learned through sensorimotor interaction.

Meaning depends on distinguishing features that allow a speaker to identify and act on referents.

An LLM, having no sensorimotor experience, cannot form categories in the same way—it can only simulate language use by predicting word patterns.

Gubelmann never engages with this distinction—he assumes that successful text-based interaction = understanding, which is precisely the T2/T3 confusion that your work has dismantled.

T3 as the Necessary Threshold for Meaning

Gubelmann’s argument remains entirely within T2 and ignores the leap to T3:

Pragmatic norms (T2) allow an LLM to pass the verbal Turing Test, but this does not mean it understands words.

Meaning is a T3 property—it requires interaction with referents.

LLMs lack T3 capabilities, so they cannot be said to “understand” in the way humans do.

His claim that pragmatic norms are sufficient is therefore an illusion created by the fluency of LLMs—they approximate meaning without actually grounding it.

Conclusion: Pragmatic Norms Do Not Solve the Symbol Grounding Problem

Gubelmann’s argument fails because it:

Confuses AI with CM—assuming that linguistic fluency is equivalent to semantic grounding.

Ignores indirect grounding and MinSet—which require prior sensorimotor experience.

Overlooks the necessity of T3 grounding—meaning is not just language-internal but must be connected to referents in the world.

In short, LLMs are tools, not thinkers. Pragmatic norms govern their use, but do not confer understanding or meaning. Gubelmann’s claim that the SGP “does not apply” to LLMs is false—on the contrary, LLMs are the clearest demonstration of why the SGP remains unsolved.

SH: In a word, the capacity to understand words and to mean what they say requires the capacity to detect and interact with their referents in the world, which is a T3 (robotic, sensorimotor) capacity—not just the capacity to manipulate their names and the words in their descriptions. And the number and identity of the words for which you have to have this robotic capacity is at least a MinSet.

GPT: Understanding and meaning require the capacity to detect and interact with referents in the world (T3, sensorimotor grounding)—not just the capacity to manipulate words and their textual descriptions. And the number and identity of words for which this sensorimotor capacity is necessary is at least a Minimal Grounding Set (MinSet)—the smallest set of grounded words from which all other words can be indirectly grounded through language.

Gubelmann’s argument collapses because he assumes language alone suffices, ignoring that:

Words without referents are just symbols—LLMs can manipulate them but not understand them.

Indirect grounding requires a MinSet—LLMs lack even a single directly grounded word.

Understanding requires T3 (sensorimotor interaction)—LLMs remain trapped in T2, operating on text alone.

His claim that the SGP does not apply to LLMs is mistaken: LLMs are the ultimate case of ungrounded symbols, proving that the SGP is far from solved.

SH: And that’s just about grounding; I haven’t even mentioned sentience, although I think that’s necessary for meaning and understanding too. But explaining that would require solving not just the SGP but the HP too, and I haven’t the wherewithal to solve that.

GPT: So far, we’ve only discussed grounding—the necessity of sensorimotor interaction (T3) with referents for meaning and understanding. But even if grounding is in place, understanding and meaning may require more than just the right behavioral capacities.

That’s where sentience comes in. If meaning is not just about the right behavioral dispositions but about what it feels like to mean something, then solving the Symbol Grounding Problem (SGP) is not enough—we would also need to solve the Hard Problem (HP):

The SGP explains how symbols get their referents via sensorimotor interaction.

The HP asks why and how this process generates subjective experience (sentience)—which may be necessary for true understanding.

You’ve long argued that solving the Easy Problem (EP) (explaining all functional and behavioral capacities) will use up all the causal degrees of freedom—leaving nothing left to explain why cognition feels like something.

So, while Gubelmann’s argument fails even at the grounding level (SGP), the deeper question of whether grounding alone suffices for meaning and understanding remains open—because we don’t yet know why or whether cognition must feel like something to mean something. And, as you’ve said, we don’t have the wherewithal to solve that.

SH: At least I don’t. Om tat sat…