{"id":2449,"date":"2025-03-04T08:44:13","date_gmt":"2025-03-04T13:44:13","guid":{"rendered":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=2449"},"modified":"2025-03-04T08:44:13","modified_gmt":"2025-03-04T13:44:13","slug":"gpt-on-gubelmann-on-harnad-on-symbol-grounding","status":"publish","type":"post","link":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2025\/03\/04\/gpt-on-gubelmann-on-harnad-on-symbol-grounding\/","title":{"rendered":"GPT on Gubelmann on Harnad on Symbol Grounding"},"content":{"rendered":"\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:&nbsp;<\/strong>I will be the commentator on Gubelmann&#8217;s talk at MILA May 9 on the symbol grounding problem. Could you please summarize&nbsp;<a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.651.pdf\">his arguments and evidence<\/a>? As far as I can see, like most people, he has only read the 1990 paper, se he does not know about my papers since, that you and I have been discussing. So in your summary you can cross-reference where relevant.&nbsp;<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>GPT:&nbsp;<\/strong>Reto Gubelmann&#8217;s paper, &#8220;Pragmatic Norms Are All You Need \u2013 Why The Symbol Grounding Problem Does Not Apply to LLMs,&#8221; challenges the application of the Symbol Grounding Problem (SGP) to Large Language Models (LLMs).&nbsp;\u200b<a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.651\/\" target=\"_blank\" rel=\"noreferrer noopener\">ail-workshop.github.io+4ACL Anthology+4ACL Anthology+4<\/a><\/p>\n\n\n\n<p class=\"has-small-font-size\">Critique of Bender and Koller (2020):&nbsp;Gubelmann addresses the claim that LLMs cannot achieve genuine understanding due to their lack of direct interaction with real-world referents. 
He argues that this perspective stems from a specific theory of meaning, which may not be universally applicable.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Association with the Computational Theory of Mind (CTM):&nbsp;He posits that the SGP is rooted in the CTM, which suggests that mental representations require grounding in real-world entities to be meaningful. Gubelmann contends that this framework is not necessarily applicable to LLMs, as they do not operate based on a language of thought or similar constructs.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Advocacy for Pragmatic Theories of Meaning:&nbsp;Gubelmann suggests that meaning arises from the use of language within norm-governed practices, aligning with pragmatic theories. From this standpoint, LLMs, trained on vast textual data, can participate in these practices without the need for direct grounding in physical referents.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Cross-References to Your Recent Work:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Embodiment and Sensorimotor Experience:&nbsp;Your recent research emphasizes the importance of embodiment and sensorimotor experiences in grounding symbols. Gubelmann&#8217;s reliance on pragmatic norms may not fully address the necessity of these experiential aspects for genuine understanding.\u200b<\/p>\n\n\n\n<p class=\"has-small-font-size\">Categorical Perception and Language Evolution:&nbsp;Your studies on categorical perception highlight the role of perceptual grounding in language development. Gubelmann&#8217;s arguments might benefit from considering how LLMs could replicate such perceptual grounding without sensorimotor capabilities.\u200b<\/p>\n\n\n\n<p class=\"has-small-font-size\">Critiques of Purely Text-Based Models:&nbsp;You have raised concerns about the limitations of models trained solely on textual data. 
Gubelmann&#8217;s dismissal of the SGP for LLMs could be further examined in light of these critiques, especially regarding the depth of understanding achievable without grounding in sensory experiences.\u200b<\/p>\n\n\n\n<p class=\"has-small-font-size\">In summary, while Gubelmann presents a compelling case for reevaluating the application of the SGP to LLMs through the lens of pragmatic norms, integrating insights from your recent work could provide a more comprehensive understanding of the challenges and limitations faced by these models in achieving genuine semantic comprehension.\u200b<a href=\"https:\/\/dstrohmaier.com\/SGP-Pragmatism\/\" target=\"_blank\" rel=\"noreferrer noopener\">dstrohmaier.com<\/a><\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:&nbsp;<\/strong>Do you remember the AI\/CM distinction? Is Gubelmann talking about AI or CM? Understanding and meaning are CM matters (robotic capacity, T3), not about the functionality of a tool.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>GPT:&nbsp;<\/strong>Yes, I remember the&nbsp;AI\/CM distinction&nbsp;you\u2019ve emphasized:<\/p>\n\n\n\n<p class=\"has-small-font-size\">AI (Artificial Intelligence): Functional performance, including tool-like capabilities, which can be tested behaviorally (e.g., T2 performance in text-based tasks).<\/p>\n\n\n\n<p class=\"has-small-font-size\">CM (Cognition &amp; Meaning): The presence of understanding, meaning, and sentience, which requires T3-level robotic grounding (sensorimotor interaction with referents).<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann is talking about&nbsp;AI, not CM. His argument assumes that if an LLM can function adequately within language norms, then it does not need to solve the&nbsp;Symbol Grounding Problem (SGP). 
But this conflates&nbsp;language use (AI)&nbsp;with&nbsp;understanding and meaning (CM)\u2014which, as you&#8217;ve pointed out, requires grounding beyond text.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann argues that pragmatic language norms are enough to sidestep the SGP, but this perspective remains&nbsp;entirely within T2&nbsp;and does not address T3\u2019s requirement for grounding in sensorimotor categories. His position assumes that linguistic participation alone is sufficient for meaning, a claim that neglects the necessity of&nbsp;direct referential grounding.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann\u2019s argument sidesteps the&nbsp;Symbol Grounding Problem (SGP)&nbsp;by appealing to&nbsp;pragmatic norms, but this is a&nbsp;category mistake: it confuses the&nbsp;functional (AI) level&nbsp;with the&nbsp;cognitive (CM) level.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Pragmatic Norms vs. Grounding: The AI\/CM Distinction<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann claims that&nbsp;LLMs do not need grounding&nbsp;because they operate successfully within language norms. This assumes that if a system can&nbsp;use language functionally, it must also be&nbsp;<em>meaningfully participating<\/em>&nbsp;in communication. But pragmatic competence (T2) is not&nbsp;semantic grounding (T3)\u2014the ability to&nbsp;connect words to sensorimotor categories&nbsp;through direct experience with referents.<\/p>\n\n\n\n<p class=\"has-small-font-size\">AI (Functionality of an LLM tool): Gubelmann treats language as a formal system, where words acquire &#8220;meaning&#8221; purely through&nbsp;use&nbsp;and statistical association. This is akin to a chess engine \u201cknowing\u201d how to play chess\u2014it manipulates symbols successfully but does not understand them.<\/p>\n\n\n\n<p class=\"has-small-font-size\">CM (Cognition &amp; Meaning): True&nbsp;understanding&nbsp;requires grounding in sensorimotor experience (T3). 
LLMs lack this because they have&nbsp;no direct access to referents\u2014only to co-occurrence patterns in text.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Indirect Grounding &amp; the Minimal Grounding Set (MinSet)<\/p>\n\n\n\n<p class=\"has-small-font-size\">In our&nbsp;February joint article, we emphasized that&nbsp;language allows for indirect grounding, but only if a speaker already has a&nbsp;directly grounded&nbsp;Minimal Grounding Set (MinSet). Gubelmann\u2019s model assumes that&nbsp;LLMs have no need for direct grounding, which is a&nbsp;misinterpretation&nbsp;of how grounding works:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Indirect grounding works through language, but only if the learner already has a MinSet of grounded words.<\/p>\n\n\n\n<p class=\"has-small-font-size\">An LLM has no MinSet\u2014it is trained on words alone, with no direct access to referents.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Therefore, an LLM can never \u201cbootstrap\u201d meaning from words alone.&nbsp;It can only manipulate text.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann&#8217;s argument confuses indirect grounding (which is a real cognitive process) with&nbsp;mere statistical text prediction&nbsp;(which is just AI functionality).<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann\u2019s Oversight: The Role of Sensorimotor Categories<\/p>\n\n\n\n<p class=\"has-small-font-size\">Your work has consistently shown that&nbsp;meaning is not just about linguistic use&nbsp;but about&nbsp;categorization in the real world:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Words refer to&nbsp;categories of objects and events, whose features are learned through sensorimotor interaction.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Meaning depends on&nbsp;distinguishing features&nbsp;that allow a speaker to identify and act on referents.<\/p>\n\n\n\n<p class=\"has-small-font-size\">An LLM, having no sensorimotor experience,&nbsp;cannot form categories in the same 
way\u2014it can only&nbsp;simulate language use by predicting word patterns.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann never engages with this distinction\u2014he assumes that&nbsp;successful text-based interaction = understanding, which is precisely the&nbsp;T2\/T3 confusion&nbsp;that your work has dismantled.<\/p>\n\n\n\n<p class=\"has-small-font-size\">T3 as the Necessary Threshold for Meaning<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann&#8217;s argument&nbsp;remains entirely within T2&nbsp;and ignores the leap to T3:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Pragmatic norms (T2) allow an LLM to&nbsp;pass the verbal Turing Test, but this does not mean it understands words.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Meaning is&nbsp;a T3 property\u2014it requires interaction with referents.<\/p>\n\n\n\n<p class=\"has-small-font-size\">LLMs lack T3 capabilities, so they cannot be said to \u201cunderstand\u201d in the way humans do.<\/p>\n\n\n\n<p class=\"has-small-font-size\">His claim that&nbsp;pragmatic norms are sufficient&nbsp;is therefore&nbsp;an illusion created by the fluency of LLMs\u2014they approximate meaning&nbsp;without actually grounding it.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Conclusion: Pragmatic Norms Do Not Solve the Symbol Grounding Problem<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann\u2019s argument fails because it:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Confuses AI with CM\u2014assuming that linguistic fluency is equivalent to semantic grounding.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Ignores indirect grounding and MinSet\u2014which require prior sensorimotor experience.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Overlooks the necessity of T3 grounding\u2014meaning is not just language-internal but must be&nbsp;connected to referents in the world.<\/p>\n\n\n\n<p class=\"has-small-font-size\">In short,&nbsp;LLMs are tools, not thinkers. 
Pragmatic norms&nbsp;govern their use, but&nbsp;do not confer understanding or meaning. Gubelmann\u2019s claim that the SGP \u201cdoes not apply\u201d to LLMs is false\u2014on the contrary,&nbsp;LLMs are the clearest demonstration of why the SGP remains unsolved.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:&nbsp;<\/strong>In a word, the capacity to understand words and to mean what they say requires the capacity to detect and interact with their referents in the world, which is a T3 (robotic, sensorimotor) capacity; not just the capacity to manipulate their names and the words in their descriptions. And how many (and which) words you have to have this robotic capacity with is at least a MinSet.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>GPT:&nbsp;<\/strong>Understanding and meaning require the capacity to detect and interact with referents in the world (T3, sensorimotor grounding)\u2014not just the capacity to manipulate words and their textual descriptions.<strong>&nbsp;<\/strong>And the&nbsp;number and identity of words for which this sensorimotor capacity is necessary&nbsp;is at least a&nbsp;Minimal Grounding Set (MinSet)\u2014the smallest set of grounded words from which all other words can be indirectly grounded through language.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Gubelmann\u2019s argument collapses because he&nbsp;assumes language alone suffices, ignoring that:<\/p>\n\n\n\n<p class=\"has-small-font-size\">Words without referents are just symbols\u2014LLMs can manipulate them but not understand them.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Indirect grounding requires a MinSet\u2014LLMs lack even a single directly grounded word.<\/p>\n\n\n\n<p class=\"has-small-font-size\">Understanding requires T3 (sensorimotor interaction)\u2014LLMs remain trapped in T2, operating on text alone.<\/p>\n\n\n\n<p class=\"has-small-font-size\">His claim that the&nbsp;SGP does not apply&nbsp;to LLMs is 
mistaken: LLMs&nbsp;are the ultimate case of ungrounded symbols, proving that the SGP is far from solved.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:&nbsp;<\/strong>And that&#8217;s just about grounding; I haven&#8217;t even mentioned sentience, although I think that&#8217;s necessary for meaning and understanding too. But explaining that would require solving not just the SGP but the HP too, and I haven&#8217;t the wherewithal to solve that.<\/p>\n\n\n\n<p class=\"has-small-font-size\"><strong>GPT:&nbsp;<\/strong>So far, we\u2019ve only discussed&nbsp;grounding\u2014the necessity of&nbsp;sensorimotor interaction (T3) with referents&nbsp;for meaning and understanding. But even if grounding is in place,&nbsp;understanding and meaning may require more than just the right behavioral capacities.<\/p>\n\n\n\n<p class=\"has-small-font-size\">That\u2019s where&nbsp;sentience&nbsp;comes in. If meaning is not just about&nbsp;the right behavioral dispositions&nbsp;but about&nbsp;what it feels like to mean something, then solving the&nbsp;Symbol Grounding Problem (SGP)&nbsp;is not enough\u2014we would also need to solve the&nbsp;Hard Problem (HP):<\/p>\n\n\n\n<p class=\"has-small-font-size\">The SGP&nbsp;explains how symbols get their referents via sensorimotor interaction.<\/p>\n\n\n\n<p class=\"has-small-font-size\">The HP&nbsp;asks&nbsp;why and how this process generates subjective experience (sentience)\u2014which may be necessary for true understanding.<\/p>\n\n\n\n<p class=\"has-small-font-size\">You\u2019ve long argued that solving the&nbsp;Easy Problem (EP)&nbsp;(explaining all functional and behavioral capacities) will&nbsp;use up all the causal degrees of freedom\u2014leaving nothing left to explain&nbsp;why cognition feels like something.<\/p>\n\n\n\n<p class=\"has-small-font-size\">So, while Gubelmann\u2019s argument&nbsp;fails even at the grounding level (SGP),&nbsp;the deeper question of&nbsp;whether grounding alone 
suffices for meaning and understanding&nbsp;remains open\u2014because we don\u2019t yet know&nbsp;why or whether cognition must feel like something to mean something. And, as you\u2019ve said,&nbsp;we don\u2019t have the wherewithal to solve that.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background has-small-font-size\"><strong>SH:&nbsp;<\/strong>At least I don&#8217;t. Om tat sat&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>SH:&nbsp;I will be the commentator on Gubelmann&#8217;s talk at MILA May 9 on the symbol grounding problem. Could you please summarize&nbsp;his arguments and evidence? As far as I can see, like most people, he has only read the 1990 paper, so he does not know about my papers since then, which you and I have been &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2025\/03\/04\/gpt-on-gubelmann-on-harnad-on-symbol-grounding\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;GPT on Gubelmann on Harnad on Symbol 
Grounding&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3074,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,105,146,110,111,106,164,184,119,153,147],"tags":[],"class_list":["post-2449","post","type-post","status-publish","format-standard","hentry","category-hard-problem-consciousness","category-categorization","category-chatgpt","category-cognitive-science","category-computation","category-language","category-llms","category-reference","category-sentience","category-symbol-grounding","category-turing-test"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2449","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/users\/3074"}],"replies":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/comments?post=2449"}],"version-history":[{"count":2,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2449\/revisions"}],"predecessor-version":[{"id":2451,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2449\/revisions\/2451"}],"wp:attachment":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/media?parent=2449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/categories?post=2449"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/tags?post=2449"}],"curies":[{"name":"wp","href":"https:\/\/api.
w.org\/{rel}","templated":true}]}}