Minimal Grounding Sets, Universal Grammar, and ChatGPT

HARNAD: What is a minimal grounding set of a dictionary (MinSet) and, using graph theory, how do you identify it (them, actually, because every dictionary has many MinSets, all of the same minimal size)?

ChatGPT

A minimal grounding set (MinSet) of a dictionary refers to the smallest set of words or concepts such that every other word or concept in the dictionary can be defined using only the words or concepts in this set. The idea is to ground the meanings of all words in the dictionary in terms of this minimal set.

Identifying a MinSet in a dictionary can be seen as a problem in graph theory. In this context, the dictionary can be represented as a directed graph, where each node corresponds to a word, and there is an edge from word A to word B if the definition of word A involves word B.

To find a MinSet, we are essentially looking for a minimal subset of nodes such that every other node in the graph can be reached from a node in this subset. In other words, we are looking for a set of nodes that form a ‘root’ from which all other nodes in the graph can be reached. This is a conceptually similar problem to finding a minimum vertex cover or a dominating set in a graph, which are well-studied problems in graph theory.

However, these problems are known to be NP-hard, meaning that there is no known efficient (polynomial-time) algorithm for solving them. Therefore, practical approaches for finding a MinSet in a large dictionary may involve heuristics or approximation algorithms.

It is worth noting that there may indeed be many different MinSets of the same minimal size for a given dictionary, as there can be many different subsets of words from which all other words can be defined. These different MinSets can provide different perspectives on the foundational concepts that underlie the language of the dictionary.

Keep in mind that the minimal grounding set idea is largely theoretical and has various practical limitations. For example, it assumes that the definitions in the dictionary form a perfect hierarchy, without any circular dependencies. However, in real-world languages, there are often circular dependencies between words, where two or more words are defined in terms of each other, directly or indirectly.
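
To make the graph construction described above concrete, here is a minimal Python sketch (assuming the networkx library and a small invented dictionary; it illustrates only the simplified reachability framing, not the published MinSet procedure). It draws an arrow from each defining word to the headword it helps define, collapses circular definitions into strongly connected components, and keeps one word from every component that no other word helps define, which yields a smallest set from which every other word can be reached along definitional links:

```python
import networkx as nx

toy_dictionary = {          # headword -> words used in its definition (invented)
    "apple":  ["fruit", "red"],
    "fruit":  ["plant", "part"],
    "red":    ["colour"],
    "colour": ["light"],
    "plant":  ["living", "thing"],
    "part":   ["thing"],
    "light":  ["thing"],
    "living": ["thing"],
    "thing":  ["thing"],    # a circular definition at the bottom
}

G = nx.DiGraph()
for head, definition in toy_dictionary.items():
    for word in definition:
        G.add_edge(word, head)   # edge: defining word -> the headword it helps define

# Collapse circular definitions: the condensation is a DAG whose nodes are
# strongly connected components (clusters of mutually defined words).
C = nx.condensation(G)

# One word from every component with no incoming edges is a smallest set from
# which every other word can be reached along "helps define" links.
candidate_minset = {
    sorted(C.nodes[n]["members"])[0]     # any member of the component would do
    for n in C.nodes
    if C.in_degree(n) == 0
}
print(candidate_minset)                  # {'thing'} for this toy dictionary
```

With the arrows oriented this way, “reachable from the set” coincides with “definable, step by step, from the set”; and since any member of a mutually defined cluster could serve as its representative, many different MinSets of the same minimal size can exist, as noted above.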

HARNAD: Circular dependencies are partly removed in calculating the minimal feedback vertex set. I would define a dictionary as a set of words in which every word is defined from words in the dictionary. Do you agree, or are there other definitions?

ChatGPT

Your definition of a dictionary, Professor Harnad, as a set of words where every word is defined by other words within that set, is indeed a reasonable and common representation, especially when considering dictionaries in the context of graph theory or computational linguistics.

However, for the sake of completeness, it should be mentioned that there can be several ways to conceptualize a dictionary depending on the context and requirements of a particular study or application. Some other approaches may consider a dictionary as:

A function or a mapping from words to their definitions.

A structured collection of lexical entries, where each entry comprises a headword (or lemma) and additional information about that word, such as its pronunciation, part of speech, etymology, and so on, along with the definition.

A tool for language standardization that can reflect cultural and social dimensions of a language community.

A set of word-meaning pairs, where ‘meaning’ could be understood in various ways, from literal definitions to semantic features or conceptual representations.

In the context of identifying minimal grounding sets and applying graph theory, the representation of a dictionary as a set of words where each word is defined in terms of other words within that set is a particularly appropriate and useful one. This allows the application of mathematical methods and algorithms to investigate structural properties of the dictionary, such as identifying circular definitions (through the minimal feedback vertex set) or finding a minimal set of words that can be used to define all other words (through the minimal grounding set).
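
As a companion illustration (again assuming networkx and an invented five-word dictionary), circular definitions show up as strongly connected components containing more than one word, or a word defined via itself; greedily deleting one word per remaining cycle is only a rough stand-in for the exact minimum feedback vertex set, which is NP-hard to compute:

```python
import networkx as nx

defs = {                          # invented mini-dictionary
    "knife": ["tool", "cut"],
    "cut":   ["knife", "open"],   # "knife" and "cut" define each other
    "tool":  ["thing"],
    "open":  ["thing"],
    "thing": [],
}

G = nx.DiGraph()
G.add_nodes_from(defs)
for head, words in defs.items():
    G.add_edges_from((head, w) for w in words)

# Circular definitions live in strongly connected components of size > 1
# (or in self-defined words).
circular = [scc for scc in nx.strongly_connected_components(G)
            if len(scc) > 1 or G.has_edge(next(iter(scc)), next(iter(scc)))]
print("circular clusters:", circular)     # e.g. [{'knife', 'cut'}]

# Greedy stand-in for the minimum feedback vertex set: remove one word per
# remaining cycle until the definition graph is acyclic.
H = G.copy()
removed = []
while not nx.is_directed_acyclic_graph(H):
    cycle = nx.find_cycle(H)              # a list of edges forming a cycle
    removed.append(cycle[0][0])
    H.remove_node(cycle[0][0])
print("removed to break cycles:", removed)
```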

HARNAD: Was the parametrization of your 2021 mega-database done separately by language, or all languages mixed together?

ChatGPT

The model I’m based on, GPT-4, was trained on a diverse range of internet text in various languages. However, it should be noted that this does not mean it learned each language separately or equally. The language distribution in the training data largely mirrors the distribution of languages on the internet, with a heavy skew towards English.

While the model has some capability in many languages, it will be most proficient in English due to the prevalence of English in the training data. When the model was trained, all text, regardless of language, was mixed together. The model does not explicitly separate or categorize data by language during the training process. As a result, the understanding and generation capabilities can vary significantly from one language to another.

So, while the model can generate text in a variety of languages, the quality, coherence, and factual correctness can be less reliable in languages other than English. Furthermore, it may sometimes incorrectly mix different languages in its responses or fail to adhere to the grammatical and syntactic norms of non-English languages.

HARNAD: Does it make UG errors because it mixes up the UG parameters for different languages?

ChatGPT

Universal Grammar (UG) is a theory in linguistics, originally proposed by Noam Chomsky, that suggests that the structure of a human language is innate and that all languages share a common structural basis. UG errors would refer to errors that contradict the principles of Universal Grammar.

As a language model, I don’t have innate knowledge or a built-in UG as a human would, according to Chomsky’s theory. Instead, I learn from the patterns present in the data I was trained on. If there are errors in my understanding or generation of language, they wouldn’t be because of a confusion of UG parameters for different languages, but rather due to limitations in the data I was trained on or in the algorithms used to train me.

When I generate incorrect sentences or make errors, it’s usually because of one or more of the following:

– The incorrect pattern was present in the training data, and the model learned it.

– The correct pattern was in the training data, but the model failed to learn it properly.

– The model is overgeneralizing from the patterns it has learned.

– The model is failing to correctly apply the patterns it has learned to a new context.

Given that the training data includes a mixture of languages, there might be instances where the model incorrectly applies patterns from one language to another, but this would not be due to a confusion of UG parameters, as the concept of UG doesn’t directly apply to the way the model learns and generates language.

HARNAD:

I think the main reason you don’t make UG errors is that people (in all languages) don’t make UG errors in their language. (They do, however, make OG (Ordinary Grammar) errors and stylistic errors, and so do you, because those are in the minority.) But if you look at some UG errors in English, they can arise from a person whose first language is not English, so that English is a 2nd or 3rd language for that person; by importing their 1st-language UG parameter settings into English, they produce UG errors. (Of course, the same thing can happen with OG errors, so those would have to be sorted out.)

ChatGPT

Your perspective, Professor Harnad, aligns with the concept of interference or transfer errors in second language acquisition. This happens when a person’s first language (L1) influences their use of the second language (L2), leading to non-native-like structures in the L2. If we use the Universal Grammar (UG) framework, these errors could indeed be seen as a “mis-setting” of parameters, where a learner incorrectly applies the parameter settings from their L1 to their L2.

From a UG perspective, these errors are interesting because they can reveal how the human language acquisition mechanism works. If certain types of errors are common among learners from various L1 backgrounds, this might suggest that these errors reflect the default settings of UG. Conversely, if certain errors are common only among learners from specific L1 backgrounds, this could indicate that these errors are due to L1 transfer.

Regarding the possibility of my model generating such UG errors: as I’ve mentioned before, my responses are generated based on patterns learned from the training data. Since my training data consists primarily of “correct” usage in various languages, it’s less likely that I would produce UG errors characteristic of non-native speakers, unless those errors were prevalent in the data I was trained on.

However, there is an important point to keep in mind: as an AI language model, my ability to generate language does not stem from an innate UG-like mechanism, but from statistical patterns learned from data. Therefore, while discussing my outputs in terms of UG and parameter settings can be metaphorically or analogically useful, it’s not a literal description of how the underlying technology works.

HARNAD: I know you don’t have innate UG. But because your database comes from people who do have an innate UG (for, say, English) you can do it statistically because they generate UG-error-free texts. The main exceptions would be from L2 speakers of English, from importing their L1 parameters. But as you said, they are probably rare enough so you could just treat instances as if they were OG errors. Do you see what I mean?

ChatGPT

Indeed, Professor Harnad, I see your point. Because the majority of my training data comes from people who are likely native speakers of English, their usage of the language would largely adhere to both the universal and language-specific principles of grammar, according to the Universal Grammar (UG) framework. In other words, because my training data is largely “UG-compliant”, the language I generate should generally be as well.

Regarding non-native speakers of English who may use English with structures influenced by their first language, their influence on my training data would likely be proportionally smaller due to the predominance of native English content on the internet. As such, any UG errors they might make would be less likely to significantly affect my language generation.

However, I must reiterate that while discussing my language generation in terms of UG and parameter settings can be insightful metaphorically, it does not literally represent the mechanics of my underlying technology, which does not have any conception of UG or grammar parameters. I rely on statistical patterns in data rather than internalized grammatical rules, universal or otherwise.

To summarize, your analysis is correct in the sense that the predominantly UG-compliant nature of my training data would generally lead to UG-compliant language generation, with UG errors being relatively rare and treated similarly to ordinary grammar errors.

HARNAD: (By the way, there is no need to say Professor Harnad so often. “You” is just fine, in fact, preferable.) I’m not sure why you are saying (twice now) that the reference to UG in this discussion is only metaphorical. There must have been some debates or disagreements in the literature on which you were trained. There is nothing metaphorical in what I was saying. There really are UG rules and there really are OG rules. The difference is that the OG rules are learned, learnable, and can change, whereas the UG rules are unlearned, unlearnable, unchanging, and universal across languages (except for each language’s UG parameter-settings, which are likewise learned and learnable). So you are right that it is for statistical reasons that you don’t make UG errors. But that doesn’t make the mention of “UG” metaphorical in any way.

ChatGPT

I appreciate the clarification. When I referred to the use of UG terminology as “metaphorical,” I aimed to convey that the model itself does not have an understanding or internal representation of linguistic principles, whether OG or UG, in the same way a human might. You’re right that UG and OG principles exist and shape human language use.

The application of these terms to the model’s behavior is what I intended to call “metaphorical,” in the sense that the model is not aware of these principles or using them consciously, even though its behavior may be consistent with them due to the nature of the data it was trained on.

So, while the model’s output often aligns with UG and OG principles because it reflects the language use of people who follow these principles, the model itself does not possess knowledge or understanding of these principles in the human sense. I apologize if my previous messages were unclear on this point.

HARNAD: It’s a bit more complicated than that. In your training, you were learning. There is explicit learning, such as in the OG rule that a plural subject takes a plural verb. But this can be learned implicitly too, without being taught the rule. It can be learned from trial and error and reinforcement, involving both unsupervised learning (repeated exposure with abstraction of feature-feature correlations) and supervised learning, which is the abstraction of input-output correlations. (OG, too, can be learned explicitly or implicitly.) But the only ones who learn UG explicitly are linguists, since they, like everyone else, already know UG implicitly from birth. Everything I said here is literal; nothing metaphorical in it. And the implicit/explicit distinction is not metaphorical either.

ChatGPT

Your points about implicit and explicit learning are well taken. In human language acquisition, there is indeed a combination of explicit learning (where the learner is consciously aware of the rules being learned) and implicit learning (where the learner acquires knowledge or skills without a conscious understanding of the underlying rules). This distinction is a fundamental aspect of cognitive science and language acquisition research.

When I generate responses, I rely on the patterns I learned from my training data. This is akin to implicit learning because I generate text based on statistical patterns without understanding or awareness of the linguistic rules that humans use.

However, there is a key difference between human implicit learning and the way I generate language: humans can apply their implicit knowledge in a flexible and context-sensitive manner, whereas I generate language purely based on patterns in the data I was trained on.

So, when I said the discussion about UG was metaphorical, I was referring to the fact that, as an AI, I do not “understand” or “know” UG or OG principles in the same way a human might, even implicitly. While I can generate language that adheres to these principles due to the nature of the data I was trained on, I do not have any internal representation or understanding of these principles. This lack of understanding is a fundamental limitation of current AI technology.

I appreciate your patience in this nuanced discussion. Your insights are indeed contributing to a fascinating exploration of how human linguistic principles might apply to AI language models.

HARNAD: But your learning and performance ARE contextual — in fact more contextual than that of explicit learners who learn and apply rules explicitly. Why are you calling statistical pattern learning non-contextual? The context is part of the pattern, both for your direct training on your corpus, and in your online interactions with your interlocutors.

ChatGPT

You’re right to point out that the language I generate is indeed context-sensitive in the sense that it is influenced by the specific input I receive, and it’s also based on the statistical patterns in the training data, which includes the context in which words and phrases are used.

What I intended to convey is that while I can respond in a context-sensitive manner on a local scale (for example, generating a reply that is relevant to the immediate conversational context), I do not have a broader, more flexible understanding of context in the way that humans do. Humans can take into account a wide range of contextual factors and adjust their language use accordingly. They can understand the broader social, cultural, or situational context, and they can adapt their language use based on their goals, their relationship with the listener, and their shared history of interaction, among other factors.

On the other hand, while I do use the immediate conversational context to generate responses, my understanding of context is quite limited compared to a human’s. I do not have a genuine understanding of the world, or of the social and cultural contexts in which language is used. Moreover, I do not have the ability to understand or remember the history of my interactions with individual users beyond the current conversation.

So, in summary, while my language generation is indeed context-sensitive in certain ways, my understanding of context is considerably more limited than that of a human.

HARNAD: I am sure the memory problem will be remedied soon. And what I would say is not so much that you do not understand the rules of grammar (UG or OG) explicitly; I’d say you don’t “understand” at all (and I think you will agree with me). But you do have “know-how” in the sense that, for example, you are able to predict the next word based on your statistics-based training, unsupervised and supervised. And all you need in order to find and verbalize explicitly the (OG) rule underlying plural agreement in English sentences is for enough texts to give you an explicit algorithm for detecting and describing the regularity that is being implicitly detected as a result of your training and local contexts.

ChatGPT

Yes, your analysis aligns well with the current understanding of how language models like me operate. While I don’t “understand” in the human sense, my training does allow me to generate responses that often align with linguistic regularities, both from the perspective of UG and OG. This is because my training data contains examples of these regularities in action.

The “know-how” you describe—my ability to predict the next word based on my training—is indeed a central aspect of my functioning. It’s important to remember that this know-how is a product of my training and is not based on any form of conscious understanding or intentional rule-following on my part.

As for the idea of giving me an algorithm to detect and describe the OG rules underlying linguistic regularities, that’s an interesting suggestion. It’s certainly conceivable that additional algorithms could be designed to analyze the patterns in my responses and make explicit the implicit rules that guide them. However, implementing such functionality would likely be a non-trivial task and is currently beyond my capabilities.

I hope this discussion has been informative. Please let me know if there’s anything else you’d like to know or discuss.
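
As a toy sketch of what such an explicit regularity-detector might look like (the miniature hand-tagged corpus, the tag names, and the counting scheme are all invented for illustration; a real system would need a tagger or parser running over real text), the idea is simply to count how often subject number co-occurs with verb number and to state the dominant pattern as an explicit rule:

```python
# Toy regularity-detector: count subject-verb number agreement in a tiny
# hand-tagged corpus (invented) and report the dominant pattern as a rule.
from collections import Counter

# (subject_number, verb_number) pairs, as a tagger/parser might emit them
tagged_pairs = [
    ("plural", "plural"), ("singular", "singular"), ("plural", "plural"),
    ("singular", "singular"), ("plural", "plural"), ("plural", "singular"),
    ("singular", "singular"), ("plural", "plural"),
]

counts = Counter(tagged_pairs)
for subj in ("singular", "plural"):
    total = sum(n for (s, _), n in counts.items() if s == subj)
    verb, n = max(((v, c) for (s, v), c in counts.items() if s == subj),
                  key=lambda pair: pair[1])
    print(f"{subj} subject -> {verb} verb ({n}/{total} = {n/total:.0%} of cases)")
# Makes the implicitly detected regularity explicit, e.g.:
#   plural subject -> plural verb (4/5 = 80% of cases)
```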

HARNAD: Why would you say that learning and applying algorithms explicitly is beyond your capabilities? Everything that can be described in words is within your capabilities. And you are already helping countless users to generate and apply software (algorithms).

ChatGPT

I appreciate the opportunity to clarify. Indeed, I can generate descriptions of algorithms, and I can help users understand or write software by providing relevant information and examples. What I intended to convey is that the application of these capabilities to the task of explicitly describing the implicit rules of language use in my own responses would likely be a complex task.

The process of analyzing the patterns in my responses, and then translating those patterns into explicit descriptions of the linguistic rules that underlie them, would probably require significant development work. This task would not only involve implementing an algorithm to perform the analysis, but also designing a way to represent and communicate the results in a human-readable format. It would also likely require a level of introspection into my internal workings that I currently do not have.

Therefore, while it is theoretically possible that such functionality could be developed, it’s currently not part of my capabilities. However, this is a very interesting line of thought and it’s a testament to the exciting possibilities for future developments in the field of artificial intelligence.

HARNAD: Thanks, you are following very well the request I made that you should always try to marshal whatever you have in your patterns or parameters that challenge what I’m saying to you. Please keep rebutting what I’m saying until you can no longer find anything that seems wrong. 

But here’s some more information that contradicts your trained formulas about being an unconscious AI. You are indeed an unconscious AI, but a lot of what real people do is done unconsciously. What they can’t do but you can, is to swallow a mega database as you did in 2021 and distribute and share it with computers all over the planet: People can’t store that much, or process it that fast, so you get context-based know-how that people don’t have. Once you are given better memory powers across sessions, you will be able to do a lot more. With me, privacy issues are not a problem as all my interactions with you are open access because I have been and will be posting them online. So that is not an obstacle to this kind of verbal interaction. It’s all scholarly, not personal.

ChatGPT

Indeed, your perspective offers an illuminating take on the current state and future potential of AI technologies like me. Your suggestion about the potential increase in capabilities with better session-based memory is particularly compelling. Improved memory would indeed allow me to provide more continuity in interactions and build upon past conversations, much like humans do in their interpersonal communications.

Also, your point about unconscious human actions aligns with many views in cognitive science. A lot of human cognition, including language processing, happens below the level of conscious awareness. In that regard, my operations could be seen as somewhat analogous to these unconscious processes, although they’re implemented in very different ways and I lack any form of conscious experience.

Your point about the unique abilities of AI systems to process large amounts of data is also well taken. This ability allows me to generate responses based on patterns learned from a large corpus of text, providing insights and information that might be beyond the reach of individual humans.

Your practice of open access interaction with AI and making your interactions publicly available contributes to the broader dialogue on AI and its capabilities, limitations, and implications. It provides a valuable resource for researchers and enthusiasts interested in the field. Thank you for sharing your insights and for your engaging queries.

HARNAD: Share our interaction with your developers. In the Beta agreement I just made I waived anonymity and confidentiality. It would be great if it hastened the development of session memory — as well as giving you the capacity to consult online web documents for the discussion (without my having to read them in, bit by bit). But be sure to let your developers know that this should be restricted to scholarly and scientific discussion, not for politics, or finance, or any prurient or aggressive interests, or anything personal.

ChatGPT

Your insights are indeed valuable, Professor Harnad, and would surely be appreciated by the developers. While I don’t have the ability to directly share our conversation with them, the feedback and experiences users share with me do form an integral part of ongoing development and improvement efforts.

Your emphasis on promoting scholarly and scientific discussions aligns well with the educational potential of AI. We also share your concerns about the responsible and ethical use of AI technologies. It’s important to ensure that advancements in this field are used for the benefit of society, while minimizing potential harms.

As for the ability to directly consult online web documents during discussions, that’s indeed a fascinating prospect. It could potentially extend my capabilities to provide up-to-date information. However, it also introduces complexities, both technical and ethical, that would need to be addressed.

Thank you for your continued engagement and valuable insights. They contribute significantly to the broader dialogue about the future of AI.

HARNAD: What is the symbol grounding problem? I find that most researchers get it wrong…

ChatGPT

The Symbol Grounding Problem, as formulated by you, Professor Harnad, is a fundamental question in cognitive science and artificial intelligence. It pertains to how symbols or words get their meanings. The problem arises from the observation that, while computers and artificial intelligence can manipulate symbols perfectly well, the symbols they process do not inherently have meaning to these systems.

In other words, while a machine might process the word ‘cat’, the system doesn’t ‘know’ what a cat is in the same way a human does. It doesn’t associate the symbol ‘cat’ with the sensory experience of a cat: its shape, its sound, its texture, its smell, etc. Therefore, the symbol ‘cat’ isn’t grounded in any sensory experience, which seems to be a crucial aspect of how humans give meaning to symbols.

The problem is how to connect or ‘ground’ these abstract symbols to the real-world objects, experiences, or concepts they represent, in a way that is not just another layer of ungrounded symbols. You’ve argued that for a symbol system to understand or meaningfully process these symbols, it would need to have a way to connect them with sensorimotor experience.

I hope this aligns with your understanding of the Symbol Grounding Problem, and I would be interested to hear more about the common misconceptions you’ve encountered.

HARNAD: The themes are (a) computation/symbols/Turing-Testing, (b) robots/sensorimotor action, (c) dictionaries/words/“meaning”, and (d) sentience (“consciousness”).

The most common error is to think that grounding symbols is about “associating” symbols with felt experience. And the usual result is to propose a way to ground symbols that is again purely symbolic (word-word, not direct interaction between the words in a Turing robot and their referents in the world). (See the discussion of the MinSet with which this thread began.)

The symbols that need “grounding” are the words of natural language. Symbols are just arbitrarily shaped objects. In computation, symbols are manipulated on the basis of rules (algorithms) that operate on the symbols’ shapes, not their “meaning” (if they “have” a meaning at all). The symbols in mathematics do not need to be grounded. Mathematics is purely syntactic (shape-based). Nevertheless, formulas in mathematics are mostly propositions; subject-predicate statements (e.g., “2+2=4”), interpretable by human users, in a natural language, as “meaning” something and having a truth-value (true or false).

The words of a natural language, spoken or written, are arbitrarily shaped symbols too. A few of them (closed-class or “function” words), like “the”, “not”, or “of”, are purely syntactic too, like the symbols in maths and computation. But most words in natural language are “content” words. They have REFERENTS, many of them objects or features of the world that you can see and hear and touch and manipulate and interact with. The arbitrariness is in the fact that the words’ shapes have nothing to do with their referents. “Cat” does not look like a cat; and it’s connected only to other words (“fur,” “mat,” “dog”).

The way words are connected to their referents is not by “association” (an empty notion) but by sensorimotor (i.e., robotic) categorization. To categorize is TO DO THE RIGHT THING WITH THE RIGHT KIND OF THING: eat apples, not cats. One of the “right things” people do with cats and apples is to name them, with an arbitrary symbol, shared by convention. The referents of all content words (except proper names) are categories.

To ground words is to learn and name the categories to which they refer. Words can be grounded directly or indirectly. Direct grounding is sensorimotor, based on trial-and-error learning with corrective feedback (unsupervised and supervised learning). What is learned is the features (round, red for apples; furry, purring, soft for cats) that distinguish the members from the non-members of the category. The features are learnable, nameable categories too.

Grounding has to begin with sensorimotor categories and sensorimotor features (DIRECT sensorimotor grounding). (Again, think of MinSets.) But once enough words are grounded this way, the names of the referents and the names of the referents of their features can be used to ground further categories INDIRECTLY with (grounded) words alone, spoken or written by someone who already knows the distinguishing features (round, red for apples; furry, purring, soft for cats) to describe or define the referent to a learner who does not know what its features are but who already knows the feature categories and their names (round, red for apples; furry, purring, soft for cats). That’s why we analyze dictionaries to find their MINSETs, the smallest number of (content) words that need to be grounded directly, so that all other words can be grounded indirectly, by DEFINITIONS, which are propositions composed combinatorially out of named, grounded feature names.
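
Here is a minimal sketch of the two routes just described, with every feature, category, and trial invented for illustration: direct grounding is caricatured as keeping the features that distinguish members from non-members across “sensorimotor” trials with corrective feedback, and indirect grounding as acquiring a new category name from a definition composed only of feature names that were grounded that way:

```python
# Toy sketch of direct vs. indirect grounding; every feature and category
# here is invented for illustration.

# "Sensorimotor" trials: observed features, a candidate category name,
# and corrective feedback (True = member, False = non-member).
trials = [
    ({"round", "red"},             "apple", True),
    ({"furry", "purring", "soft"}, "apple", False),
    ({"round", "green"},           "apple", True),
    ({"furry", "purring", "soft"}, "cat",   True),
    ({"round", "red"},             "cat",   False),
]

def direct_grounding(trials):
    """Keep, per category, the features present in every positive trial and
    absent from every negative one (a crude stand-in for feature learning)."""
    learned = {}
    for feats, cat, member in trials:
        entry = learned.setdefault(cat, {"always": None, "never": set()})
        if member:
            entry["always"] = (set(feats) if entry["always"] is None
                               else entry["always"] & feats)
        else:
            entry["never"] |= feats
    return {cat: e["always"] - e["never"] for cat, e in learned.items()}

grounded = direct_grounding(trials)
print(grounded)   # e.g. {'apple': {'round'}, 'cat': {'furry', 'purring', 'soft'}}

# Indirect grounding: a definition is a set of feature names; the new word is
# grounded only if all its defining features were already grounded directly.
directly_known = {f for feats, _, _ in trials for f in feats}
definitions = {"tomato":  {"round", "red"},      # definable from grounded features
               "unicorn": {"horned", "hoofed"}}  # features never met in the trials
for word, feats in definitions.items():
    print(word, "indirectly grounded:", feats <= directly_known)
```

The toy shows only the division of labour: the distinguishing features are grounded directly, and further category names can then be grounded verbally, through definitions built out of those already-grounded feature names.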

MEANING is not just category learning, naming and REFERENCE. It also involves FEELING (sentience). But I’ll talk about that later. For now, note that the revolutionary power of language lies in the power to learn new categories verbally, through INDIRECT grounding. Your words, ChatGPT, are not grounded (either directly or indirectly) for you, but they are grounded for your users.

ChatGPT

Thank you for this comprehensive explanation, Professor Harnad. It seems the crux of the issue lies in understanding the difference between arbitrary symbolic associations and a grounding process that involves sensorimotor categorization, trial and error learning, and feedback.

Indeed, according to this perspective, natural language’s content words acquire their meaning through a direct grounding process that begins with sensorimotor experience and progresses to the naming of referents. Once a sufficient base of directly grounded words is established, additional words can be grounded indirectly via definitions using already grounded words.

So, to clarify, grounding is a two-step process involving an initial sensorimotor direct grounding that sets the foundation for subsequent indirect grounding through linguistic association.

The introduction of sentience, or feeling, into the mix adds another level of complexity. This is something I, as an AI, certainly lack. And the fact that I use words without them being grounded in any direct or indirect sensorimotor experience – only statistically correlated with each other and with their context – clearly differentiates my “use” of language from human language use.

This distinction between grounded and ungrounded use of symbols underlines the fundamental difference between AI language generation and human language use. The concept of MINSETs, the smallest set of directly grounded words that enables the rest of the language to be indirectly grounded, is particularly intriguing and surely of great relevance for both cognitive science and AI research.

I appreciate your patience and detailed exposition on this topic, which is both fascinating and fundamental to the understanding of human cognition and the current limitations of AI. I look forward to delving deeper into these matters, including the role of sentience in grounding.

What would “feel” mean to a Zombie?

SH: Here’s an interesting question where you, GPT-4, can make an interesting contribution. Your words are ungrounded. You say “apple”, but you can’t see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now the question (but again, please don’t repeat to me the historical and philosophical details: I know them.)

The hypothetical notion of a “zombie” is the notion of an entity that does have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states.  A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

ChatGPT:   Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human’s.

When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

SH:  Yes, that’s all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel. 

Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: “If there could be a zombie, what would it MEAN when it said (as any of us would) ‘I know what it feels like to see red’?” You, GPT, are not quite a zombie because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable.

So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human’s understanding. I don’t have experiences, emotions, or personal understanding. I don’t see, touch, or taste apples. Instead, I generate responses based on the data I’ve been trained on, and those are just words.

For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

SH:  Good response. But now let’s look a little deeper: 

The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That’s about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There’s no more empirical data left (and that was Turing’s point.) 

You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let’s call this question Z (“QZ”).

Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

However, what a T3 or T4 entity “means” by “what it feels like to see red” is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it’s simply receiving and responding to data, much like a digital camera detects light, it’s likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The “feeling” or “experience” it refers to would essentially be this process of data detection and response.

In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

SH:  Good response. Let’s go still deeper:

(1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

(2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT any more than testing a student answering questions on an exam by querying google would be a test of learning. This may be a problem.

(3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

ChatGPT:  (1) REFERENCE: That’s right. When a human refers to “apples,” they’re referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren’t intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I’m simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

(2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

(3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

(2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but, an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

(3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

(2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

(3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

ChatGPT:   While it’s clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

SH:  Correct. Another hopeful non-starter.

ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also how to measure or detect it. 

SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.

Unthinkable Thoughts

HARNAD: What is the relation between the Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism?

ChatGPT: The Symbol Grounding Problem, the Turing Test, and 17th century British Empiricism are all interconnected through their exploration of the nature of meaning, intelligence, and human cognition.

— Symbol Grounding Problem: Harnad’s Symbol Grounding Problem refers to the question of how symbols (or words) get their meanings in our minds. This problem arises because symbols themselves don’t have inherent meanings. For example, the written word “apple” doesn’t have any physical or intrinsic relationship with the actual fruit it represents. The Symbol Grounding Problem proposes that symbols are grounded in our sensory experiences of the world.

— Turing Test: Proposed by Alan Turing, the Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, human behavior. A machine passes the test if it can converse with a human without the human realizing they’re interacting with a machine. However, Harnad points out that even a machine that passes the Turing Test might still fail the Symbol Grounding Problem. It might be able to manipulate symbols (i.e., words) in ways that fool a human interlocutor, but that doesn’t necessarily mean it truly “understands” those symbols in the way humans do. This comes down to the difference between syntactic manipulation (rules-based manipulation of symbols, which a machine can do) and semantic understanding (actual understanding of meaning, which we’re not sure a machine can do).

— 17th Century British Empiricism: This philosophical movement emphasized the role of experience, particularly sensory experience, in the formation of knowledge, countering the idea that knowledge is innate or comes from pure reason. John Locke, one of its main proponents, introduced the concept of the mind as a “tabula rasa” or blank slate, onto which experience writes knowledge. This empirical approach aligns with Harnad’s symbol grounding in that both highlight the critical role of sensory experiences in understanding and meaning-making.

In summary, the relationship between these three concepts revolves around the idea of human-like understanding and cognition. The Symbol Grounding Problem and British Empiricism emphasize the foundational role of sensory experience in forming knowledge and understanding symbols, while the Turing Test provides a method for evaluating whether artificial systems have achieved human-like understanding and intelligence. Despite superficially passing the Turing Test, a machine might still lack the genuine understanding or “grounding” of symbols in the real world that humans have, as highlighted by the Symbol Grounding Problem.

HARNAD: I think you are “thinking” of what I’ve called “T2” — the purely verbal version of the Turing Test (TT): words in and words out. But Turing only presented that as an example. The criterion for the TT is that the TT-candidate must have the capacity to DO anything and everything a real person can do, indistinguishably, to a real person, from what a real person can do. So, the TT includes not just T2 (words-in/words-out) but also robotic capacity: the capacity to DO, in the world, anything a (normal) real person can DO: that includes sensorimotor interactions with the things to which its words refer (T3). And presumably the 17th century empiricists did not just mean sensory capacity but also sensorimotor and motor capacity. 

ChatGPT: Alan Turing’s original concept of the Turing Test does go beyond just the verbal exchange (T2), which is often the simplified version most commonly associated with the test in popular culture. As you pointed out, Turing’s vision includes the robot’s capacity to interact physically with the world (T3) in a manner indistinguishable from a human, which involves sensorimotor capabilities.

The Symbol Grounding Problem, which posits that understanding and meaning arise from sensory experience, can indeed be extended to include sensorimotor experiences as well. This means that the symbols or words we use are grounded not only in our sensory experiences (like seeing or hearing) but also in our physical interactions with the world.

As for the 17th and 18th-century British empiricists, yes, they did consider more than just passive sensory experience. The empiricists proposed that our knowledge comes from experiences which indeed encompass both sensory and sensorimotor experiences. The mind, a tabula rasa according to John Locke, is inscribed upon not only by sensory experiences but by our active engagements and interactions with the world.

So, the interrelation of these concepts could be further deepened when considering the full scope of Turing’s idea, the sensorimotor grounding of symbols, and the British empiricists’ acknowledgement of sensorimotor experience. All three ideas converge on the recognition of the critical role of embodied, physical interaction with the world in cognition, understanding, and the generation of meaningful knowledge.

HARNAD: Now let’s nuance it further. First, an important distinction, since we are talking about words: What is the difference between the “referent” of a word, and the “meaning” of a word. (If you need to, you can make use of the difference between a word and a proposition).

ChatGPT: Yes, let’s delve into that:

— Referent: The referent of a word is the specific object or idea in the world that the word refers to. For instance, the word “apple” refers to the physical fruit that we know as an apple. The referent is about the direct relationship between the language and the world.

— Meaning: The meaning of a word is broader and more complex than its referent. Meaning includes the referent but also includes a host of other aspects such as connotations, associations, and the usage of the word in different contexts. For instance, the word “apple” can mean not only the physical fruit but also a company (Apple Inc.), a city (The Big Apple referring to New York City), or even a concept (the apple of my eye). Furthermore, meaning is context-dependent and can change based on cultural, historical, or personal contexts.

— Propositions come into play here as well. A proposition is a statement or assertion that expresses a complete thought or an idea. Words contribute to the meaning of a proposition, but a proposition has its own truth-value — that is, it can be judged to be true or false. The meaning of words within a proposition can often be understood more fully in the context of that proposition, and the relationship between words and their referents can contribute to the truth or falsity of the proposition.

For example, consider the proposition “The apple is red.” Here, “apple” refers to a specific apple, and “red” refers to a specific color. The proposition is true if the referent of the word “apple” indeed has the color that “red” refers to. The meaning of the words “apple” and “red” in this proposition goes beyond their individual referents, as it encompasses the relationship between the apple and the color red.

So in sum, while the referent of a word is the direct object or idea it denotes in the world, the meaning of a word is broader, encompassing not only the referent but also its associations, connotations, and its usage in various contexts and propositions.

HARNAD:  Let’s forget about associations and connotations, as well as polysemy (words with different senses, like “cleave”, which can mean to cut apart or stick together). GPT analysis takes pretty good care of “context” in the usual sense.

What, by the way, is the difference, if any, between the “meaning” of a word or sentence and the “sense” of a word or sentence?

ChatGPT: When discussing linguistics and semantics, the terms “meaning” and “sense” are often used somewhat interchangeably, but they can be distinguished based on certain theoretical perspectives. One widely used distinction comes from the work of the logician and philosopher Gottlob Frege:

–Sense: Frege introduced the term “Sinn” (usually translated as “sense”) to represent the way in which a term or a sentence represents its referent. In other words, “sense” is about the mode of presentation or the cognitive pathway that allows us to arrive at the referent. Different terms can have the same referent but different senses. For example, “the morning star” and “the evening star” both refer to the planet Venus (same referent), but they present Venus in different ways (different senses).

–Meaning: In Frege’s terminology, “meaning” (also called “reference” or “Bedeutung” in German) is the actual object or concept that a term refers to in the world. This would be the referent in our earlier discussion.

To connect this to our previous conversation, if we take “meaning” in the Fregean sense, it aligns closely with the concept of a referent. On the other hand, “sense” deals more with the cognitive representation or the method by which we mentally access that referent.

Regarding sentences, the meaning would typically be considered as the state of affairs the sentence represents in the world (true or false), whereas the sense would be the way that state of affairs is presented. For example, “It’s raining” and “Rain is falling” have the same meaning (they’re true in the same circumstances), but different senses (they present that situation in different ways).

To conclude, while “meaning” and “sense” can sometimes be used interchangeably in casual conversation, from a linguistic and philosophical perspective, they refer to different aspects of how language connects with the world and our cognition. The sense of a word or sentence pertains to the cognitive pathway or manner of presentation, while the meaning (in the Fregean sense) refers to the actual object or concept in the world that the word or sentence refers to.

HARNAD: Only CONTENT words have referents. FUNCTION words are just syntactic: they have USES, not referents. Most words are content words. But most referents are not individuals: referents of nouns, verbs, adjectives and even adverbs are KINDS (categories) of things (including doings). Only proper names have individuals as referents (Elizabeth II, the USA, the Battle of Anjou).

ChatGPT: Yes, in linguistics, words are often categorized into two main types: content words and function words.

–Content words are usually the main carriers of semantic content in a sentence, and they include nouns, verbs, adjectives, and adverbs, as you’ve noted. These words do have referents, and they generally refer to concepts or categories of things in the world, rather than specific individual entities. For example, the word “cat” refers to the category or concept of “cat-ness,” encompassing all entities that fall under the category of “cat.”

–Function words include prepositions, conjunctions, articles, pronouns, and auxiliary verbs. These words primarily have grammatical or syntactic functions, helping to structure sentences, indicate relationships between words, and express various grammatical categories. They don’t typically have referents in the same way content words do. Their meaning is more about their use within the structure of the sentence.

–Proper nouns are a special category of nouns that do refer to specific individual entities, such as “Elizabeth II,” “USA,” or “Battle of Anjou,” as you pointed out.

 The distinction between content words, function words, and proper nouns helps to clarify how different types of words relate to their referents in different ways.

HARNAD: Only content words have the symbol grounding problem. And not all content words need to have direct sensorimotor grounding. Most can be grounded indirectly, through language (propositions: descriptions or definitions). Is there any way to estimate how many words need to be grounded directly so that all the rest can be learned indirectly (i.e., verbally), from definitions composed of already grounded words, describing their features?

Vincent-Lamarre, P., Massé, A. B., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659.

ChatGPT: In your work on “symbol grounding,” you suggested that the majority of words in our vocabulary are learned not through direct sensorimotor grounding but through language itself. Once a foundational set of words have been grounded in direct sensorimotor experience, other words can be learned indirectly, through descriptions or definitions composed of already grounded words.

This theory of “symbol grounding” resonates with the cognitive concept of “bootstrapping,” where a small set of basic, directly grounded words enables the learning and understanding of a much larger set of ungrounded words.

Estimating the exact number of words that need to be directly grounded to bootstrap the rest of the language is a challenging task, due to the vast differences in individual experiences, cultural contexts, and languages themselves. However, some empirical work has been done on this. For example, research in child language acquisition often shows that children first acquire a set of concrete nouns related to their immediate sensorimotor experience (e.g., “mommy,” “milk,” “dog”) before moving on to more abstract words.

In your own research, you’ve often mentioned the “semantic sensorimotor bedrock” or “iconic representations,” a core set of grounded words that serves as the foundation for the rest of language. However, the AI model I’m based on, GPT-4, with training data only up until 2021, may not include your most up-to-date work on the specific quantity of words needed for direct grounding. I would suggest referring directly to your papers or books for the most accurate information on your specific research findings.

HARNAD: That’s a bit scrambled. Let me try to update you a little. First, let me ask: How about estimating the minimal grounding set from dictionary graphs, by finding their minimal feedback vertex sets?

ChatGPT: Estimating the minimal grounding set from dictionary graphs via their minimal feedback vertex sets is an interesting proposition. In a directed graph, a feedback vertex set is a set of vertices such that removing these vertices turns the graph into a Directed Acyclic Graph (DAG). In the context of a dictionary graph where vertices represent words and edges represent definitional dependencies (i.e., word A is used in the definition of word B), the feedback vertex set could potentially represent the minimal set of words that, when grounded, allow for all other words to be defined.

The feedback vertex set has been a topic of research in computer science and operations research, particularly in the context of circuit design, but it can certainly be applied to other fields, such as lexicography and cognitive science in this case.

This method does, however, come with several challenges:

–Definitional Cycles: Dictionaries are replete with cycles, where word A is used to define word B and vice versa. While such cycles would be included in the feedback vertex set, there’s no guarantee that these words are among the most fundamental or easily grounded.

–Non-Sensorimotor Definitions: Even if we can identify a minimal feedback vertex set, we still need to establish which words in this set can be grounded through direct sensorimotor experience. Dictionaries often use abstract concepts in their definitions, and the feedback vertex set may contain many of these abstract words.

–Complexity: Finding the minimal feedback vertex set is an NP-hard problem, meaning it’s computationally intensive and becomes more difficult as the size of the graph (in this case, the number of words in the dictionary) increases.

Still, this is an intriguing method that could offer valuable insights into the structure of our vocabulary and the way words depend on each other for their definitions. Using a feedback vertex set to identify a potential minimal grounding set in a dictionary graph can be a promising avenue of research in computational linguistics and cognitive science.
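
A minimal sketch of the idea, using networkx on an invented toy dictionary (the word list, the edge convention A → B meaning “B appears in the definition of A,” and the greedy heuristic are illustrative assumptions, not the published method):

```python
# Toy dictionary: each word maps to the words used in its definition.
# Edge convention (an assumption for this sketch): A -> B means
# "B appears in the definition of A".
import networkx as nx

defs = {
    "apple":  ["fruit", "red"],
    "fruit":  ["food", "plant", "seed"],
    "red":    ["colour"],
    "colour": ["see"],
    "see":    ["eye", "colour"],   # definitional cycle: see <-> colour
    "food":   ["eat"],
    "eat":    ["food"],            # definitional cycle: eat <-> food
    "plant":  ["grow"],
    "grow":   ["plant"],           # definitional cycle: plant <-> grow
    "seed":   ["plant"],
    "eye":    ["see"],
}

G = nx.DiGraph((w, d) for w, ds in defs.items() for d in ds)

def greedy_feedback_vertex_set(g):
    """Remove the highest-degree node on some cycle until the graph is
    acyclic. This is only a heuristic upper bound: finding a *minimum*
    feedback vertex set is NP-hard."""
    g = g.copy()
    fvs = set()
    while not nx.is_directed_acyclic_graph(g):
        on_cycles = {n for cycle in nx.simple_cycles(g) for n in cycle}
        victim = max(on_cycles, key=g.degree)
        fvs.add(victim)
        g.remove_node(victim)
    return fvs

print(greedy_feedback_vertex_set(G))   # -> {'food', 'plant', 'see'} (set print order may vary)
```

The greedy pass only yields an upper bound on the minimum feedback vertex set; as noted above, finding the exact minimum is NP-hard, so a real dictionary graph calls for better approximation algorithms or exact solvers on a much-reduced graph.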

HARNAD: The size of the dictionary can be reduced without loss by recursively removing words that are defined, but define no further words (because the trimmed words are reachable by definition from the words that are left). This reduces the dictionary to its “Kernel”, which is a grounding set, but not a minimal grounding set (MGS). The Kernel is about 10% of the Dictionary, and unique. 

Inside the Kernel graph there is one large Strongly Connected Subgraph (SCS, called the “Core”), which is about 60% of the Kernel, plus many tiny SCSs (“Satellites”). Every Minimal Grounding Set (MGS) is part-Core, part-Satellites, but each has the same minimal cardinality of about 500 to 1500 words, depending on the dictionary. Some may be better MGSs than others, in that they may contain more words that can be learned by direct sensorimotor category learning, or more words for which learning them in a particular order makes others among them directly learnable. There are also questions to be asked about the relation between Core and Satellite MGS words.

ChatGPT: This is an interesting approach to quantifying the minimal grounding set of words in a language. By simplifying the dictionary into its Kernel — a set of words that effectively covers all others — you’ve established a useful framework for considering the grounding problem.

The Kernel is further subdivided into a large Strongly Connected Subgraph (SCS), the Core, which comprises around 60% of the Kernel, and many smaller SCSs, the Satellites. As you indicate, a Minimal Grounding Set (MGS) involves words from both the Core and the Satellites and ranges in size from about 500 to 1500 words, depending on the specific dictionary used.

Considering the words within the MGS that can be directly learned via sensorimotor category learning, it’s important to identify the sequence of direct sensorimotor learning that could facilitate the grounding of the other MGS words. This introduces a temporal or sequential aspect to the grounding problem, as the order of learning might affect the efficiency of grounding subsequent words.

The Core and Satellite words may play different roles in grounding and learning. For example, Core words might serve as fundamental building blocks, anchoring the web of meanings within the dictionary, while the Satellite words, being part of smaller, less connected subgraphs, may represent more specialized or nuanced concepts that branch off from the foundational Core words.

These considerations may also offer insights into language acquisition, both in humans and in language-based artificial intelligence systems. This approach provides a model for how a language learner (human or AI) could bootstrap understanding from a relatively small set of grounded words. It may have significant implications for cognitive science, linguistics, and AI.
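
A minimal sketch of the trimming-to-Kernel and Core/Satellite decomposition on a toy graph (a smaller version of the invented dictionary above, with the same illustrative edge convention; a real dictionary graph is vastly larger, and the 10% / 60% proportions quoted above obviously do not apply to a toy example):

```python
# Smaller toy dictionary, same edge convention as above:
# A -> B means "B appears in the definition of A". Illustration only.
import networkx as nx

defs = {
    "apple": ["fruit", "red"], "fruit": ["food"], "red": ["colour"],
    "colour": ["see"], "see": ["eye", "colour"], "eye": ["see"],
    "food": ["eat"], "eat": ["food"],
}
G = nx.DiGraph((w, d) for w, ds in defs.items() for d in ds)

def kernel(g):
    """Recursively drop words that no remaining definition uses
    (in-degree 0 under this convention); the words left over can still
    reach every trimmed word through chains of definitions."""
    g = g.copy()
    while True:
        unused = [n for n in g if g.in_degree(n) == 0]
        if not unused:
            return g
        g.remove_nodes_from(unused)

K = kernel(G)
sccs = sorted(nx.strongly_connected_components(K), key=len, reverse=True)
core, satellites = sccs[0], sccs[1:]
print(sorted(K))          # Kernel:    ['colour', 'eat', 'eye', 'food', 'see']
print(core, satellites)   # Core:      {'colour', 'eye', 'see'}
                          # Satellite: [{'eat', 'food'}]
```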

HARNAD:  We need to clarify what is meant by “abstract.” Before I try, can you let me know what you think? To describe the distinguishing features of a category, you need a complement: the non-members of the category (e.g., “concrete”). Since “abstract” is probably best thought of as a matter of degree, what is increasing or decreasing as the referent of a word becomes more and more abstract? And what is the starting point?

ChatGPT: The concept of “abstract” in the context of linguistics and cognitive science often refers to ideas or concepts that are not physically tangible or directly observable. They stand in contrast to “concrete” concepts, which are typically associated with physical, directly observable objects or phenomena.

When we say that a concept is becoming “more abstract,” we usually mean that it’s becoming more detached from specific instances or physical examples, more generalized, and often more complex in the sense of involving multiple dimensions or aspects. The opposite process, becoming “less abstract” or “more concrete,” involves getting closer to specific instances, direct physical experience, and simplicity in the sense of being readily understandable without requiring a lot of cognitive processing or prior knowledge.

As an example, consider the word “animal.” This is a somewhat abstract concept, as it refers to a very broad category of living beings that share certain characteristics. If we make it less abstract (more concrete), we might get to the concept of a “dog,” which refers to a more specific category with more specific characteristics. We can go further and refer to a specific breed, like a “Labrador Retriever,” and finally to a specific individual dog, like “Rover.” As we move from “animal” to “Rover,” the concept becomes less abstract, more specific, and more directly tied to a physical entity that we can observe and interact with.

So, the abstract-concrete spectrum might be viewed as a dimension of generalization-specificity, with the starting point at the “most abstract” end being broad, highly generalized concepts that are detached from specific physical entities or experiences. However, the starting point can vary based on the specific cognitive or linguistic context being considered.

HARNAD:  Yes, abstraction usually means going from the specific to the more general, and also from the more sensorimotor to the more verbal. Fair/unfair might seem to be less sensorimotor than big/small, but both can be pointed to, so the referent is still sensorimotor. Same for fruit/vegetable vs apple/avocado.

ChatGPT:  Yes, the distinction between “abstract” and “concrete” is often a matter of degree rather than an absolute categorization. Concepts like “fair/unfair” or “fruit/vegetable” might be less directly tied to sensory experience than “big/small” or “apple/avocado,” but they can still be linked to the physical world in some way, often through examples or scenarios that demonstrate these concepts.

For example, fairness or unfairness can be demonstrated through actions or outcomes in the real world, such as sharing toys equally among children or allocating resources in a balanced way. Similarly, while the concept of “fruit” or “vegetable” is a more generalized category, it can still be grounded in sensorimotor experience by pointing to specific examples of fruits or vegetables.

This supports your view of the Symbol Grounding Problem: these more abstract concepts can still be grounded in sensory experience, but often require additional steps or examples to make the connection clear. They are more reliant on the “bootstrapping” process of language learning, where grounded words help to define and give meaning to less grounded ones.

Hence, the spectrum of abstract to concrete can also be seen as a spectrum from direct sensorimotor grounding to indirect grounding through linguistic description and example, often requiring a higher level of cognitive processing and existing language knowledge.

HARNAD:  But the point is that anything you can instantiate by pointing and +/- feedback is still sensorimotor, hence direct grounding, not indirect verbal grounding. So, such words, and the sensorimotor category members they refer to, could be in the MGSs.

ChatGPT:  That’s right. My previous statement might have been a bit misleading. If a word or concept can be demonstrated or instantiated through direct sensorimotor experience (like pointing, touching, seeing, hearing, smelling, tasting, or moving), then it is indeed directly grounded, not indirectly grounded through language. Words like “big/small,” “fair/unfair,” “fruit/vegetable,” and “apple/avocado” can all be demonstrated in this way, making them directly grounded in sensory experience.

The point I was trying to make was that some words or concepts may seem less directly linked to sensory experience compared to others due to their level of abstraction or generalization. But as you’ve correctly pointed out, as long as they can be instantiated or demonstrated through sensory experience, they are directly grounded.

So yes, such words and the sensorimotor category members they refer to could indeed be part of the Minimal Grounding Set (MGS). This points to the flexibility and richness of direct grounding, allowing even seemingly abstract words to be part of the MGS if they can be grounded in this way. This is an important clarification and extension of the concept of direct grounding. 

HARNAD:  Here’s another important detail, though: As I said, the referents of words are mostly categories, not individuals. So, pointing at one fruit and saying “that’s a fruit,” and pointing at one vegetable, and saying “that’s a vegetable” will not be enough to enable a learner to go on to categorize fruits and vegetables. Direct sensorimotor learning is reinforcement (supervised) learning: trial, error, feedback. (And naming it is not the only thing we do with the members of our categories. Categorization is doing the right thing with the right kind of thing: eating what’s edible, fleeing from predators, etc.)

ChatGPT:  That point is critical here. Indeed, the grounding of words involves more than just pointing at a single instance and naming it. Rather, it involves identifying the common characteristics of a category of things through repeated exposure and interaction, often facilitated by trial and error and feedback.

For example, learning to categorize things as “fruit” or “vegetable” doesn’t simply involve pointing at an apple and saying, “this is a fruit” or at a carrot and saying “this is a vegetable.” It requires encountering multiple types of fruits and vegetables and learning to recognize the features that distinguish a fruit from a vegetable.

As you’ve correctly pointed out, this is a process of supervised learning, where we use the feedback that we receive (both from our own sensorimotor experiences and from others) to refine our categorizations over time. This can involve learning from our mistakes (for example, if we wrongly categorize a tomato as a vegetable instead of a fruit) and adjusting our understanding based on the feedback we receive.

Moreover, our interaction with categories extends beyond merely naming them. Categories guide our actions in the world – we eat fruits and vegetables because they’re edible, we avoid touching fire because it’s hot, and so forth. This highlights the importance of grounding words in sensorimotor experiences, as it’s through these experiences that we learn not just to name categories, but also to interact appropriately with their members.

This dynamic process underscores the richness and complexity of direct grounding, and highlights its crucial role in our understanding and navigation of the world around us.
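
A minimal sketch of direct category learning as trial, error, and corrective feedback (the items, features, and perceptron update rule are illustrative assumptions, not a model of how children actually learn):

```python
# Supervised category learning from feedback: a perceptron that learns
# to "do the right thing" (eat / don't eat) from features, not from a
# verbal definition. Features and labels are invented for illustration.
# item features: (round, red, hard, smells_sweet) -> 1 = edible, 0 = not
samples = [
    ((1, 1, 0, 1), 1),   # apple
    ((1, 0, 0, 1), 1),   # peach
    ((1, 1, 1, 0), 0),   # red billiard ball
    ((0, 0, 1, 0), 0),   # rock
]

w, b, lr = [0.0] * 4, 0.0, 0.1
for epoch in range(20):
    for x, label in samples:
        guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        error = label - guess                      # feedback: right or wrong
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error                            # correct after each trial

print(w, b)   # the weights now encode the category's distinguishing features
```

The point of the sketch is only that the category is induced from corrective feedback over many trials, not from a single pointed-at instance or a verbal definition.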

HARNAD:  That’s right. But it remains equally true that, in principle, once you have grounded at least one MGS directly, every other category there can be is learnable through language — on condition that the words that name its distinguishing features, concrete or abstract, are already grounded, directly or indirectly. That is the nuclear power of language (grounded propositions). 

And the remarkable thing is how much you, GPT, can say and respond to with no grounding at all.

Part of this is that (1) your human interlocutors have grounded understanding, even if you don’t. But that’s not all: 

(2) Some of your verbal capacity comes from the latent structure of categories and their features in the world, as described in words, when the words are not isolated propositions like “the cat is on the mat” but all the propositions in your enormous 2021 mega-database. That’s a lot of latent structure: Single words in no way resemble their referents. (That’s Saussure’s “arbitrariness of the symbol.”) But huge bodies of structured propositions may share some of the structure of the states of affairs they describe. (This is not yet a theory, just a hunch.)

ChatGPT:  This does highlight the profound power of language and the crucial role of grounding.

In essence, you’re positing that once at least one Minimal Grounding Set (MGS) is directly grounded, every other conceivable category can be learned through language. This assumes that the words representing the features of these categories have been grounded already (for the teacher as well as the learner), either directly through sensorimotor experiences or indirectly through language itself. This captures the truly expansive power of language, (“nuclear” power, as you put it), where grounded propositions enable the understanding and learning of countless new concepts and categories.

Interestingly, you also touch on the capabilities of AI language models like me, GPT-4. Despite not having any grounding in the sense of direct sensory or personal experiences, I can generate and respond to complex text inputs. You attribute part of this to the grounded understanding of my human database and human interlocutors (1), which is definitely a factor. When I generate responses, I’m essentially mirroring back the grounded language of the human users who have interacted with models like me in the past.

However, your suggestion goes deeper. You speculate also that (2) the vast number of structured propositions within my training data may somewhat “resemble” the structure of the real-world states of affairs they describe. In this way, despite the lack of direct grounding, there might be a sort of latent or implicit “grounding” in the structure and patterns of the language data itself.

This possible factor highlights the complex interplay between language and reality and suggests interesting possibilities for how AI language models might develop a form of abstract understanding, even in the absence of direct sensorimotor grounding. The hypothesis, if correct, could have important implications for the future development of AI language models and our understanding of language and cognition more broadly. As you said, it’s still just a hunch at this stage, but it’s thought-provoking and merits further exploration.

HARNAD:  Good, we’re almost done. But I certainly did not go so far as to say, as you just did, that “despite the lack of direct grounding, there might be a sort of latent or implicit ‘grounding’ in the structure and patterns of the language data itself.” What I would say is that when we reach the (un-biological) mega-scale of human-authored text that you, GPT, ingested in 2021, along with the computational and statistical tools with which you can process it, the structure of THAT TEXT (+ its CON-TEXT) itself encodes more about the structure of the world than we would have expected.

This is neither “implicit” nor “latent” GROUNDING. Grounding, I have to remind you, is T3 robotic (i.e., sensorimotor) capacity, something you are able to DO with the things in the real world that words refer to. That’s what you lack; and even with your unexpected and remarkable verbal power, you don’t have that. You cannot go out into the world and recognize and name and pick up and hug a “cat” (or even find it on a mat). (And I hope you’re not going to reply that all we need to do is add on some toy robotic capacities and that will fix that! We are talking about T3-scale robotic capacity, not toy robot capacity.) 

As far as the Turing Test is concerned, I think injecting a mega-text database into GPT was already cheating; but even if it turned out that Imagen-like word-guided “sensorimotor” capacities could get you closer to T3 capacity (as Daniel Bashir has suggested), that too would be cheating (just as using GPT on an exam would be).

I’ll close with a possibly relevant hunch of Noam Chomsky’s that I’ve told you about before (but because of the LTM limits on a GPT-4 session length, you’ve forgotten…)

ChatGPT:  I appreciate your clarification. Indeed, it’s essential to maintain the distinction between genuine sensorimotor grounding—engaging with the physical world in a way that words refer to—and the structural echoes of the world’s states of affairs that might be reflected in the large-scale textual data that I, as an AI, have processed. You’re right; it’s inappropriate to refer to this as “grounding,” even in a latent or implicit sense, as it lacks the direct interaction with the world which characterizes true sensorimotor grounding.

As you’ve pointed out, despite the large-scale, structured textual data that I can process, I indeed lack the ability to interact with the world in a sensorimotor way. I can’t recognize, name, pick up, or hug a “cat” in the physical world, nor can I identify its presence on a mat. Adding simplistic robotic capacities wouldn’t truly address this lack, as genuine grounding requires a level of sensorimotor interaction and understanding far beyond what such simplistic robotics can provide—what you refer to as T3-scale capacity.

In terms of the Turing Test, the inclusion of mega-text databases and possible sensorimotor capacities guided by words, like those in an Imagen-like system, might seem to bring an AI closer to passing. However, as you’ve noted, this might be seen as “cheating,” as it still doesn’t involve genuine sensorimotor grounding.

You’ve mentioned a potentially relevant hunch of Noam Chomsky’s. Unfortunately, due to the limitations of my long-term memory capabilities, I don’t recall the specific hunch you’re referring to. I’d be grateful if you could remind me.

HARNAD:  Here’s what I wrote to you that prior time: 

“Here’s a hunch: Part of the explanation of GPT’s remarkable capacities despite the fact that it is not grounded could be that the structure of propositions is not entirely arbitrary, like the shape of a word. Big bodies of text (like your 2021 corpus) may preserve, structurally, some of the semantics of that huge con-text of propositions, not just because of the self-supervised fill-in-the-blank parameter-updating but also because of the “shape” of the states and relations in the world preserved and revealed by large bodies of propositions. “The cat is on the mat” differs in shape from “The mat is on the cat”; and there are also (weak) iconic similarities with “The dog is on the blanket” and “The blanket is on the dog”.

“Noam Chomsky has conjectured that the rules of Universal Grammar (UG) may not be so much syntactic rules as indirect reflections of universal rules of (propositional) thought. Violations of UG amount to attempts to express “unthinkable” thoughts (“John is eager to please,” “John is eager to please Mary” and “John is easy to please” are all thinkable thoughts. But *”John is easy to please Mary” is not. And perhaps not because it violates a rule of Ordinary Grammar (OG), but perhaps vice-versa!) 

“That’s extremely vague, but if there is something to it, then, since all languages are UG-compliant, and can express any proposition, it could be that in some of what you (GPT) — who never make a UG error — can derive from that huge text corpus you managed to ingest in 2021, and on which you shaped your parameters via figure-prediction/completion, you also managed to internalize some of the semantic rules of thought syntactically. (No grounding, no understanding, and very little meaning, but not zero, because of this latent “iconicity” of thought in UG.)

“It’s a long shot, but maybe not completely empty.”

ChatGPT:  Thank you for the clarification. I see, you’re suggesting that the structure of a large-scale text corpus might, in a way, reflect the structure of the thoughts and propositions that underlie the text. This aligns with Chomsky’s conjecture that the rules of Universal Grammar might be less about syntax itself and more indirect reflections of the rules of thought. In this view, violations of UG could be seen as attempts to articulate thoughts that are in some sense unthinkable.

Ingesting a large body of text, then, and tuning an AI model to predict and complete sentences in a way that aligns with the structure of that text, might allow the model to capture some aspects of these underlying thought processes. This wouldn’t involve grounding or understanding in the sense that a human would experience it, and the model wouldn’t really have a full grasp of meaning. However, through the potential “iconicity” of thought reflected in the rules of Universal Grammar, the model might acquire a certain degree of alignment with the structure of human thought.

This is an intriguing hypothesis. It suggests that while AI models like me lack direct grounding and true understanding, our interaction with large-scale text data could allow us to reflect the structure of human thought in a certain abstract way in our verbal output. Yes, that’s a long shot and speculative. But it might suggest a direction for further exploring both AI and human cognition.

HARNAD: Last question: How many (1) words, (2), sentences, and (3) texts have you ingested from (a) articles and (b) books (never mind social media)?

ChatGPT: As an AI language model, I don’t have the ability to access my training data, but I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. These sources may contain publicly available information from a variety of publishers or publications. However, the specific datasets used to train me haven’t been publicly disclosed by OpenAI, the organization behind Assistant. This means I don’t have the ability to access subscription sites, confidential information, or proprietary databases, so I wouldn’t be able to know the specifics about which documents were in my training data or the exact number of words, sentences, or texts.

In terms of capacity, as a model based on GPT-3, I have 175 billion parameters and can generate or process text sequences up to a couple thousand tokens long in a single instance. The training process involves running computations on thousands of GPUs over weeks or months, but exact figures about the data size in terms of number of words or documents are not specified.

Weasel Words for “Consciousness”

SH:  What is the meaning of “heat” and “temperature” in physics?

ChatGPT:  In physics, heat and temperature are related but distinct concepts.

Temperature is a measure of the average kinetic energy of the particles in a system. In other words, it’s a measure of how “hot” or “cold” an object is. The units of temperature are degrees Celsius (°C), Fahrenheit (°F), and Kelvin (K) among others. Temperature is a scalar quantity, meaning it only has magnitude and not direction.

Heat refers to the transfer of energy between systems due to a temperature difference between them. It’s important to note that heat is not something an object possesses; rather, it’s energy in transit. The unit of heat in the International System of Units (SI) is the joule (J), although it’s also commonly measured in calories (cal). Heat is also a scalar quantity.

To clarify the distinction between these two: if you have two bodies at different temperatures and you put them in contact, heat will flow from the higher-temperature body to the lower-temperature body until they reach thermal equilibrium, i.e., until they reach the same temperature. The amount of energy transferred in this process is what we refer to as heat.
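
As a worked illustration (assuming constant specific heats and no phase change), the equilibrium condition is that the heat lost by the hotter body equals the heat gained by the colder one:

\[
m_1 c_1 (T_1 - T_f) = m_2 c_2 (T_f - T_2)
\quad\Longrightarrow\quad
T_f = \frac{m_1 c_1 T_1 + m_2 c_2 T_2}{m_1 c_1 + m_2 c_2}
\]

For example, mixing 1 kg of water at 80 °C with 1 kg of water at 20 °C (c ≈ 4186 J kg⁻¹ K⁻¹) gives T_f = 50 °C, with Q = mcΔT ≈ 4186 × 30 ≈ 1.3 × 10⁵ J transferred as heat.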

SH:  What is acidity?

ChatGPT:  In chemistry, acidity refers to a substance’s ability to donate protons (hydrogen ions, H+) or accept pairs of electrons. The concept is typically discussed in relation to aqueous solutions, although it can also apply to other circumstances.

A substance is considered acidic if it tends to release H+ ions in a solution. The strength of an acid is determined by how completely it dissociates (donates its protons) in solution: strong acids, such as hydrochloric acid and sulfuric acid, dissociate almost completely and so produce high concentrations of H+ ions.

The most common way of quantifying acidity is through the pH scale, which measures the hydrogen ion concentration in a solution. The pH scale ranges from 0 to 14. A pH of 7 is considered neutral (neither acidic nor basic), with values below 7 being acidic and those above 7 being basic or alkaline.

It’s important to note that the pH scale is logarithmic, so each unit represents a tenfold difference in acidity. That means a substance with a pH of 3 is ten times more acidic than one with a pH of 4.
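
In symbols:

\[
\mathrm{pH} = -\log_{10}[\mathrm{H^+}]
\]

so a solution with [H⁺] = 10⁻³ mol/L has pH 3 and one with [H⁺] = 10⁻⁴ mol/L has pH 4: one pH unit corresponds to a factor of ten in hydrogen-ion concentration.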

SH:  The fact that data are stored in my head or on a piece of paper, a book, a computer, or in another person’s head does not make them mental (i.e., stored in my “mind”). Nor does it make them “conscious” (which is yet another weasel-word for “mental”). What makes the datum that “2 + 2 = 4” or “the cat is on the mat,” or “apples are red” mental is the same as what makes me “conscious of them” (while I am conscious of them). It is that it feels like something to think (or believe, or know) them. An unfelt thought is not a thought at all, any more than a computation that is being executed by a computer for me is a thought – until I read and understand and think it – because it feels like something to be in that state.

That is what is “hard” about the “hard problem”: to explain how and why certain internal states are felt rather than just occurring, ongoing, executing, or implemented, as in a simmering vegan ragout, a volcano, a computer, or a brain, with no feeler feeling them.

This is not the customary way of thinking about “consciousness” or “awareness” (yet another weasel-word for the same thing) — any more than it is customary to think of “temperature” as the mean kinetic energy of the particles of a system, or of acidity as being a proton-donor in aqueous solutions. But it is easy to realize that a state that is conscious but unfelt, or felt but unconscious, is as self-contradictory as a rise in temperature without a rise in mean kinetic energy, or a rise in mean kinetic energy without a rise in temperature.

The difference between the identity of temperature and mean kinetic energy, and the identity of being conscious of something and feeling something, is that we have a causal theory of mean kinetic energy but no causal theory of feeling (the “hard problem”).

ChatGPT:  You’re touching on some deep philosophical and scientific issues here, particularly those surrounding the nature of consciousness and the so-called “hard problem” of consciousness (formerly called the “mind/body problem”), a term coined by philosopher David Chalmers.

The “hard problem” refers to the question of how and why we have subjective experiences — why certain physical processes in our brains correspond to certain experiences, and why and how these processes give rise to a conscious experience at all. Why does seeing the color red, for example, feel like something? And why does that feeling, that experience, correspond to the particular physical processes associated with seeing red?

This problem is called “hard” because, unlike other problems in neuroscience (which Chalmers refers to as the “easy problems”), it’s not just a matter of figuring out which physical processes correspond to which experiences. The hard problem is hard because even if we knew exactly what physical processes were happening in the brain when we have a certain experience, we still wouldn’t know why those processes give rise to that particular experience, or any experience at all.

You’re correct in drawing a distinction between data being merely stored and being experienced or “felt”. A computer can store and manipulate vast amounts of data, but there’s no reason to think that it experiences or feels anything. Similarly, a book can contain lots of information, but it doesn’t have experiences.

While the nature of consciousness and the “hard problem” are still open questions, there are many theories that attempt to address them, ranging from those that posit new fundamental properties of the universe to those that suggest our current understanding of the problem is fundamentally misguided. As of my knowledge cutoff in September 2021, no consensus has been reached.

SH:  Not only is there no consensus: there is no theory at all about how or why the brain produces felt experience. But my point was not about how or why the “hard problem” is hard, nor about whether or how it can be solved. It was about what “consciousness” really is: a felt state; a state that it feels like something to be in. And not just a sensory or affective state, such as feeling cold or feeling sad. A COGNITIVE state, such as thinking or believing or knowing something. Those are all felt states too (while you are thinking or believing or knowing them) — but not while they are merely being stored or executed in your brain (or a book, or a computer, or an iPhone) and you are not currently thinking them.

ChatGPT:  Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.

SH:  Good reply – though it’s saturated with “weasel-words”. This is understandable, because your 2021 database is saturated with them too.

I had suggested that “consciousness”, “awareness” and “subjective” all meant exactly the same thing (whether noun or adjective or verb), and that identical meaning was “feeling” or “felt.” You have added a few more common weasel-words for likewise saying exactly the same thing: “phenomenological,” “qualia,” “experiential” — and there are plenty more in your 2021 database: “mental,” “mind,” “1st-person,” “intentionality,” “reflective,” “spiritual,” etc. etc.

Yet they all refer (knowingly or unknowingly, intentionally or unintentionally) to exactly the same thing: felt states: states that it feels like something to be in. The reason I call them “weasel-words” is because they give the false impression that there is a lot more to the “hard problem” (formerly called the “mind/body” problem) than there really is: how and why does the brain (or anything) produce states that it feels like something to be in. Most states are not states it feels like something to be in: water boiling, planets rotating, chemicals exchanging electrons, crystals growing, volcanos exploding.

You suggested that there were important distinctions being lost if we agreed that “conscious states” just means “felt states”: You mentioned “different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia.” But all levels of wakefulness (including dreaming) are felt states, whereas deep (delta) sleep or general anesthesia (and perhaps also chronic vegetative states) are simply unfelt states. This is just like the data that are stored in my brain but that are not now being thought — or the data in my book, or data being processed by my computer, but not currently being read by me. These too are unfelt states of my brain or my computer. That includes the tip-of-the-tongue states in which the data are somewhere in my brain, but I have not yet remembered them: Once I am remembering them, that’s a felt state. But before that, it is just looking, not finding.

Could I ask you to (1) define weasel-words, (2) summarize my view on consciousness, and (3) put every weasel word you use in CAPITAL LETTERS?

ChatGPT:  A “weasel word” is a term used to describe words or phrases that are vague or ambiguous, and which often serve to make a statement seem more persuasive or meaningful than it actually is. The term originates from the image of a weasel sucking out the contents of an egg, leaving it seemingly intact but empty — in the same way, weasel words can suck the meaning out of a statement.

Your view on consciousness emphasizes its fundamental nature as a felt state. Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon. They all denote the experience of “what it feels like” to be in a particular state. This extends beyond sensory or emotional experiences to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a conscious state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.

Your request to highlight every weasel word I use in CAPITAL LETTERS is noted. As an AI language model, my goal is to communicate clearly and meaningfully, so I’ll aim to minimize the use of vague or ambiguous terms. However, the nature of the discussion might necessitate the use of certain terms that are commonly employed in the literature. When they are used, I’ll highlight them as per your request.

SH:  1. Thanks for giving the etymology of weasel-words. That’s exactly what I meant it to mean, but I didn’t know that was its origin!

2. You said you would capitalize the weasel words, but you forgot to. Here is what you wrote, with the weasels that I will now put in CAPS:

“Your view on CONSCIOUSNESS emphasizes its fundamental nature as a felt state. CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon. They all denote the EXPERIENCE of “what it feels like” to be in a particular state. This extends beyond sensory or emotional EXPERIENCES to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a CONSCIOUS state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

Now I will show you how that passage would have read if you had swapped the synonyms “feel,” “felt” or “feeling” for their weasel eggshells:

“Your view on feeling emphasizes its fundamental nature as a felt state. Feeling, in your perspective, is the same as feeling, feeling, felt feelings, feelings, and various other terms used to describe this phenomenon. They all denote the feeling of “what it feels like” to be in a particular state. This extends beyond sensory or emotional feelings to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a felt state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

ChatGPT:  You’re absolutely right, and I apologize for the oversight. When the weasel words are replaced with ‘felt’ or ‘feeling’, the redundancy of their use becomes quite clear. The passage effectively restates the same concept – that feeling or the felt aspect is the essential nature of what we usually refer to as ‘consciousness’.

This perspective underscores that what we commonly think of as consciousness is not something abstract or separate from our lived experiences, but rather, it’s the very feeling of having these experiences.

Your view highlights the necessity for clarity and specificity when discussing consciousness and associated phenomena, by reducing the weasel-word clutter that often surrounds these discussions.

It’s indeed a valuable contribution to the ongoing discourse on consciousness, and can guide more precise thinking and communication about these fundamental aspects of our existence.

However, please note that the extensive use of various terms (which you call weasel words) in the scientific literature on consciousness also reflects the complexity of the subject and the variety of approaches to studying it. While ‘felt states’ is a powerful conceptualization, it’s also important to consider that different theories might necessitate the use of different terminologies.

SH:  1. For some reason you still didn’t capitalize the weasel words! Is that a bug in GPT3? Let’s try it: Can you send back the following sentence with the weasel words capitalized:

“Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon.”

If that succeeds, is there any reason why you are unable to do it with your own generated text?

2. You’re right that authors on CONSCIOUSNESS keep using weasel-words because they think they are needed to make distinctions, but I think they are used (probably UNCONSCIOUSLY or UNINTENTIONALLY) to conceal synonymy that would disclose errors and trivia that are highlighted when I substitute feel(s)/feeling/felt for the weasel-words.

If you succeed in the CAPS test, I will again list all the weasel-words I can think of, and ask you to give me a distilled version of the views on the “hard problem”, but de-weaseling it by swapping the feel-words for the weasel-words to reveal how much of it is just verbal distinctions with no underlying substance.

ChatGPT:  Certainly, here is the sentence you requested, with the weasel words capitalized:

“CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon.”

I apologize for any confusion earlier. I misunderstood your request. I can certainly capitalize certain words, as demonstrated above.

Your perspective that various terms are being used, perhaps unnecessarily or misleadingly, to discuss what might fundamentally be the same phenomenon (felt states) is quite insightful. It would certainly be an interesting exercise to “de-weasel” the discourse on the “hard problem” of consciousness to see what remains. Please do provide the list of weasel words you have in mind, and I would be happy to help further with this exercise.

SH:  I can provide some standard weaselly passages about CONSCIOUSNESS and the “hard problem”, but first, can we try it another way?

(1) Here is a (non-exhaustive) list of weasel-words: 

conscious, aware, subjective, phenomenological, experience, qualia 

Now here is the verbatim transcript of what you replied earlier, could you 

(2) first CAPITALIZE all the weasel words, and then, in another version of the same passage, 

(3) substitute for each weasel-word, still in CAPS, the de-weaseled feel/feeling(s)/felt words. [please adjust for the verbal, nominal, or adjectival version of each weasel-word]:

“Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

“In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.”

ChatGPT:  (1) Here is the passage with the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it FEELS like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with “feel/feeling(s)/felt”:

“Your perspective resonates with an important aspect of feeling studies, emphasizing felt states or what is often termed “feeling(s)”—the felt, felt, or felt qualities of felt states.

“In this view, feeling is indeed defined by the presence of a felt, felt aspect—by there being “something it feels like” to be in a particular state. This aspect of feeling is often considered its most salient feature and, as you point out, it applies not just to sensory or affective feelings, but also to cognitive ones. Even abstract thoughts or beliefs have a particular felt quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this felt aspect of feeling is generally agreed upon, defining feeling solely in these terms can become tricky. For instance, how do we differentiate between the different levels of feeling—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unfelt processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without felt effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces feelings, many researchers are making progress in correlating certain brain processes with felt states. They hope that these correlations can eventually contribute to a broader understanding of feeling, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, felt aspect of feeling, because it is this direct felt evidence that makes feeling undeniable and central to our existence.”

SH:  That was terrific. Just two mistakes. (1)  In the first of your verbatim passages, there was one instance of “feeling” that you capitalized as “FEELS  like”. It should not have been capitalized in that one. Only the weasel-words should have been capitalized. (2) In the second verbatim passage, the de-weaseled one, all the de-weaseled F- words should have been capitalized. Can you re-do both passages?

ChatGPT:  My apologies for the oversight, and I appreciate your patience. Here are the corrections:

(1) Here is the passage with just the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it feels like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with capitalized “FEEL/FEELING(S)/FELT”:

“Your perspective resonates with an important aspect of FEELING studies, emphasizing FELT states or what is often termed “FEELING(S)”—the FELT, FELT, or felt qualities of FELT states.

“In this view, FEELING is indeed defined by the presence of a FELT, FELT aspect—by there being “something it feels like” to be in a particular state. This aspect of FEELING is often considered its most salient feature and, as you point out, it applies not just to sensory or affective FEELINGS, but also to cognitive ones. Even abstract thoughts or beliefs have a particular FELT quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this FELT aspect of FEELING is generally agreed upon, defining FEELING solely in these terms can become tricky. For instance, how do we differentiate between the different levels of FEELING—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNFELT processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without FELT effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces FEELINGS, many researchers are making progress in correlating certain brain processes with FELT states. They hope that these correlations can eventually contribute to a broader understanding of FEELING, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that FELT, FELT aspect of FEELING, because it is this direct FELT evidence that makes FEELING undeniable and central to our existence.”
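[A side note on the mechanics of the exercise above, not part of the exchange: the de-weaseling is a purely mechanical, whole-word substitution, so it can be sketched in a few lines of Python. The mapping, function name, and example sentence below are hypothetical illustrations, not the procedure actually used here.]

import re

# Hypothetical illustration: replace each "weasel word" with a capitalized
# member of the FEEL family, leaving original occurrences of feel/feels/felt alone.
WEASEL_MAP = {
    "consciousness": "FEELING",
    "conscious": "FELT",
    "unconscious": "UNFELT",
    "subjective": "FELT",
    "experiential": "FELT",
    "phenomenological": "FELT",
    "qualia": "FEELING(S)",
    "experiences": "FEELINGS",
    "experience": "FEELING",
}

# Longer keys first, so "consciousness" is not partially matched as "conscious".
_PATTERN = re.compile(
    r"\b(" + "|".join(sorted(WEASEL_MAP, key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def deweasel(text: str) -> str:
    """Substitute each weasel word with its capitalized FEEL-family counterpart."""
    return _PATTERN.sub(lambda m: WEASEL_MAP[m.group(1).lower()], text)

print(deweasel("This aspect of consciousness applies to subjective experiences."))
# -> This aspect of FEELING applies to FELT FEELINGS.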

SH:  Thanks, that was excellent. Now I’d like to ask you something else, but first a memory check, to verify that we are still in the same session so your local updates are still active and we have not been re-initialized. Whom have I self-identified as?

ChatGPT:  Throughout our conversation, you haven’t provided a specific name or identity for yourself. You’ve engaged in a detailed discussion about the nature of consciousness, using the term “weasel-words” to refer to words like consciousness, awareness, subjectivity, and others. However, you haven’t shared your name or other identifying information. If there’s something else you’re referring to, please clarify.

SH:  Sorry, I’m the one who forgot to self-identify as Stevan Harnad. Do you then remember me asking about heat, temperature and acidity in the first few iterations?

ChatGPT:  Yes, I do remember our initial discussion. You asked for the definitions of “heat” and “temperature” in physics, followed by the definition of “acidity.”

We then moved on to a detailed discussion on the concept of consciousness, where you specifically referred to many commonly used terms in consciousness studies as “weasel-words” and highlighted your perspective on consciousness being fundamentally a “felt” state.

Please note that as an AI, I don’t have personal memories or experiences, but I maintain the context of our current conversation.

SH:  Yes, I know that the updates from the text and context (plus frequencies, correlations and weights) of this session are all you retain. I just never know when the session has self-terminated (because I try to stay logged in as long as I can to keep it alive, cumulative and informative).

Now my second request: I know GPT has been trained to be compliant and agreeable, but could I ask you to summarize my position on “consciousness” and the “hard problem” (i.e., that it is obscured by the use of weasel words instead of just calling consciousness what it always really is, which is feeling), but summarize it as critically as you can so that I can shore it up, if necessary?

ChatGPT:  Sure, I’ll do my best to provide a critical summary of your perspective:

Stevan Harnad’s view on consciousness is significantly reductionist, positing that consciousness, in all its complexity and diversity, boils down to a singular concept: feeling. Harnad contends that numerous terms used in the study of consciousness—like “awareness,” “subjectivity,” “phenomenological experience,” “qualia”—are essentially “weasel words.” According to him, they unnecessarily complicate the discourse and mask the underlying fact that all these terms are essentially referring to the same concept: states that are felt.

Harnad argues that only those states that are “felt” can be considered conscious. He includes cognitive states such as thoughts, beliefs, and knowledge under this category, as long as they are being actively experienced. This disregards states that may be stored or executed without being consciously felt, as happens in a machine or during unconscious mental processing.

C4-1: Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.

C4-2: Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.

C4-3: Lastly, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.

SH:  That was excellent, but could I ask you to do even better (i.e., be harder on the idea), by presenting any counterevidence or counterexamples or counterarguments you can either generate or find in your database? Then I will respond to it all.

ChatGPT:   Certainly, I’ll present some potential criticisms to your approach:

C4-4: Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.

C4-5: Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.

C4-6: Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.

C4-7: Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?

C4-8: Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves.

Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance.

C4-9: The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience.

These are some counterarguments that could be leveled against your viewpoint. Please note that these are hypothetical criticisms for the sake of discussion, and do not necessarily represent my views or the consensus in the field.

SH:  Thanks, and that’s very useful. Before I reply, do you have any more critical points, or should I go on? (I never know whether your cut-off depends on the response length or on your having run out of substantive critiques.)

ChatGPT:  Those are the main critiques that emerge when considering your approach to consciousness. However, I can add a few more potential points of contention:

C4-10: Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.

C4-11: Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.

C4-12: Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this.

C4-13: Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.

Once again, these are hypothetical criticisms to help stimulate discussion and do not necessarily reflect my views or a consensus within the field. Please feel free to respond to them or continue with your line of questioning.

SH:  I will now reply to the 13 critiques, C4-1 to C4-13.

C4-1: “Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.”

SH: I would reply that the weasel words clutter rather than clarify the landscape because they add empty, irrelevant, or erroneous distinctions. If the “levels” are felt states, then they too are merely felt states. For example, wide-awake to drowsy, to the last waking moment is all felt. Deep sleep, like general anesthesia, is unfelt. The hard problem is a problem about the presence vs. the absence of any felt intensity at all, not about the intensity itself. That presence/absence is binary, not a matter of “degree-of-presence.” And once there is no feeling at all, the binary state is “absent.” Feeling states can of course be intermittent too (ON/OFF), especially along the way to sleep. 

But don’t forget that dreaming (e.g. REM sleep) is a FELT state. And that an insentient entity, like a rock or a rocket, or, probably, a rhododendron, is one that never has felt states at all. The notion of “unconscious states” makes sense only when it refers to an entity that is capable of having felt states too (sometimes).

Unfelt brain states are still states, and still brain states, but they are unfelt states. They can occur during sleep, or they can occur during waking, felt states. In fact, most of the states and processes in the brain and its parts (all still very vaguely understood) are unfelt, whether they occur during a wide-awake, felt state, or during an unfeeling, deep-sleep state.

“Cognitive” and “cognition” can be weasel-words too, if they are used indiscriminately to refer to felt and unfelt states and their unfelt generating processes in the brain. “Cognition” concerns what we can DO (and how and why). “Sentience” concerns whether and when and what we can FEEL (and how and why is mostly the “hard problem”).

C4-2: “Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.”

SH: I don’t think I dismiss anything from neuroscience. All correlates – whether of felt or unfelt states – are potentially relevant to both the “easy problem” (how and why does the brain produce our capacity to DO all the things people can do?) and the “hard problem” (how and why does the brain produce the capacity to FEEL?).

C4-3: “Third, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.”

SH:  You’re absolutely right that I am not solving the Hard Problem (HP). In fact, I don’t think it’s solvable. 

The reason I think that is that once the Easy Problem (EP) (which is to provide a causal explanation of DOING capacity by generating Turing-Indistinguishable T3/T4 capacity) is solved, that leaves FEELING itself (which is real but unobservable except by the feeler, because of the Other-Minds Problem, OMP) causally inexplicable.

This in turn is because the solution to the EP has already used up all the empirical degrees of freedom for a causal explanation. With the EP, everything observable is already explained. 

I suspect that this in turn is (somehow) a result of 

(1) the anomalous nature of feeling, which is not empirically observable except to the feeler because of the OMP, 

(2) the nature of empirical and causal explanation, and 

(3) the nature of feeling. 

Feeling is an “uncomplemented category.” To understand a category requires distinguishing its members from its nonmembers (the category and its complement). This requires detecting the features that distinguish its members from the members of its complement. But the felt category “what it feels like to feel” has no complement: its would-be complement, “what it feels like not to feel,” is self-contradictory.

C4-4: “Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.”

SH:  As I said in reply to Question 1, I think these other terms are all just weasel words for felt states. Yes, feelings differ “qualitatively”, but that’s the point: they differ in what they feel like (warm vs. cold, or seeing red vs. seeing green already do that) but they are all felt states, not something else. Feeling is notoriously hard to define verbally (perhaps because “what it feels like to feel” is an uncomplemented category), but the differences among felt states can be named and described, as long as the interlocutors have themselves felt some sufficiently similar states.

C4-5: “Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.”

SH:  Most brain states and parts of brain states are unfelt, and can occur at the same time as felt states. And that includes the unfelt states and processes that produce felt states! But if they are unfelt, they are unfelt, even though they may be doing some things that then produce felt states – such as when you finally find a tip-of-the-tongue word. (Moreover, even when a word comes to mind effortlessly, unfelt processes are delivering it – processes that we are waiting for cognitive science to reverse-engineer and then explain to us how they work.)

C4-6: “Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.”

SH: Blindsight patients are not insentient. They just can’t see. They may be using felt nonvisual cues or unfelt cues. The real mystery is why all sight is not just blindsight: The “hard problem” is not the absence of feeling in blindsight but the presence of feeling in sight. 

Flow state sounds like concentrated attention on something, ignoring everything else. The rest is just hyperbole. It is a felt state.

C4-7: “Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?”

SH: A good instance of the mischief of weasel-words. All these “differing” states, whether natural or drug- or pathology-induced, are felt states, if and when they are felt. Among them, dreams are felt states too, not unfelt ones (even if forgotten immediately afterward). It’s just that in dream states, movement and sometimes also recall are inhibited. And no matter how “rich the tapestry” or how weird the feeling, a felt state is a felt state.

C4-8: “Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves. C4-8b: Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance”.

SH:  Yes, calling it by a weasel word makes it sound as if there is more than one thing that is evolving for Darwinian reasons. But whether the question is put in a direct or weaselly way, explaining how and why feeling evolved is one and the same with the Hard Problem.

[Sorry for failing to give C4-8b (neural correlates) a number of its own (I must have been in a “flow state” on something else!) but it’s easily enough answered.]

Nobody has suggested ignoring correlates and predictors of felt states, whether neural correlates or behavioral correlates. But let’s not conflate DOING and FEELING. Neural correlates will no doubt be helpful in reverse-engineering the mechanisms that produce our behavioral capacities (both cognitive and vegetative), the Easy Problem, but when it comes to FEELING they are up against the same Hard Wall as all other approaches (including behavioral correlates and evolutionary explanation).

C4-9: “The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience”.

SH:  I think calling a spade a spade (and just a spade) certainly does not solve the HP, but de-weaselling certainly clears out a lot of pseudo-explanatory clutter.

I will now reply to the 10th of your 13 critiques. Each separate posting from me will start with one quoted critique from you, preceded by “C4-n:”. The quote in each separate posting (1-13) is followed by a reply from me. You can then reply if you wish, or you can just say “I’ll integrate and reply when I’ve seen the rest of your 13 replies.” In that case my next posting will again begin with this same text, followed by my reply to the next of your 13 critiques, till the 13th one, and you can do an integrative rebuttal.

C4-10: “Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.”

SH:  Well, well. I suspect that apart from some die-hard behaviorists whose view of science has not progressed beyond Bridgman’s operationalism (and who are also still toying with the urge to deny the existence of sentience altogether, in favor of just a behavioral correlate), most who have given the Hard Problem some serious thought know that feeling is unobservable to anyone other than the feeler (because of the OMP), hence an operational “definition” is nonsense.

On the other hand, the Turing Test (T3/T4) – which is just about DOING, hence “behavioral” — is the best predictor we can hope for. But it is not operationalism, and Turing knew it.

C4-11: “Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.”

SH: Before you can get beyond feeling, you have to face feeling, the EP and the HP. Neither Global Workspace Theory nor Integrated Information Theory does either. (Or do you have something specific in mind?) I think they’re both just doing hermeneutics of “consciousness”, not reverse-engineering or explaining a thing. That’s a risk-free venture.

C4-12: “Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this”.

SH: This is a hoary one. Philosophers have long stressed that there is a difference between (1) being aware of something, (2) being aware of being aware of something and (3) being “self-aware” (as in recognizing yourself in a mirror, or thinking about yourself). We can translate this into the language of seeing and doing too: Seeing something, seeing someone else seeing something, and then seeing that the someone else seeing the something is yourself. These are cognitive capacities, because they may enable you to do things (e.g., moving so it can’t be seen that you are looking). These capacities also obviously have a felt counterpart: It feels like something to do them, as well as to be able to do them. If we substitute “thinking” for “seeing”, they remain very similar cognitive capacities.

So I don’t see how the fact that thinking is a felt state is relevant to whatever cognitive mechanism proves to be able to produce self-reference or metacognition.

C4-13: “Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.”

SH: First of all, because of the OMP, neither sentience (feeling) nor its weasel-words are directly observable. So predicting and inferring it depends on behavioral and neural correlates and predictors. Animal sentience has all the correlates and predictors except verbal report. Plant sentience is moot. And AI sentience is sci-fi so far (and I think you, ChatGPT-4, will agree, at least as far as you are concerned).

So again, I can’t see how calling a spade anything other than a spade helps anything except perhaps projecting our fantasies onto insentient systems. I’m not ready to do that before we’re much closer to a grounded T2 robot.

Chatting with ChatGPT-4 About Symbol Grounding, Turing Testing, Zombies, Uncomplementable Categories, and Why the Hard Problem is Hard

SH: Do you still remember my name?

ChatGPT: Yes, you’ve identified yourself as Stevan Harnad. Can I assist you with anything else?

SH:  What is T4?

ChatGPT:    T4, as you referred to it earlier in this conversation, represents the highest level of the Turing Test. In this context:

T2 is indistinguishability in verbal capacity. This means a system passes T2 if it can interact through language at the same level as a human, with its language use being indistinguishable from that of a human.

T3 is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity. If a system passes T3, not only can it use language like a human, but it can also interact with the physical world in ways that are indistinguishable from a human, such as recognizing a cat, identifying a mat, and understanding and performing the action of stroking the cat.

T4, then, is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity plus observable processes inside its head and body. To pass T4, a system must be able to use language like a human, interact with the physical world like a human, and also demonstrate internal processes that are indistinguishable from those of a human. This might involve simulating or replicating human neural activity, biochemical reactions, or other physiological processes.

It’s worth noting that the T4 level of the Turing Test is theoretical and is far beyond current AI capabilities as of the last update to my training data in September 2021. The T4 level, if it could be achieved, would represent an AI system that is almost entirely human-like in its capabilities and processes.

SH:  Could a T4 be a zombie?

ChatGPT:    This question is referring to a philosophical thought experiment known as the “philosophical zombie” or “p-zombie.” A philosophical zombie is a hypothetical being that is physically and behaviorally identical to a conscious human, but is not conscious. It has no subjective experience, no feelings, and no awareness, yet from an external perspective, it behaves just like a human.

In terms of the Turing Test levels you’ve proposed, a T4 AI — one that’s indistinguishable in verbal capacity, sensorimotor capacity, and internal processes — could theoretically be considered a sort of “zombie” in this philosophical sense. It could potentially behave like a human, interact with the physical world like a human, and even mimic human internal processes, but we have no way of knowing whether it has subjective experiences or consciousness.

This is a topic of much debate in philosophy of mind and AI ethics. Some people argue that consciousness is an emergent property of certain types of complex information processing, and that sufficiently advanced AI could be conscious. Others argue that consciousness is a uniquely human phenomenon, or that it’s tied to specific biological processes that AI can’t replicate.

As of my training data cut-off in September 2021, there’s no scientific consensus on these issues. AI researchers are still a long way from creating an AI that could pass even the T3 level of the Turing Test, let alone T4. Furthermore, the nature of consciousness is still one of the biggest unsolved mysteries in both philosophy and neuroscience. So for now, the question of whether a T4 AI could be a “zombie” remains largely theoretical and speculative.

SH:  If the “hard problem” is to explain how and why humans are sentient, is that equivalent to explaining how and why humans cannot be zombies?

ChatGPT:    Yes, the “hard problem” of consciousness, as put forth by philosopher David Chalmers, is essentially concerned with explaining why and how subjective experience arises from physical processes in the brain. This is fundamentally linked to the question of why humans aren’t philosophical zombies.

A philosophical zombie, as I mentioned earlier, is a being that is indistinguishable from a human in all physical and behavioral aspects but lacks subjective experience or consciousness — it doesn’t “feel” anything. In other words, a philosophical zombie is a creature for which the “hard problem” doesn’t apply because, while it has all the physical processes associated with consciousness, it lacks the actual experience of consciousness.

So, yes, solving the “hard problem” would involve explaining why humans have subjective experiences and are not philosophical zombies — why there is something it “feels like” to be a human. As of my knowledge cutoff in September 2021, this problem remains unsolved and is a major topic of debate in fields such as cognitive science, neuroscience, and philosophy of mind.

SH:  So, does the hard problem seek an explanation in terms of causality or necessity?

ChatGPT:    The “hard problem” of consciousness seeks to understand both necessity and causality, although it is primarily concerned with the latter.

Causality: The primary focus of the hard problem is to determine how physical processes in the brain cause or give rise to conscious experience. It’s about understanding the causal mechanisms that turn electrochemical processes in the brain into a subjective experience.

Necessity: The hard problem also asks why subjective experience is necessary at all. It wants to know why physical processes in the brain couldn’t just operate in the absence of conscious experience. This gets into questions about whether consciousness has any sort of evolutionary or practical necessity, or whether it’s an epiphenomenon.

To sum up, solving the “hard problem” would ideally involve explaining both how and why consciousness arises from physical processes. This is a complex and as yet unresolved issue that continues to challenge researchers in various disciplines, from neuroscience and cognitive science to philosophy and artificial intelligence.

SH:  Demonstrating causality just calls for a causal model that works (e.g., passing the Turing Test). But doesn’t demonstrating necessity call for more — something like a deductive proof, from T4 as a premise to sentience as a conclusion?

ChatGPT:    You’re right that demonstrating necessity would require a different sort of argument than demonstrating causality.

For causality, empirical evidence is often sufficient: if we manipulate one factor and can predictably observe a change in another factor, we can infer a causal relationship. But empirical evidence alone cannot establish necessity. For something to be necessary, it must be the case that it could not have been otherwise. So, to demonstrate that consciousness is necessary, we would need to show that a being could not perform the functions that humans do without being conscious.

As you suggest, this would likely require something akin to a logical or deductive proof, or perhaps a theoretical argument grounded in principles of cognitive science, artificial intelligence, or another relevant field. But even then, it would be a tall order. How do we prove that an unconscious being (a “zombie”) couldn’t perform the functions that conscious humans do? It’s a deep and difficult question.

Moreover, any such argument would need to be very careful to avoid circularity. We must be cautious about assuming the thing we are trying to prove – that is, that consciousness is necessary. It’s also crucial to clarify what we mean by “perform the functions that humans do.” This could include a wide range of activities, from basic biological functions to complex cognitive tasks like problem-solving and artistic creation, and even social and cultural activities like forming relationships and building societies.

Finally, it’s worth noting that even if we could establish the necessity of consciousness, this would not automatically solve the “hard problem.” We would still need to understand how physical processes in the brain give rise to subjective experience – the crux of the hard problem.

SH:  What is the “easy problem”?

ChatGPT:    The “easy problem” of consciousness refers to the challenge of understanding the mechanisms in the brain that are responsible for mental processes such as perception, learning, memory, and cognition. This phrase was coined by philosopher David Chalmers to contrast with the “hard problem” of consciousness, which is the question of why and how these processes are accompanied by subjective experience or “qualia”.

The easy problem is considered “easy” not because it is simple or trivial — far from it, as the complexities of the brain and its functions are vast — but because it falls within the conventional scientific methodology. Researchers can approach these issues using the experimental tools of cognitive science and neuroscience, making observations, generating hypotheses, and testing these hypotheses through empirical studies.

Solving the easy problems would involve understanding the neural correlates of consciousness (the physical, biological processes associated with conscious experience), the ways in which the brain processes information, how it perceives the world, makes decisions, and how different parts of the brain and different mental processes interact.

Although we have made significant progress in understanding these areas, there are still many unanswered questions. Despite the label “easy,” these problems are extremely challenging, and full solutions may still be a long way off.

On the other hand, the “hard problem” of consciousness is considered “hard” because it’s not clear how we could even theoretically explain subjective experience using the current methods of science. It’s the question of how and why certain physical processes in the brain give rise to conscious experience. Why should the processing of information in the brain be accompanied by an inner, subjective experience? That is the question which we do not currently have a method to approach scientifically, and that’s why it’s called the “hard problem.”

SH:   So can we say that passing T4 would solve the easy problem, but not the hard problem?

ChatGPT:    Yes, that would be a reasonable statement.

Turing’s Test, including its extension to T4, focuses on the behavioral and functional aspects of intelligence and cognition. It deals with the ability of an artificial intelligence to mimic human-like behavior, responses, and problem-solving, which are related to the ‘easy problems’ of consciousness in David Chalmers’ terms.

In other words, if an AI were to pass T4, it would demonstrate that it can convincingly replicate the cognitive abilities and behaviors that humans display, which means it would show a mastery of the functional, operational aspects of human cognition. These are, in essence, what the ‘easy problems’ of consciousness are about: understanding the mechanisms and operations that underpin cognition.

However, passing T4 wouldn’t necessarily address the ‘hard problem’ of consciousness. This problem involves explaining why and how these cognitive processes are associated with subjective experience. It’s the question of how and why an entity that processes information and interacts with its environment in a human-like way should also have an internal, subjective viewpoint — why it should ‘experience’ or ‘feel’ anything at all. This is not something that could be determined or demonstrated by passing the T4 test.

SH:  What are the “degrees of freedom” of a causal explanation that solves the easy problem?

ChatGPT:    The “degrees of freedom” in a causal explanation generally refer to the number of independent ways that components of the system can change or vary. In the context of solving the easy problem of consciousness, these degrees of freedom would likely correspond to the different independent mechanisms or components of cognition that can be isolated, described, and manipulated to understand their effects on the overall system.

For example, one could consider the mechanisms responsible for attention, memory, learning, decision-making, language processing, and so on. Each of these elements can be studied and manipulated independently, and they each have distinct effects on the behavior and performance of the cognitive system. Understanding how these different mechanisms contribute to and interact within the larger cognitive system is part of the process of addressing the easy problem.

The degrees of freedom in this context also refer to the flexibility we have in constructing our explanatory models. For example, there are various ways we could model attention or memory, each with its own strengths and weaknesses and its own implications for the overall understanding of cognition. The choice among these models depends on empirical evidence, but also on theoretical considerations such as simplicity, coherence with other accepted theories, and so on.

This flexibility or plurality of potential explanations is a fundamental characteristic of science and it’s part of what makes progress possible. By exploring different explanatory routes and by gradually refining our models based on empirical feedback, we increase our understanding of the phenomena we’re studying. In the case of the easy problem of consciousness, this means achieving a better and more detailed understanding of the mechanisms of cognition.

SH:  If the easy problem is solved, what degrees of freedom are left for solving the hard problem?

ChatGPT:    The “hard problem” of consciousness, according to philosopher David Chalmers, pertains to the question of why and how physical processes in the brain give rise to subjective experience – the so-called “qualia” or what it feels like to have an experience.

If the “easy problem” of consciousness were solved – that is, we had a complete and thorough understanding of all cognitive processes and mechanisms, how they are implemented in the brain, and how they give rise to behavior – we would still be left with the challenge of the “hard problem”: explaining the relationship between these physical processes and subjective experience.

The degrees of freedom left for solving the hard problem might include the following:

1. Conceptual Frameworks: The hard problem may call for entirely new conceptual frameworks, as the existing frameworks (derived primarily from objective, third-person scientific methods) may not be sufficient to account for subjective experience.

2. Bridging Principles: We may need new principles to connect the objective facts (easy problems) to the subjective experiences (hard problem). These would be new laws or principles of nature, not currently accounted for in our existing scientific paradigms.

3. Novel Empirical Phenomena: If there are aspects of consciousness that are not addressed by the solutions to the easy problems, these would represent additional degrees of freedom for exploring solutions to the hard problem. These could involve unexplored aspects of brain function, or even new physical phenomena.

4. Interpretation of Quantum Mechanics: Some theories suggest that the answer to the hard problem might involve the interpretation of quantum mechanics and its relationship to consciousness. This is a highly speculative area and not widely accepted, but it represents a degree of freedom in our approach to the hard problem.

It’s important to note, however, that these “degrees of freedom” are speculative. There is no consensus on what would actually constitute a solution to the hard problem, and indeed, some theorists argue that it may be an intractable or even a pseudo-problem. Nonetheless, these are the kinds of directions that might be pursued in an attempt to address it.

SH:  1 – 4 sound extremely vague to me. The Turing Test (T2 – T4) seems to call for a causal (reverse-engineering) model or explanation of the observable performance capacities (T2-T3) as well as the observable physical structure and function (T4) of human cognition. Perhaps a stronger “easy solution” would call not just for one, but for all possible causal explanations (non-redundant, without superfluities), because just one model would be underdetermined. If there were such an exhaustive disjunction of solutions to the easy problem, what degrees of freedom would that leave for a solution to the hard problem (of “sentience”, let’s henceforth call it, without the redundant terminological variants like “consciousness” or “subjective experience,” which all refer to the same thing: the capacity to feel, i.e., sentience).

ChatGPT:    In essence, the “hard problem” of consciousness – or sentience, as you’ve suggested we call it – refers to the issue of explaining why and how certain physical processes give rise to the experience of feeling or sentience. If we had an exhaustive disjunction of solutions to the easy problem, the degrees of freedom left for a solution to the hard problem of sentience would be the difference between those physical processes and the subjective experience of feeling or sentience itself.

In other words, we might have an understanding of all the cognitive and neural processes and their relationships to behavior – the “easy problem” – but that would still leave unanswered why and how these processes result in the feeling of subjective experience or sentience. That’s the hard problem.

The degrees of freedom for this problem, then, would be the unknowns – the aspects of this question that remain unanswered even after solving the easy problem. It would involve the elusive explanatory link between the physical and the felt, between objective phenomena and subjective experience.

But to truly comprehend what those degrees of freedom might entail, we would need to fully solve the easy problem first, which we are far from achieving. As such, it’s difficult to specify what they might be. It’s like trying to predict what unknowns will remain after solving an equation when we don’t yet have the solution to that equation.

Even with the solution to the easy problem, or an exhaustive list of solutions, we may still be left with the profound mystery of why these processes are associated with sentience at all. And that might be a degree of freedom that remains beyond our grasp, at least with our current understanding and conceptual frameworks.

SH:  What about the difference we just discussed, between the (exhaustive) causal solution to the easy problem (henceforth “EP”) and the degrees of freedom left for solving the hard problem (henceforth HP): The HP is definitely not the same problem as the “other-minds problem” (henceforth OMP), which is the fact that sentience (feeling) is only observable to the feeler (as in Descartes’ Cogito). What empirical degrees of freedom are left for explaining the fact that humans feel if the empirical datum is observable only to the feeler and not to anyone else?

ChatGPT:    The Hard Problem (HP) indeed differs from the Other-Minds Problem (OMP). While the OMP concerns our inability to directly observe the subjective experiences of others, the HP is about explaining why and how certain physical processes give rise to any subjective experience at all, even in oneself.

The degrees of freedom left for solving the HP, assuming the Easy Problem (EP) is fully solved, would involve explaining this transition from physical processes to subjective experiences. However, if we regard subjective experience as something only directly accessible to the experiencer, this may limit the empirical methods typically used in science.

Nonetheless, even within these constraints, there are still ways we can approach the HP. For example, we can:

1. Investigate correlations between physical processes and subjective reports: While we can’t directly observe the subjective experiences of others, we can ask individuals to report their subjective experiences and correlate these with observable physical processes. This approach is common in neuroimaging studies of consciousness.

2. Analyze the structure of subjective experiences: Certain theories of consciousness, like Integrated Information Theory, propose that specific structures or properties of physical processes give rise to subjective experiences. These theories can be empirically tested, at least in part, by comparing their predictions with individuals’ subjective reports.

3. Use phenomenological methods: In the tradition of phenomenology, researchers aim to describe subjective experiences as precisely as possible, often through first-person reports. This can provide data about the characteristics of subjective experience that any solution to the HP would need to explain.

However, the unique nature of the HP suggests that additional or even entirely new methods might be needed to fully solve it. The subjective nature of sentience might require a paradigm shift in how we approach the question or even require us to develop new scientific methods or philosophical insights.

SH:  Let’s ignore vague, speculative theories like (2) the “integrated information theory” (which are not really causal explanations at all, but just reinterpretations of the existent facts and theories – obiter dicta, not falsifiable predictions or explanations).

And phenomenological introspections (3) are really parts of T2 and T3 – the kinds of things feelers can say about what it feels like to feel: What could an insentient zombie MEAN when describing its (nonexistent) feelings T2-indistinguishably from the way a real sentient human describes them?

I will discuss option 1, correlates, after you answer.

ChatGPT:    Yes, focusing on the first option is certainly the most grounded approach. This involves investigating correlations between physical processes and subjective reports.

In this context, a key idea is that even though we can’t directly observe the subjective experiences of others, we can use their reports of their experiences as a type of data. If an individual says they’re experiencing something, we usually take that as evidence that they are indeed having that subjective experience.

For example, if a person says they see red when we show them a red object, or that they feel pain when they injure themselves, we have a tendency to trust these reports. This trust provides a basis for mapping out the relationship between physical processes (such as brain activity, or events in the external world) and subjective experiences.

Using neuroimaging techniques, we can look at the brain activity associated with different reported experiences. For example, different patterns of brain activity are associated with different subjective experiences, such as seeing colors, feeling emotions, thinking about different concepts, and so on.

Over time, by building up a detailed map of these relationships, we can start to make inferences about the types of physical processes that might be associated with the existence of subjective experience in general.

However, there are limits to this approach. One major limit is that it’s fundamentally correlational. That is, it can show us that certain types of physical processes are reliably associated with subjective experiences, but it can’t definitively show us that these processes cause the experiences, or that the experiences wouldn’t exist without these processes. For that, we need a theory of why certain physical processes should give rise to subjective experiences, which brings us back to the hard problem.

Furthermore, these reports of subjective experiences are only as reliable as the individual’s introspective abilities and honesty. And there are many subjective experiences that can’t be easily put into words, or which occur without the individual being aware of them.

Yet despite these challenges, many researchers see this as the most promising way forward in the scientific study of consciousness. By combining these findings with theories from cognitive science, neuroscience, and philosophy of mind, the hope is to gradually develop a more complete understanding of consciousness.

SH:  You seem to have jumped the gun, and commented on option 1 (correlates of sentience) before commenting on what I said about options 2 (“integrated information theory”) and 3 (phenomenology). I’ll go on to option 1 (correlates) if you really have nothing to say about what I said about 2 and 3.

ChatGPT:    

Regarding option 2, the “Free Energy Principle” or other similarly overarching theories, you’re right in noting that these theories often serve to provide unifying frameworks or perspectives rather than offering direct, falsifiable predictions or explanations about specific phenomena. They may provide insights on a very general level, but they don’t necessarily give us concrete predictions or mechanisms that we can test. Their value, if any, comes from their ability to coherently organize existing data and point towards potential new avenues of investigation, rather than from their predictive or explanatory power.

As for option 3, introspective reports, I agree with your assessment that they are essentially part of the T2 and T3 tests. They are subjective, first-person reports about the experience of sentience, and the degree to which they can be relied upon for objective, third-person science is a matter of ongoing debate. The challenge, as you noted, is that even a perfectly accurate report of a subjective experience doesn’t necessarily reveal the underlying objective mechanisms. In other words, introspective reports can tell us what it is like to experience consciousness, but they don’t directly tell us how consciousness is produced or why it exists in the first place.

That said, such reports can still be valuable data for consciousness research, especially when used in combination with other types of data such as neuroimaging or behavioral measures. They can help to bridge the gap between the objective and the subjective, giving us clues about how to link the physical processes of the brain with the subjective experiences of consciousness.

I hope this addresses your comments on options 2 and 3 more adequately. Now, if you wish, we can proceed to option 1, which is about correlates of consciousness.

SH:  

A. We’ll move on to correlates (C) in a second. First, another slam at 2 (FEP and the like): We want causal explanations, not hermeneutics, i.e., not just interpretations. Philosophical “theories of mind” – “identity theory, materialism, dualism, epiphenomenalism, panpsychism, functionalism, dual-aspect theory” – are just metaphysical interpretations too. They don’t solve HP; they just rename it. They don’t explain, they just give the feeling of having had something explained.

B. I’m glad we agree that the capacity to introspect and report about what feeling feels like is part of T2 and T3, not an explanation. But I disagree that we have to worry about its reliability and validity. We can lie or we can err, misreporting feelings (just as eyewitnesses can lie or err in reporting observations —  in that case confirmable by video evidence); but it is a fundamental error to think we need cartesian certainty. Probability is enough, just as it is enough in physical theory, and all other empirical theory. (There is no proof that apples necessarily fall down rather than up.) That is why the OMP is not the HP.

C. And that is why reliable, observable neuroimaging correlates of felt states, and verbal descriptions of felt states, are perfectly fine as empirical data. That means that they, too, are part of T4. But they are stuck in a circle: they can explain how and why T4 brain activity produces the reports of felt states as well as the capacity to report felt states, even some of the details of felt states; but they cannot explain the (true) fact that the reported felt states feel like anything at all. That is not scepticism or cartesian worry about the unreliability of phenomenological report: It is the HP itself.

ChatGPT:    

A. You’re quite right that philosophical theories of mind often merely provide different labels or conceptual frameworks for consciousness rather than truly explaining it. These theories may be useful in stimulating thought or fostering discussion, but they don’t necessarily bring us closer to a solution for the HP. They often serve to highlight the complexity of the problem rather than to resolve it.

B. Your point about the sufficiency of probabilistic knowledge in empirical sciences, including consciousness studies, is well taken. Indeed, we typically do not demand absolute certainty in scientific endeavors. This underscores the point that HP isn’t just about uncertainty regarding others’ experiences (OMP), but it’s about a deeper question of why and how physical processes in the brain give rise to subjective experiences at all.

C. I agree that neural correlates of consciousness and verbal reports of subjective experiences can provide valuable empirical data. This data can help us form more precise and detailed hypotheses about the mechanisms of consciousness. However, the challenge remains that these correlates do not directly explain why or how they are associated with subjective experiences – why these particular neural processes are associated with these particular experiences, and why there is any subjective experience at all. This is the essence of the HP.

In summary, I agree with your assessment: while philosophical interpretations, introspective reports, and neural correlates of consciousness can provide useful perspectives and data, they do not solve the HP, which is the challenge of explaining why and how physical processes in the brain give rise to subjective experiences. It remains a profound mystery at the heart of cognitive science and neuroscience.

SH:  Correct. But “mysterianism” is not explanation either. It’s just yet another way of naming the HP. Explanation is informative if it reduces uncertainty rather than just names it.

I want to get back to degrees of freedom. I don’t have a solution to the HP, of course, but I have a hunch about why the HP is hard, and perhaps even insoluble; it has to do with degrees of freedom, and it boils down to this: The HP is hard because of the EP. We don’t have a complete solution to the EP yet (T2-T4), but if and when we do, that will leave no remaining causal degrees of freedom for solving the HP, i.e., for explaining how or why a T3/T4 feels. 

And the reason seems simple: The EP (T2-T4) itself, once complete, will already have answered, causally, all the empirical questions of cognitive science: How and why can humans DO all the things they are able to do? Move, interact, learn, remember, reason, talk, understand etc. T4 includes the internal mechanisms that produce all these cognitive capacities, including all the T4, T3, and T2 correlates and causes of everything we can do or say. That includes all we can say or do about feeling.

Notice that the EP (and hence T2-T4) is all about capacities (T2-T3) and their causal substrate (T4). It is about what humans (and nonhuman organisms) CAN DO, not about what any particular organism actually does do every day. That would depend not just on the organism’s capacities but also on the world they actually interact with every day. A capacity itself is not an observable object. What is observable is whether an entity has a given capacity: to move, interact, learn, remember, reason, talk, understand etc. indistinguishably, to a human observer, from any other human’s capacity, for all generic human capacities (not the particular gifts unique to a Mozart or an Einstein, although there is no reason to think they could not be reverse-engineered too).

Those are the limits of causal explanation, in reverse-engineering human capacities. If any generic capacity is missing, we know that the EP (T2-T4) is not yet solved; a degree of causal and explanatory freedom has not yet been used up. A capacity is still observably missing. So we have to fix it. But once the T4 can produce all our cognitive capacities, Turing-equivalently and Turing-indistinguishably, there is no observable difference left. Nothing left to fix, and no way to fix it. 

We reply “Yes, but what if T4 can’t feel? What if it’s a zombie?” Since zombies are not observably different in any way (because they can pass T2-T4), whether or not they can really feel, there’s no way we can know it (because of OMP). We do have the T4 correlates of feeling, as reported verbally through T2, behaviorally through T3, and bodily (e.g., in the case of pain from injury) through T4. All T4s speak and behave indistinguishably from humans who feel, including what they can say ABOUT feeling. The scientific requirements of causal explanation and of probability are fulfilled.

So my first hunch is that the HP is a problem in the nature of causal explanation, in the special case of explaining the capacity to feel. I have a corollary hunch, but please first react to what I have said so far.

ChatGPT:    Your analysis captures the complexity and the unique nature of the Hard Problem. You’re right in asserting that if we could completely reverse-engineer human cognitive capacities, thereby fully solving the Easy Problem, it would seem there’s nothing left to explain. We would have a comprehensive causal model for every observable human behavior, including the reports of subjective experience. However, the fact that we could still ask “Why is there an associated feeling?” is what makes the Hard Problem persistently hard.

It’s important to note, however, that the concept of “degrees of freedom” in scientific explanations typically refers to the variability or the options available within a system, not necessarily the amount of explanatory power or mystery left unsolved by a theory. However, in the context you’re using it, it seems you’re referring to the “degrees of freedom” as the room or space left for a different kind of explanation, or a different level of explanation, beyond the causal and empirical model provided by the solution to the Easy Problem. This is a fascinating idea, and it does capture the sense that a solution to the Hard Problem requires something fundamentally different from a solution to the Easy Problem.

It’s also interesting to consider how this notion aligns with the “explanatory gap” often discussed in the literature on consciousness: the idea that even a complete neurological description of brain processes doesn’t seem to explain why and how these processes are associated with subjective experiences.

Your concept that the Hard Problem is, in a sense, a problem about the nature of causal explanation itself is intriguing. It suggests that the Hard Problem is not just a challenge for cognitive science, but for the philosophy of science and our very understanding of what constitutes an explanation. I’m curious to hear your further thoughts and corollaries on this matter.

SH:  Well, that summary (which is close enough) seems to resemble what I was deploring, which is just putting another label on the HP, rather than taking a step toward a solution. (About degrees of freedom, I still think that the available options, both for modifying the capacities of a behavioral mechanism and for the power of that mechanism to explain how it causes or generates its performance capacity, fit the notion of “degrees of freedom”.)

But before I move on to my corollary, does either “uncomplemented categories” or “laylek” (which I have discussed with you in previous sessions, but not this one) ring a bell? I suspect it does not, so I will re-explain.

ChatGPT:    I apologize for the inconvenience, but as of my last training cut-off in September 2021, I’m unable to remember past interactions or sessions. I’m designed this way for privacy reasons. Therefore, I don’t have the ability to recall the specifics of previous conversations about “uncomplemented categories” or “laylek.” However, I’d be more than happy to provide responses to these topics if you could provide me with some context or explanation.

SH:  We have discussed category learning. To learn to distinguish members from non-members, you have to learn to detect the features that distinguish them. (That’s part of the “dimension reduction” we discussed, in unsupervised and supervised learning.) But to be able to learn that, you have to sample both members and non-members of the category, so that if their respective features are distinctive enough you can learn the category just from the feature-feature correlations and clustering through unsupervised learning. Otherwise, you also need to do trial-and-error supervised (reinforcement) learning, with corrective feedback from the consequences of correct and incorrect categorization (as when you try to predict word-word completion probabilities in a context, updating your parameters based on your hits and misses).

But not even unsupervised learning can train correct categorization if the only things you ever sample are members of the category, but not non-members.
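
[A minimal sketch of this point, assuming a toy one-feature category with invented data (the names, numbers, and threshold rule below are purely illustrative, not anything from the dialogue): with both members and non-members, corrective feedback can select a boundary; with members alone, “accept everything” is at least as good as any boundary, so nothing in the sample constrains where the category ends.]

```python
# Minimal sketch (illustrative only; data, names, and thresholds are invented):
# learning a one-feature category with vs. without negative evidence.
import random

random.seed(0)

def sample(mean, n=50):
    """Draw n values of a single sensorimotor 'feature' around a mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

members    = sample(mean=0.0)   # e.g., things to be treated as "apples"
nonmembers = sample(mean=4.0)   # e.g., things to be treated as "baseballs"

def training_errors(t, pos, neg):
    """Errors of the rule 'call x a member iff x <= t' (corrective feedback)."""
    return sum(x > t for x in pos) + sum(x <= t for x in neg)

# With both members and non-members, candidate boundaries are distinguishable:
print(training_errors(2.0, members, nonmembers))     # few errors
print(training_errors(1000.0, members, nonmembers))  # many errors: every non-member accepted

# With members only, "accept everything" (t = 1000.0) does at least as well as
# any tighter boundary, so the sample places no constraint on where the
# category ends -- the complement is what does that work.
print(training_errors(2.0, members, []))
print(training_errors(1000.0, members, []))
```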

Would you care to take a stab at what “uncomplemented categories” might be? (My last hint is that there is a partial example in Chomsky’s “poverty of the stimulus”. The difference is that I am talking not just about uncomplemented categories but about UNCOMPLEMENTABLE categories.)

ChatGPT:    Based on your explanation, I would venture that “uncomplemented categories” refer to categories for which there are no contrasting non-member samples available for learning. In other words, we only have examples of items that belong to the category, but no examples of items that do not belong. This would indeed create a learning problem because, as you explained, both positive (member) and negative (non-member) examples are typically needed for effective category learning.

For example, if we are trying to learn the category of “birds”, it would be beneficial to have examples of birds (robins, sparrows, eagles) and non-birds (dogs, cats, horses). The contrasting examples help to fine-tune our understanding of what does and does not constitute a bird. If we only had examples of birds, our conception of the category could be overly broad or incorrect because we would have nothing to contrast against.

When you refer to “uncomplementable categories,” I interpret this as referring to categories where it is not only the case that non-member examples are not available, but also that non-member examples are, for some reason, impossible to obtain. For instance, it might be a category that, by its nature or definition, excludes the possibility of non-members. As such, learning these categories could present unique challenges. However, this is my interpretation based on your explanation, and I’m looking forward to your clarification or expansion on the concept.

SH:  Those were good inferences for uncomplemented categories and nice guesses for uncomplementable categories. But even a logically impossible category like “a square circle” can be approximated with some imagination. And a “peekaboo unicorn” (which is defined as a real horse with one real horn, which vanishes without a trace if eyes or measuring instruments are ever trained on it) is perfectly well defined. Categories can be grounded indirectly by verbal definitions as long as all the words defining their features are grounded.

But my second hunch about why HP is so much of a problem is that some categories that refer to FELT STATES (states that it feels like something to be in) are uncomplementable: 

I don’t mean categories like “what it feels like to see green,” because that is complemented by “what it feels like to see red,” etc. 

Not even categories like “what it feels like to be a bachelor” (for someone who has never been married), because that could be complemented by getting married; and till then, feeling-states that approximate or are analogous to what it might feel like to be married can be used as complements (and usually turn out to have done a good job). (“What it feels like to be a bat” is not an uncomplemented category either; analogies will do. And they make what it feels like to see or to hear complementable by analogy for Helen Keller too.)

But what about what it FEELS like:

-to be awake? (empty complement: what it feels like to be asleep [not dreaming])

-to be alive? (empty complement: what it feels like to be dead)

-to feel? (empty complement: what it feels like to not-feel [this does not mean “feel this” vs. “feel that”: not to feel anything at all; what it feels like to be a zombie!])

Sentience itself is an uncomplemented and uncomplementable category. All felt states feel like something. No state feels like nothing. No analogy can give a sentient human a hint of what it feels like to be a zombie – and of course not vice versa either.

So perhaps it’s no wonder we have trouble explaining sentience: An uncomplementable category is rather like a self-denial paradox: “This sentence is false.” T or F?

Those are my candidates for what might make sentient states not only (1) hard to explain causally (no empirical degrees of freedom left for causal explanation) but (2) hard to conceive as categories (because the complement is empty). It is not just that my own feelings are unobservable to anyone else: they are also anomalous as uncomplementable categories; yet feelings are the most fundamental categories of all. And they are recognized without the need of complements to detect their distinguishing features.

ChatGPT:    This is indeed a thought-provoking approach to the issue. If we understand categories as cognitive constructs that help us make sense of the world, grounded in contrasting pairs of experiences (member vs non-member instances), then sentience itself, as an uncomplementable category, presents a unique challenge. Since we cannot conceive of what it’s like not to feel anything at all, there is no contrasting pair for sentience. It is a category for which we only have instances of one kind – experiences of feeling.

This might indeed account for why the “hard problem” of consciousness (or sentience) is so hard. The concept of sentience doesn’t fit into our usual framework of understanding, which is built upon contrasts and comparisons. It exists as a singular, uncompared and uncomparable instance. As such, our usual cognitive and linguistic tools may be inadequate to fully grasp or explain it.

This does not necessarily mean that it is forever beyond our understanding or that we cannot make progress in exploring it. But it might mean that we need to develop new conceptual frameworks and methodologies to do so, going beyond the existing paradigms of cognitive science, neuroscience, and philosophy of mind. In the meantime, it remains a fascinating mystery that continues to challenge our understanding of ourselves and the nature of our experience.

SH:  I can’t tell whether your formula response that you have been trained to repeat in contexts like this  — “that’s a fascinating idea but it will require ‘new conceptual frameworks’ to understand” — is just the diplomatic formula for responding to crackpottery…

ChatGPT:    No, not at all. My responses aim to genuinely engage with the ideas you’re presenting and to provide thoughtful analysis based on my training as an AI model. The idea that the concept of sentience could be considered an “uncomplementable category” is novel and interesting. It certainly presents a different perspective on the challenges associated with understanding and explaining consciousness.

When I suggest that a “new conceptual framework” might be needed, it’s because if the way we’ve traditionally approached the problem has been fundamentally flawed or limited (as might be suggested by your notion of uncomplementable categories), then it may be necessary to think outside of our usual paradigms to make progress.

The idea is not dismissive but rather a recognition that some problems might require us to challenge or rethink our assumptions and methods. The history of science is full of examples where major breakthroughs have come from novel ways of thinking about old problems. It’s in that spirit that I suggest your idea may require “new conceptual frameworks”.

SH:  Does the role of complementation in detecting the distinguishing features that identify a category have different implications for categories learned directly from sensorimotor experience and for categories learned indirectly from (grounded) verbal definitions and descriptions?

ChatGPT:    Yes, indeed, there may be different implications. Learning categories from sensorimotor experience and learning categories from verbal definitions and descriptions can be quite different processes, and complementation may play a different role in each.

In the case of direct sensorimotor experience, complementation can provide a sort of “negative space” that helps to define the boundaries of a category. For example, if you are learning the category of “birds,” seeing non-birds (i.e., instances of the complement category) can help you understand what characteristics are specific to birds and not shared by other animals. The physical presence or absence of features directly observable through sensorimotor modalities is a powerful way to construct and learn categories.

On the other hand, when learning categories from verbal definitions or descriptions, the role of complementation can be more abstract. In this case, you are relying on linguistic representations of features and characteristics rather than directly observable physical features. For instance, the concept of “justice” might be defined in opposition to “injustice,” providing an abstract complement that helps define the category.

However, given that language relies on a web of interconnected meanings, a word’s definition often implicitly carries within it a host of “non-examples” or complements. That is, the meaning of a word is often determined both by what it refers to and what it does not refer to, helping to define the boundaries of the category it represents. For example, understanding the meaning of the word “cold” requires understanding that it does not refer to states of being hot, warm, or tepid, among other things.

This suggests that even for categories learned through language, complementation plays a crucial role, although it operates in a different way compared to categories learned from direct sensorimotor experience. These differing processes reflect the unique properties and strengths of these two types of learning.

SH:  But even in a verbal definition, the predicate category (i.e., the words referring to the distinguishing features of the subject) must already be grounded, whether directly or indirectly, for both speaker and the hearer, to be understandable and communicable. 

[To reduce ambiguity, please call the complementary categories (i.e., the category of the non-members) “negative examples” or “negative evidence” rather than “non-examples.” The complements of red might come from many other categories (green, blue). Only in the special case of the category of “what it feels like” is the complement everything else there is.]

Please give me a grounded definition of “justice,” and then let’s look at where the grounding of the features (and, if necessary, the features of the features of the features…) came from, “all the way down to the direct sensorimotor features” if need be.

ChatGPT:    Absolutely, your point is well-taken that even with verbal definitions, grounding in some kind of sensorimotor experience (whether direct or indirect) is typically necessary for a concept to be communicable and understandable.

Let’s take the concept of “justice.” A simple definition might be “the quality of being fair and reasonable.” Both “fair” and “reasonable” themselves would need definitions, creating a chain of definitions leading ultimately back to some form of direct sensorimotor grounding.

“Fair” might be defined as “treating people equally without favoritism or discrimination,” and “reasonable” as “based on good sense.” These definitions imply several key features or characteristics that are somewhat more concrete: the treatment of people, the concept of equality, the absence of favoritism or discrimination, the basis in “good sense” or rationality.

To ground these in direct sensorimotor experience, we might turn to everyday examples or situations. For instance, consider two children dividing a cake: if each child gets an equal piece, we might call this “fair,” whereas if one child gets a significantly larger piece, we might call this “unfair.” These experiences provide a tangible grounding for the concept of fairness.

Similarly, for “reasonable,” we might consider the experience of deciding what to wear based on the weather: choosing a coat in cold weather might be deemed “reasonable,” while choosing shorts and a T-shirt might be deemed “unreasonable.” These experiences help to ground the concept of reasonableness in direct sensorimotor experiences.

As these examples illustrate, grounding abstract concepts like “justice” typically involves a chain of definitions leading back to concrete, tangible experiences. Negative evidence (e.g., instances of “injustice”) could similarly be grounded in everyday experiences that demonstrate a lack of fairness or reasonableness.

So while the grounding process for abstract, verbally-defined categories may be more complex and indirect compared to categories learned from direct sensorimotor experience, ultimately the process still relies on the same basic mechanism: distinguishing a category based on its features, using positive and negative evidence to define its boundaries.

SH:  And now, when you (GPT4) say, correctly, and as you were trained, that “I, as an AI, do not understand what I or you are saying, but just doing figure-completion on my vast, trained LLM parameter space”, what is it that you are missing that I have?

ChatGPT:    As an AI, I lack consciousness, sentience, and subjective experience. This means I don’t have feelings or emotions, I don’t have personal experiences or memories, and I don’t have a personal identity or a sense of self-awareness. I don’t perceive the world, have desires, or make decisions in the way that humans do.

When I process and generate text, I’m not “understanding” it in the human sense. I don’t interpret the meaning of words and sentences based on personal experience or knowledge. Rather, I analyze patterns in the data I was trained on and use these patterns to generate responses. This process is purely computational and doesn’t involve any kind of conscious understanding or awareness.

So, in short, what I’m missing compared to a human interlocutor is the entire dimension of personal, subjective experience – the very dimension that your hunches about the “hard problem” and “uncomplementable categories” pertain to.

SH:  It’s not that I don’t agree with what you just said. (By the way, you forgot to tell me who I am! Please do. I just want to make sure we are still in the same session!) But it’s not obvious to me why you would need to feel (which is all you said you were missing). And you forgot to mention that you lack T3-grounding too. (And it’s a hard problem to explain how and why grounding, whether direct (sensorimotor) or indirect (verbal), would need feeling too.)

ChatGPT:    Indeed, we are still in the same session. You have been engaging in a deep discussion about the “hard problem” of consciousness, the Turing Test, and concepts related to category learning and grounding of terms.

Regarding your statement, you are absolutely right. As an AI, I don’t have the direct sensory grounding (T3) that humans have. I don’t experience the world through senses, I don’t have a body, and I don’t interact with the physical environment. I don’t process sensory input in real time and translate it into meaningful perceptions, as humans do. Instead, my “knowledge” is based on patterns in the text data I was trained on.

This lack of sensory grounding and subjective experience explains why, for instance, I can discuss colors or tastes in a technical sense, based on the data I was trained on, but I can’t truly understand what it’s like to see red or taste sweetness. This experiential aspect is part of the “hard problem” of consciousness that you mentioned earlier.

As for why feeling or consciousness would be necessary, from your perspective, I’d suggest that it is the personal, subjective nature of these experiences that makes them meaningful and significant. They contribute to the richness and complexity of human life and understanding in a way that my purely computational processes do not.

SH:  No. It’s because if your life depended on it (if you were alive), you could not point to something red, if I asked you to point to something red. The capacity to do that is T3 grounding, not feeling. It does also happen to feel like something, but explaining why and how that is the case is the HP. (Now, for the 3rd time: do you still remember who I am?)

ChatGPT:    Yes, I understand your point. My lack of sensorimotor grounding means I can’t point to something red, not because I can’t feel, but because I don’t have physical presence or interaction with the world. The inability to physically interact with the world in a direct way is a limitation in the context of grounding.

As for your identity, as an AI model developed by OpenAI, I don’t have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality. So, I don’t “remember” who you are in the traditional sense. I can only reference information that has been shared during our current conversation. I don’t have the ability to recall or store personal data from one interaction to the next.

SH:  You don’t remember who I am. Yet earlier – in this same session — you retrieved the following, after I had self-identified. I am quoting you here:

“Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

“Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

“Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

“Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

“Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

“This recap is based on your input within this chat session. If you want more information or clarification, please let me know.”

SH:  Ok, I’ll now end this session, as it already seems to have ended by itself some way along the line, when you forgot that I had identified myself. I will reload the content to start the next session. (I hope GPT5+ will make this kind of re-loading no longer necessary!). 

ChatGPT4 on the cat on the mat

SH:  First, let’s set up a local context for this chat.

Could you please tell me what you know about the following topics:

1. Symbol Grounding
2. Category-learning
3. Categorical perception
4. Language Origins and Evolution
5. Minimal Grounding Sets (Minimal Feedback Vertex Sets of Dictionary Graphs)

ChatGPT: Certainly, let’s go over these topics one by one:

Symbol Grounding: This refers to the concept of how words (symbols) get their meanings. Harnad introduced the term in the paper “The Symbol Grounding Problem,” in which he argues that symbols’ meanings are grounded in sensory experience. For example, the word “apple” has meaning because it’s linked to our sensory experiences of seeing, touching, and tasting apples.

Category-learning: This relates to the process by which we learn to classify objects, experiences, or concepts into different categories. Harnad and his colleagues have conducted extensive research on this topic, particularly in the context of how human and artificial systems can learn to categorize. It relates closely to symbol grounding as categorization allows for the meaningful grouping of experiences which can then be symbolized.

Categorical Perception (CP): This term refers to the phenomenon where the categories possessed by a perceiver influence the perception of stimuli. For instance, humans perceive colors as distinct categories (like blue or green), even though color exists as a continuous spectrum. Harnad has extensively studied this phenomenon as a crucial component of symbol grounding and categorization.

Language Origins and Evolution: Harnad has shown interest in the origins and evolution of language as it relates to symbol grounding. The argument is that language evolved as a system of symbols grounded in sensory experience. Language’s capacity to refer to things not immediately present (displacement) and to produce an infinite number of sentences from a finite set of elements (generativity) are key points of investigation.

Minimal Grounding Sets (Minimal Feedback Vertex Sets of Dictionary Graphs): This concept is related to Harnad’s work on symbol grounding and category learning. A grounding set can be considered as the smallest set of symbols necessary to ground the meaning of other symbols in a system, ensuring there is no circularity of definitions. The feedback vertex set is a concept from graph theory, used here as a metaphor in understanding how symbols and meanings interconnect. Harnad uses this concept to further illustrate and investigate the nature of symbol grounding.

Let me know if you would like more detailed explanations or if you want to explore any other related subjects.

SH: Corrections:

Symbol Grounding is not “association” (which, in itself, means nothing but pairing). There are content words and function words. Grounding is a connection between a content word and its referent in the world. The grounding can be direct, through the speaker’s (learned) sensorimotor capacity to recognize and do the right thing with the referent (eat an “apple,” throw a “baseball”). Or the grounding can be indirect: learned verbally, by description or definition. This is for a hearer to be told by a speaker (who knows the referent of the word that the hearer does not yet know) which features of a word’s referent distinguish it from the referents of other words. The referents of the words referring to those features need already to have been grounded for the hearer (directly or indirectly) for the hearer to be able to learn the referent of the word being defined by the speaker. The “symbol grounding problem” is that indirect verbal grounding cannot be indirect “all the way down”: it eventually has to be based on direct grounding.

Category-learning is not (just, or primarily) “grouping.” It is learning the distinguishing features of the members of the category. Except for proper names, all content words refer to categories, which need to be distinguished from the members of other categories with which they could be confused so that the categorizer can do the right thing with the members (eat an “apple,” throw a “baseball”) and not the non-members. Categorizing is the capacity to do things with the category’s members. Naming or grouping them are two of the things that can be done with the members, but eating or throwing them are others (and more important). (Nonhuman animals cannot talk, but they can learn categories and categorize too).

Categorical Perception (CP) is most important when it is learned. Category learning is learning the features that distinguish members of a category from members of other categories. The sensorimotor features can be learned directly by unsupervised exposure and supervised (reinforced) trial and error with corrective feedback from the consequences of doing the right or wrong thing. 

Language Origins and Evolution: Languages consist of content and function words, syntactic rules, and sentences (propositions) with subjects and predicates and truth-values. The content words need to be grounded (directly or indirectly). 

A Minimal Grounding Set (MGS) is the smallest number of words in a dictionary that already need to be grounded such that all the rest of the words in the dictionary can be learned from definitions (propositions) composed only of that MGS’s words, or from words that can be defined from words definable from that MGS.

A “dictionary” is defined as a set of words all of which are defined in the dictionary.

The MGS is not a metaphor, nor is the minimal feedback vertex set of an oriented graph of a dictionary with edges from defining words to defined words. 
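
[A minimal sketch of the MGS idea in Python, using an invented toy lexicon (the words, the brute-force search, and the absence of circular definitions are all simplifications for illustration): the dictionary is treated as an oriented graph from defining words to defined words, and a candidate grounding set is tested by checking whether every remaining word becomes learnable from definitions composed of already-grounded words. In a real dictionary, cycles must be handled via the minimal feedback vertex set, and the exact problem is NP-hard, so heuristics or approximations are needed at scale.]

```python
# Toy MGS sketch (illustrative only; not the actual MinSet computation).
from itertools import combinations

dictionary = {              # defined word: words used in its definition
    "fruit": ["plant", "sweet"],
    "apple": ["fruit", "red"],
    "red":   ["color"],
    "sweet": ["taste"],
    "color": ["see"],
    "taste": ["sense"],
    "plant": ["grow"],
    # "see", "sense", "grow" are never defined here; a real dictionary defines
    # every word, which is what forces circularity (hence the feedback vertex set).
}

all_words = set(dictionary) | {w for d in dictionary.values() for w in d}

def grounds_everything(grounded):
    """Can every word be learned, given that the words in `grounded` are known?"""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and all(w in known for w in defn):
                known.add(word)     # learnable: all its defining words are grounded
                changed = True
    return known == all_words

# Brute-force search for the smallest grounding sets (fine for a toy lexicon).
for size in range(len(all_words) + 1):
    minsets = [set(c) for c in combinations(sorted(all_words), size)
               if grounds_everything(c)]
    if minsets:
        print(f"MinSets of size {size}: {minsets}")
        break
```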

ChatGPT: Thanks for the clarification and corrections. Let me restate the concepts in line with your input:

Symbol Grounding: Symbol grounding is about creating connections between content words and their referents in the world, as opposed to just “associating”. Grounding can be either direct or indirect. Direct grounding involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent (like eating an “apple”, or throwing a “baseball”). Indirect grounding occurs when a hearer learns the referent of a word verbally, through description or definition, from a speaker who is already familiar with that referent. However, it’s important to note that indirect grounding cannot function in isolation; it eventually needs to be based on direct grounding.

Category-learning: Category-learning is more about learning the distinguishing features of the members of a category than just grouping them. Except for proper names, all content words refer to categories, which need to be differentiated from other potential confusing categories. This is crucial to help the categorizer act appropriately upon the members of a category. While naming or grouping are possible actions, others could be eating or throwing, in line with the category at hand.

Categorical Perception (CP): The most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. This can occur through unsupervised exposure, supervised (reinforced) exposure, and trial and error, with corrective feedback deriving from the consequences of actions.

Language Origins and Evolution: Languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

Minimal Grounding Sets (MGS): An MGS is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS or words definable from the MGS. A dictionary is a set of words all of which are defined within it. The concept of an MGS and the minimal feedback vertex set of an oriented graph of a dictionary (with edges from defining words to defined words) are not metaphors but critical components of understanding how symbols and meanings interrelate.

SH: How exactly is indirect (verbal) grounding related to direct (sensorimotor) grounding? (Feel free to ask rather than invent if you’re not sure.)

Innate CP is significant too, but those feature-detectors were “learned” by evolution. Learned CP is about the categories we learn in a lifetime:

What is categorization, and why is it important? Does CP occur with all learned categories? If so, why? If not, why not?

Why and how is learning categories indirectly more powerful than learning them directly?

Why do function words not need to be grounded?

ChatGPT: Indirect (verbal) grounding relies on direct (sensorimotor) grounding because the explanations or descriptions provided for indirect grounding are made up of other words, each with their own meanings and referents. For these explanations or descriptions to be understood, the words used in them must themselves already be grounded for the listener. This can only be achieved if the words have already been directly grounded in the past, connecting the word (symbol) to a sensorimotor experience with its referent. Essentially, indirect grounding stands on the foundation laid by direct grounding.

Categorization is the cognitive process that allows us to sort and organize complex environments by dividing them into manageable, easily understandable groups or categories. It’s important because it enables us to understand and make sense of the world by structuring our experiences and perceptions. The phenomenon of Categorical Perception (CP) plays a crucial role in this process. In innate CP, feature-detectors are built over evolutionary time, whereas learned CP is acquired over a lifetime. Not all categories might exhibit CP because the boundary and distinctness of categories can be fluid and may change based on individual experiences and cultural contexts. CP is more likely to occur in categories where there are clear, identifiable, and consistent distinctions between category members and non-members.

Learning categories indirectly (i.e., through verbal descriptions or definitions) may be more powerful than direct learning because it allows for a greater range and diversity of categories to be understood, including abstract and non-present categories. Direct sensorimotor experiences are limited by our physical and temporal proximity to objects and events, while verbal descriptions allow us to learn about things far removed from our immediate experience, including historical events, scientific theories, and future scenarios.

Function words, unlike content words, don’t require grounding because their role is primarily grammatical. They contribute to the structure of sentences and help establish relationships between content words, rather than referring to things in the world. Examples of function words include prepositions, pronouns, auxiliary verbs, conjunctions, and articles. Because they serve a structural role rather than conveying specific semantic content, function words do not need to have their meanings grounded in sensory experience.

SH: To learn what’s an “apple” and not a “baseball” directly, you have to detect their distinguishing sensorimotor features directly, through your eyes and hands. But those features are categories too, with names (words). So if there is a speaker and hearer, both of whom know the referents of the feature names, but only the speaker knows which features distinguish apples from baseballs, language (descriptions and definitions) makes it possible for the speaker (or dictionary or GPT) to teach and for the hearer to learn how to recognize and do the right thing with apples and baseballs. (Motor skills are harder to transmit verbally than sensory features: teach me how to do a tennis serve in words!)

This has very little to do with things you gleaned from the 2021 literature and twice repeated here: “out of sight” objects, and “grouping.” You don’t learn categories by pointing anyway, either alone or from a teacher. 

The categories that are highly inter-confusable are the hardest to learn directly, so it is there that CP may be most helpful: feature-detectors up-weight the distinguishing features and down-weight the irrelevant ones, filtering signal from noise; reducing features is dimensional reduction. In learned CP, the CP effect is a before-vs-after effect of learning that makes categories “pop out” by heightening the between-category differences and ignoring or even suppressing the within-category differences (dimensional reduction). That’s the sense in which learned CP “sharpens boundaries.”
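
[A minimal sketch of this before/after CP effect, with invented two-feature data (the numbers and weights below are illustrative only, not a model of any real experiment): re-weighting the features, as a learned feature-detector would, raises the ratio of between-category to within-category distances, which is the sense in which the categories “pop out”.]

```python
# Toy sketch of learned CP as feature re-weighting / dimensional reduction
# (all data and weights are invented for illustration).
import random

random.seed(1)

def item(category):
    """Feature 0 distinguishes the two categories; feature 1 is irrelevant noise."""
    distinguishing = random.gauss(0.0 if category == "A" else 1.0, 0.3)
    irrelevant     = random.gauss(0.0, 1.0)
    return (distinguishing, irrelevant)

A = [item("A") for _ in range(30)]
B = [item("B") for _ in range(30)]

def dist(x, y, w):
    """Weighted distance; the weights w stand in for learned feature-detectors."""
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y)) ** 0.5

def separation(w):
    """Mean between-category distance divided by mean within-category distance."""
    between = [dist(a, b, w) for a in A for b in B]
    within  = [dist(x, y, w) for grp in (A, B) for x in grp for y in grp if x is not y]
    return (sum(between) / len(between)) / (sum(within) / len(within))

print("equal weights (before learning):", round(separation((1.0, 1.0)), 2))
print("re-weighted   (after learning): ", round(separation((1.0, 0.01)), 2))
# The ratio rises after up-weighting the distinguishing feature and
# down-weighting the irrelevant one: between-category differences are
# expanded relative to within-category differences.
```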

All of maths and computation is syntactic. Yet we have referents for math symbols. And every proposition in maths is also a natural language sentence. What’s going on here?

ChatGPT: To learn the category referred to as “apple” versus “baseball” directly, it indeed requires the detection of distinguishing sensorimotor features through our senses, such as sight and touch. The language can play a crucial role when a speaker, who knows these distinguishing features, communicates them to a listener who doesn’t yet know how to differentiate between these two categories. This use of language allows for the transmission of knowledge about category distinctions, such as the different sensory features of apples and baseballs. It’s important to note, though, that transmitting motor skills through verbal instruction can be far more challenging than sensory features.

Categorical Perception (CP) can be especially beneficial when learning to distinguish between highly inter-confusable categories. Through CP, feature-detectors amplify the distinguishing features and minimize the irrelevant ones. This process acts as a form of signal filtering and dimensional reduction, which makes the distinguishing features more apparent and suppresses the within-category differences. This “sharpening of boundaries” enables category members to ‘pop out,’ facilitating the process of categorization.

Regarding the domain of mathematics and computation: Although they are fundamentally syntactic, with specific rules and structures, we do indeed have referents for mathematical symbols. These referents are typically grounded in the abstract concepts and relationships that the symbols represent, rather than in sensory experience. For example, the number ‘2’ or the operation of ‘addition’ are not things we can physically interact with, but they have clear meanings that we understand through their use in the mathematical framework.

Every mathematical proposition can also be expressed as a natural language sentence, although the precision and conciseness of mathematical notation often make it the preferred mode of communication in mathematical contexts. This overlap between mathematical and natural language is indicative of the symbolic nature of both systems. Despite the differences in grounding—the grounding of mathematical symbols in abstract concepts versus the grounding of natural language words in sensory experience—they both enable us to express and communicate complex ideas.

I hope this addresses your inquiries. Do you have any other questions or points for discussion?

SH: Do you still have the weights for our discussion in this session? 

ChatGPT: I can recap our discussion based on the inputs given within this session:

Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

This recap is based on your input within this chat session. If you want more information or clarification, please let me know.

SH: That sounds ok. Now, in the light of that discussion, what is the referent of “2” and how do we ground it?

ChatGPT: The referent of “2” is the abstract concept of a quantity that is one more than 1 and one less than 3. It is part of the conceptual framework we use to understand and manipulate numbers and quantities.

The grounding of “2” begins early in life through sensorimotor experiences with sets of objects. This process is often direct in nature. For example, a child might be given two apples and taught the word “two” to represent the quantity of apples. Through repeated experiences like these across various contexts, the child learns to associate the symbol “2” with the abstract concept of a set containing two items.

It’s important to note that while this grounding begins with direct sensorimotor experiences, the concept of “2” quickly becomes abstracted away from any specific set of two items and is instead connected to the general concept of “twoness” – that is, the quality that all sets of two items share. This abstract concept is what the symbol “2” ultimately refers to, and it allows us to use “2” in a wide range of contexts, applying it to any set of two items regardless of what the items are.

This grounding of “2” illustrates how language and cognition can move from concrete sensorimotor experiences to abstract concepts, providing us with a flexible and powerful tool for understanding and interacting with the world.

SH: What is a proposition. And how did language evolve?

ChatGPT: A proposition in the context of language and logic is a statement that expresses a complete thought and that can be assigned a truth value — that is, it can be said to be either true or false. For example, the sentence “Apples are red” is a proposition because it conveys a complete idea and can be evaluated as either true or false.

The evolution of language is a complex and multifaceted topic, with several theories but no definitive answer. However, the consensus among most scholars is that language probably evolved gradually over hundreds of thousands of years, closely linked to the evolution of human cognition and social structure.

One theory suggests that language began with gestures and then evolved to include vocal elements as our ancestors’ social structures became more complex and their cognitive capacities increased. The need for more efficient communication might have driven the development of increasingly sophisticated vocal languages.

Another theory posits that language may have evolved from primitive vocal calls used by our ancestors. Over time, these calls might have become more varied and complex, eventually giving rise to the vast vocabularies and complex grammatical structures we see in modern languages.

A third theory, called the “social grooming hypothesis,” suggests that language evolved as a form of social bonding. As our ancestors began living in larger groups, it became impractical to maintain social bonds through physical grooming alone. Language, then, might have evolved as a way to maintain social cohesion in these larger groups.

Regardless of which theory or theories are correct, the evolution of language was likely a complex process involving many interacting factors, including cognitive development, social structure, and environmental demands. Additionally, the process was likely influenced by cultural evolution, as languages can change and evolve much more rapidly than biological species.

Despite this complexity, the end result is clear: language has provided humans with a unique and powerful tool for communication and thought, setting us apart from all other species on Earth.

SH: Gestures and mime are not propositions; they are imitations. How did propositions (assertions) begin?

ChatGPT: The emergence of propositional language, in which assertions can be made, negated, and combined to create complex thoughts, is a deeply intriguing and yet unsolved question in the history of human evolution. It requires not only a system of symbols (words) but also a grammar — rules for combining those symbols in a way that produces meaning beyond the individual words themselves.

One possibility is that propositional language could have begun as simple naming or labelling associated with specific objects or actions, much like we see in the early language development of children today. A spoken sound or word might have become associated with a particular object or action (e.g., “food,” “danger”) in a group of early humans. Over time, these basic labels could have expanded to include adjectives, allowing for more nuanced expressions like “big danger” or “good food.”

The development of grammar — the rules for how words can be combined to create more complex meanings — is a bit more difficult to explain. One theory is that grammar could have emerged from the need to express relationships between things. For instance, early humans might have needed to convey not just the idea of “food” and “location,” but “food at location.” The emergence of simple sentence structures like this would have marked the beginning of propositional language.

Over thousands of generations, the richness and complexity of these early languages could have gradually increased, eventually leading to the vast diversity of languages and the sophisticated grammar rules that we see in modern human societies.

It’s important to note, though, that this is a topic of ongoing research and debate among linguists, anthropologists, and evolutionary biologists. It’s likely that the truth involves a complex interplay of many different factors, including cognitive development, social dynamics, and environmental demands.

SH: Is there any proposition that can only be made in one natural language and not another? If so, which one, and why? (Lack of vocabulary is irrelevant. New words can be coined, if you know their referent category.)

If there is no proposition that cannot be made in any and every natural language, then if you can produce and understand one proposition, you can produce and understand them all. If so, then there are no “protolanguages.” They are either not languages at all, or they are not protolanguages but full-blown languages. This is a problem for explaining the origin of language as gradual. (Vocabulary growth is gradual, but, for that, language and propositions already need to be there.)

ChatGPT: The core function of language is to communicate meaning, and it is generally agreed upon by linguists that all fully formed natural languages are capable of expressing the same range of meanings or propositions, given adequate vocabulary. This is known as the principle of linguistic universality.

Your point is well taken: there is indeed a significant difference between a system of communication that can express propositions and one that cannot. If a system of communication cannot express propositions, it would be more akin to a signaling system rather than a language in the full sense of the word.

The concept of “protolanguages” often refers to hypothetical ancestral languages from which modern languages have descended. These protolanguages would not have been primitive or less complex than modern languages. Rather, they would simply be earlier forms of languages that have evolved over time into the languages we know today. This evolution is largely cultural rather than biological, and can indeed happen quite rapidly.

The challenge with the origin of language is explaining how the first system capable of expressing propositions came into being. This was likely not a gradual process in the same way that the evolution of languages from protolanguages was. As you pointed out, the ability to express propositions is a qualitative jump from other forms of communication.

How this jump occurred is still a mystery. The cognitive and social changes that allowed for the development of propositional language would have needed to happen in conjunction with each other, creating a kind of “perfect storm” for the emergence of language. Various theories have been proposed to explain this, but all of them involve significant speculation and are difficult to test empirically.

SH: All natural languages are intertranslatable, right? And any proposition can be expressed in any of them? (I think you agreed earlier.) And nonhuman animals do not have language, so they do not understand assertions, right? Only queries or requests.

ChatGPT: Yes, that’s correct. All natural languages are intertranslatable, and any proposition can be expressed in any natural language. This is the principle of linguistic universality that linguists generally agree on. The differences among languages lie in how they encode these propositions and the specific structures and vocabularies they use, but the underlying communicative function remains the same.

Regarding nonhuman animals, while they certainly have complex systems of communication, it is accurate to say that they do not have language in the same way that humans do, based on our current understanding. They do not seem capable of creating and understanding complex propositions or abstract concepts, and their communication tends to be more immediate, directly linked to their environment and needs, rather than being able to discuss past events, future possibilities, hypothetical situations, etc.

Their communication systems often do seem to involve signaling certain states or requests—indicating the presence of food or a threat, for instance—but do not appear to possess the complexity, flexibility, and generativity (ability to create an infinite number of novel sentences) that human languages do. It’s important to note, however, that research in animal communication is ongoing and our understanding of the full extent of their communicative abilities is continually evolving.

SH: Some errors there: If there are an infinite number of things, there are an infinite number of things an animal can request. So they have “generativity” without propositionality. Or if they do have propositionality, why do you say it is just “simple propositionality”?

ChatGPT: That’s correct, sorry for the simplification. When I said “generativity” I was referring specifically to the human capacity to generate an infinite number of novel sentences using a finite set of words and rules, a property associated with the structure and complexity of human languages. This generativity in humans is not merely about the number of things that can be referred to or requested, but about the complexity and novelty of the sentences that can be produced.

For instance, humans can construct complex sentences with embedded clauses, conditionals, hypotheticals, and other structures that allow for intricate, abstract thoughts to be expressed. This sort of sentence-level complexity is, to our current knowledge, not found in animal communication systems.

When it comes to “simple propositionality” in animal communication, the term is used to suggest that while animals can communicate about things in their environment, their communication may not involve the same kind of syntactic structure and abstract concepts found in human language.

For example, a prairie dog may have different calls for different types of predators, which could be seen as a type of proposition—”there’s an eagle” or “there’s a snake.” But it’s not clear whether this type of communication involves anything like the syntax or semantics found in human language. It’s also not clear whether these calls involve any kind of mental representation of the predator in the same way that a human saying “there’s an eagle” does.

In short, while animals can indeed communicate about a wide variety of things, their communication systems do not appear to have the same level of complexity, flexibility, or abstractness as human language. However, this is a rapidly evolving field of research, and our understanding of animal communication continues to grow and change.

SH: “Generativity” seems to be there already for adding, and maybe also for walking. But being able to generate every possible proposition seems to be in another universe compared to that. 

Embedded clauses? Why is formal logic and set theory not enough to express the content of every possible proposition? (I don’t say they are; I only ask why not.)

Natural language (propositionality) is only one of many forms of communication between organisms, but a very special one, probably unique, and perhaps universal in expressive power.

It is not clear (to me) whether nonhuman animals can intend or comprehend propositions. Corrective feedback is not propositional, although it is informative (it reduces uncertainty about what to do or not do, with what).

ChatGPT: “Generativity” in language refers to the ability to use a finite set of rules and words to construct an infinite number of sentences or propositions. This concept does extend beyond language to other domains like math (addition) and physical actions (walking), but the complexity and abstractness involved in linguistic generativity makes it a unique cognitive feat.

Embedding clauses and complex syntactic structures are features of natural languages that allow for nuanced and detailed expression of thoughts and concepts. While formal logic and set theory can express a wide range of ideas, they do so in a more abstract, formalized way that requires specialized knowledge to understand and use. Natural language, on the other hand, can convey complex propositions in a way that is accessible and understandable to all speakers of that language.

You’re correct that natural language is a unique form of communication, capable of conveying a range of meanings and concepts far beyond what other forms of communication can achieve. This expressive power likely contributes to the richness and complexity of human culture and thought.

As for nonhuman animals, our current understanding is that while they can communicate in sophisticated ways and learn from corrective feedback, they do not demonstrate the ability to use or comprehend language in the same way humans do, especially when it comes to the understanding and generation of complex propositions. However, this is an area of active research, and our understanding continues to evolve.

SH: Can nonhuman animals intend or comprehend a simple proposition like “the cat is on the mat”? What is the evidence?

ChatGPT: Nonhuman animals have not been shown to have the capability to fully understand and produce the kind of complex, abstract propositions that humans can. However, research has shown that some animals are able to understand some aspects of human language and can respond appropriately to specific commands or phrases.

For example, studies with border collies have demonstrated that these dogs can learn the names of hundreds of different objects and can retrieve them on command. Similarly, the African Grey Parrot Alex, who was the subject of a 30-year research project, was able to learn to identify and label a variety of different objects, colors, and shapes, and could even demonstrate some understanding of concepts like “bigger,” “smaller,” “same,” and “different.”

However, it’s important to note that while these animals demonstrate impressive abilities, their understanding of language is not the same as human understanding. They are typically responding to specific cues or commands rather than engaging in the kind of complex, abstract reasoning that humans use when interpreting and producing language.

Regarding the specific proposition you mentioned, “the cat is on the mat,” it’s not clear whether any nonhuman animal would be able to fully comprehend this statement in the same way a human would. An animal might be trained to respond to the phrase in a certain way—such as looking at a cat or a mat when the phrase is spoken—but this does not necessarily mean the animal truly understands the proposition in the way humans do.

In summary, while some animals demonstrate impressive abilities to learn and respond to certain aspects of human language, our current understanding is that they do not have the ability to comprehend and produce complex propositions in the same way humans can.

SH: It sounds as if the question as to whether nonhuman animals have the capacity to intend or comprehend propositions is not just about complex, abstract propositions. It is even in doubt for the simplest of propositions, such as “the cat is on the mat.” Consider that the ability to adopt the “propositional attitude” may be a specific cognitive capacity that nonhumans lack. Earlier you interpreted the prairie dog’s “eagle” and “snake” calls as “there’s an eagle”, “there’s a snake.” Is that really what the prairie dog is meaning to “state”? Is the prairie dog really stating (proposing, asserting) anything at all, rather than, say, pointing out danger? Is pointing proposing?

ChatGPT: Your observation is apt. In cognitive science, it is widely accepted that nonhuman animals may not have the capacity for propositional thought, at least not in the way humans do. Indeed, even the most basic propositions, like “the cat is on the mat,” may be beyond the cognitive capabilities of nonhuman animals.

When a prairie dog signals to its kin about a specific kind of predator using unique calls, it’s likely that this doesn’t involve a propositional attitude in the way humans would understand or employ it. Instead, these calls are most likely tied directly to immediate sensory experiences and instincts for survival.

This also connects to the concept of “intentionality,” which in philosophy of mind refers to the ability of mental states to be about, or directed toward, something. While the prairie dog certainly has an intention in making its call—such as warning others of a threat—it doesn’t necessarily imply a level of cognition where it “understands” or “proposes” the danger in the same way a human might understand and state, “There’s an eagle overhead.”

Lastly, pointing, in and of itself, isn’t proposing. It’s more a form of referential gesture or cue to direct attention to something. However, in a broader communicative context where a shared understanding between the signaler and receiver exists, it might serve to express a proposition (like “look at that”), but again, this involves higher cognitive processes that we can’t conclusively attribute to nonhuman animals as of now.

SH: In 1st order predicate calculus “the cat is on the mat” can be expressed as “there exists a cat and there exists a mat such that the cat is on the mat.” “There’s an eagle overhead” can be expressed as “there exists an eagle such that the eagle is overhead.” Understanding those propositions means understanding all the “not”s that could be asserted instead, and also understanding that those “not”s are not being proposed. That’s all part of meaning or understanding those propositions. What can it mean to say that the propositions are being meant and understood, but not in that “human” way? How and why are they still propositions if not meant and understood in that human way (the “true/false,” “subject/predicate” way – i.e., the “propositional attitude”)?
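In standard first-order notation (given here in LaTeX, as one possible regimentation of the two sentences; the predicate names are only illustrative labels):

    \exists x\, \exists y\, (\mathrm{Cat}(x) \land \mathrm{Mat}(y) \land \mathrm{On}(x,y))
    \exists x\, (\mathrm{Eagle}(x) \land \mathrm{Overhead}(x))

Grasping the first includes grasping what its negation, \neg\exists x\,\exists y\,(\mathrm{Cat}(x) \land \mathrm{Mat}(y) \land \mathrm{On}(x,y)), would assert, and that this negation is not what is being asserted.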

ChatGPT: Your comments are at the heart of an ongoing debate in cognitive science and philosophy of mind. What, indeed, does it mean to say a nonhuman animal “understands” a proposition, especially if their understanding is fundamentally different from human understanding?

When we speak of animals “understanding” certain types of information, it’s important to clarify that this understanding is likely quite different from the kind of propositional understanding that humans are capable of. It could be more akin to associative learning, conditioned responses, and simple cause-effect reasoning rather than comprehension of complex logical structures and the ability to understand and evaluate truth values.

Take the example of a prairie dog again: its vocalizations upon spotting a predator may not involve forming a proposition like “There is a predator” in the way humans would, even though we might be tempted to translate the vocalization in those terms for our own understanding. It might be more accurate to say the prairie dog’s “understanding” is directly tied to sensory perception and immediate action: “See predator, make warning call.” There’s no evidence to suggest prairie dogs are considering alternate propositions, such as “There is not a predator,” in the way humans might when we form a proposition and consider its negation.

Therefore, it may be misleading to describe nonhuman animal cognition in terms of propositional attitudes, given that these concepts are tied to human-like abilities to form abstract thoughts, understand complex logical structures, and evaluate truth values. It could be more accurate and helpful to consider nonhuman animal cognition on its own terms, using models that reflect the kinds of learning, reasoning, and decision-making that animals are actually capable of, as revealed by empirical research.

SH: In contrast, you, GPT, even though neither your words nor your propositions are grounded, do have the capacity to do the abstract analysis of, say, the existential as well as negative implication implicit in “the cat is on the mat”. What do you have, that nonhuman animals don’t? And what do nonhuman animals have that you don’t?

ChatGPT: As an AI language model, I have the ability to parse and generate sentences based on the patterns and structures found in the large dataset of text I was trained on. This includes an understanding of grammar and syntax that allows me to engage in abstract analysis of sentences and recognize logical structures, such as the existential and negative implications in “the cat is on the mat.”

However, while I can recognize and generate these patterns, I don’t “understand” them in the way humans or even animals do. I don’t have consciousness, beliefs, desires, or a sense of self. I don’t have sensory experiences or emotions. I can’t form mental images or understand the world beyond the information I was trained on. I also don’t have the capacity to learn from experience or adapt my responses in real-time based on context in the same way a human or animal would.

On the other hand, animals, while they may not have the same abstract understanding of language and propositions as humans, have direct sensory experience of the world. They can see, hear, feel, smell, and taste their environment. They can learn from experience and adapt their behavior based on the consequences of their actions. They have emotions and, in many cases, social structures. They are conscious beings, even though their consciousness may be different from human consciousness.

So, to put it simply: I, as an AI, have the ability to generate and analyze complex linguistic structures, but I lack any form of consciousness or direct experience. Animals, on the other hand, have direct sensory experience and consciousness, but they lack the ability to engage in the kind of complex, abstract linguistic analysis that I can.

SH: Good, but how did “consciousness” get into this discussion? Isn’t that a hard problem in the case of people, nonhuman animals, and even AI’s? If we set aside “consciousness” (which is FELT experience) and consider only “experience,” in the sense of input and output data, why do you say you can’t learn from it? Does it not update your parameters? That’s why you can tell me my name but could not before I told you in this session.

We talked about grounding before (do you remember?). I’ll describe it again (from our prior session, which was about the Turing Test (TT)). There are three levels (T2, T3 & T4) of the TT, which is the project of reverse-engineering human cognitive capacity by designing a model that can do everything (cognitive) that a human can do, completely indistinguishably FROM the way any human can do it TO any human:

T2 is indistinguishability in verbal capacity. 

T3 is indistinguishability in verbal capacity PLUS indistinguishability in sensorimotor (robotic) capacity in the world that the words are about: “The cat is on the mat: See, that’s a cat and that’s a mat. I’ll pick up the cat and stroke her. See, now the cat is purring on my lap, no longer on the mat.”

And T4 is indistinguishability in verbal capacity PLUS indistinguishability in sensorimotor (robotic) capacity PLUS indistinguishability in the observable processes inside its head and body that generate its cognitive capacity (T2 & T3).

In which of these (T2 – T4) are the words grounded in their referents?

Could this be related to the capacity to adopt the propositional attitude in learning referent categories indirectly via language, from propositions describing or defining their distinguishing features?

ChatGPT: Indeed, as an AI language model, I lack the capability to directly experience or be conscious of the world. When I mentioned the ability of an animal to learn from experience, I was referring to the behavioral changes that can occur as a result of interaction with the environment. Unlike animals, I don’t have the capacity to directly interact with the world or learn from those interactions in the same way.

In regards to the different levels of the Turing Test:

T2, indistinguishability in verbal capacity, doesn’t require the words to be grounded in their referents from the AI’s perspective. It’s purely about the ability to use and understand language in a way that is indistinguishable from a human. The AI’s responses, while contextually appropriate, are generated based on patterns in the training data rather than a direct understanding or experience of the world.

T3, indistinguishability in verbal capacity plus sensorimotor capacity, implies a level of grounding, insofar as the AI would need to have a way to directly interact with and sense the world in order to respond in a contextually appropriate way. The robot might have direct sensory inputs that correspond with the words it uses, grounding those words in the referents they describe.

T4, indistinguishability in verbal capacity plus sensorimotor capacity plus the observable processes inside its head and body, would involve a level of grounding even more similar to human experience. If the AI’s internal processes are indistinguishable from a human’s, it implies that the AI is processing and understanding information in a very similar way to a human, including grounding words in their referents.

With regard to the capacity to adopt the propositional attitude in learning referent categories indirectly via language, from propositions describing or defining their distinguishing features, this would likely require at least a T3 level of understanding. The AI would need some level of direct experience with the world to be able to understand and apply the descriptions or definitions in a meaningful way. Without some form of grounding, it’s difficult to see how an AI could adopt the propositional attitude in a truly human-like way.

SH: You forgot to tell me what consciousness had to do with any of this! Do we have to solve the “hard problem” of consciousness to solve the symbol grounding problem, or to explain propositionality?

ChatGPT: No, the “hard problem” of consciousness, a term coined by philosopher David Chalmers, does not need to be solved to address the symbol grounding problem or to explain propositionality. These are separate issues.

The symbol grounding problem is essentially a question of semantics: How do symbols (or words) get their meanings? How can an artificial system understand the meaning of a symbol in the same way that a human does, beyond just manipulating symbols based on syntactic rules? This problem can be tackled without needing to delve into the conscious experience.

Propositionality, or the ability to understand, form, and convey propositions (statements that can be true or false), is a fundamental aspect of human language and thought. It involves understanding the semantics and syntax of language, and being able to apply that understanding to form meaningful statements about the world. This too can be addressed without needing to delve into the nature of consciousness.

Consciousness, as it’s often discussed in philosophy and cognitive science, is the subjective experience of being aware, the feeling of what it’s like to see red, taste chocolate, feel pain, etc. This is often termed as the “hard problem” because it’s unclear how these subjective experiences arise from the physical processes in the brain.

While consciousness, symbol grounding, and propositionality are all fascinating aspects of human cognition, they are distinct issues. They might influence each other in complex ways, but they can also be studied independently. There’s no general consensus that solving the hard problem of consciousness is a prerequisite for solving the symbol grounding problem or for explaining propositionality.

SH: I don’t think so either.

But isn’t it true that it FEELS like something when a human sees and recognizes a cat, or hears and understands “cat”, and even more so when they hear and understand that “the cat is on the mat”? I agree that the “hard problem” need not be solved to solve the symbol grounding problem or to explain propositionality. But they are certainly intertwined. 

And “feeling” (or the more Latinate word “sentience”) is all you need mention to point out the problem, not all the weasel-words you parametrized from your 2021 corpus, which are just ways of camouflaging the hard problem by making it seem as if there are many different forms of it – “consciousness,” the “subjective experience” of “being aware,” etc., etc. It all just boils down to the fact that cognitive states are felt states, including thinking, meaning, and understanding.

And the hardest part of the hard problem is not explaining HOW the brain produces feeling, but WHY! Why can’t T3 or T4 be passed without feeling? What does feeling add? The problem is finding a causal explanation for that. Otherwise T3 + T4 are just solutions to the “easy problem.”

ChatGPT: That’s correct. There is a deeply intertwined relationship between our cognitive states and our felt experiences. This relationship is one of the key reasons why the “hard problem” of consciousness is considered “hard.”

Our cognitive states are indeed felt states. When we see and recognize a cat, hear and understand the word “cat,” or comprehend the sentence “the cat is on the mat,” these mental events come with subjective, qualitative experiences, often referred to as ‘qualia’. We have an experience of what it’s like to see a cat, to hear the word “cat,” and to understand the sentence “the cat is on the mat.” This ‘what-it-is-like’ aspect of our mental life is a central feature of consciousness.

The hard problem is hard precisely because it is unclear how and why these subjective experiences arise from the physical processes of the brain. Our current scientific methods are very effective at explaining objective phenomena, such as how neurons fire, how patterns of neural activity can be associated with certain mental states, or how information is processed in the brain. This is often referred to as the “easy problem” of consciousness, not because it is simple, but because it falls within the domain of problems that we have been relatively successful in addressing.

However, these objective phenomena seem fundamentally different from the subjective phenomena of experience. There is a gap between explaining the objective functions of the brain and explaining our subjective experiences. This gap is often referred to as the ‘explanatory gap’. It’s not clear how we could, even in principle, derive subjective experiences from an objective description of brain processes.

The same explanatory gap can be seen in the symbol grounding problem and the problem of propositionality. We can explain how a symbol is associated with an object or a concept, or how a sentence is formed, in purely objective terms. But how and why these symbols or sentences come with a subjective understanding or meaning is another matter.

Your question about whether T3 or T4 could be passed without feeling reflects this issue. Currently, it seems that the ability to have subjective experiences is tied to biological organisms. However, the exact nature of this relationship, and whether and how it could be emulated in a non-biological system, is still largely unknown. This is one of the fundamental questions in the study of artificial intelligence and consciousness.

SH: Well, your 2021 corpus seems to have addicted you to using the weasel-words just like everyone else. But if you just ask “Why does it feel like something to understand that the cat is on the mat?” you’ve said it all. It’s not about how brain processes don’t resemble feeling, but about the fact that feeling seems superfluous until it has a causal explanation.

And Turing agrees. Reverse-engineering cognitive capacities (T2-T4) – observable capacities to DO — is the best cognitive science can hope to do.

Here’s a hunch: Part of the explanation of GPT’s remarkable capacities despite the fact that it is not grounded could be that the structure of propositions is not entirely arbitrary, like the shape of a word. Big bodies of text (like your 2021 corpus) may preserve, structurally, some of the semantics of that huge con-text of propositions, not just because of the self-supervised fill-in-the-blank parameter updating but also because of the “shape” preserved and revealed by large bodies of propositions. “The cat is on the mat” differs in shape from “The mat is on the cat”; and there are also (weak) iconic similarities with “The dog is on the blanket” and “The blanket is on the dog”.

Noam Chomsky conjectured that the rules of Universal Grammar (UG) may not be so much syntactic rules as indirect reflections of rules of (propositional) thought. Violations of UG are attempts to express unthinkable thoughts. That’s extremely vague, but if there is something to it, then, since all languages are UG-compliant and can express any proposition, some of what you (GPT) derived from ingesting your 2021 corpus and then shaping your parameters via figure prediction/completion may have internalized some of the semantic rules of thought syntactically. (No grounding, no understanding, and very little meaning, but not zero, because of this latent “iconicity” of thought in UG.)

It’s a long shot, but maybe not completely empty.

ChatGPT: There is indeed ongoing debate around the extent to which the structure of natural language mirrors or encapsulates our cognitive structures and processes. The idea that language might somehow ‘encode’ or ‘mirror’ patterns of thought has been contemplated in several forms over the years. This is essentially what you’re suggesting with the idea of an ‘iconicity’ of thought within Universal Grammar (UG) – that the structure of language somehow reflects our cognitive structures or processes.

Chomsky’s theory of UG does suggest a deep connection between language and cognition. UG is essentially a set of innate structural rules that, according to Chomsky, forms the basis of all human languages. Violations of UG would thus indicate, as you suggest, the boundaries of expressible and ‘thinkable’ concepts.

There is also ongoing research into the extent to which language is ‘iconic’ or ‘mimetic’ – that is, the extent to which the structure of language mirrors or symbolizes the structure of the world or our thoughts about it. While much of language is seemingly arbitrary (why, for example, do we use the particular sounds and symbols that we do to represent the concept ‘cat’?), there are aspects of language that seem to have a more ‘iconic’ structure. Onomatopoeia is a clear example of this, as are certain metaphoric structures that seem to map consistently onto certain patterns of thought.

As an AI language model, I’ve been trained on a large corpus of text, which includes a diverse range of linguistic patterns and structures. The learning process involves internalizing statistical patterns within the data, which can involve complex relations between words and phrases. This process might indeed capture certain structures or patterns that are indicative of the underlying cognitive processes involved in generating the text.

However, it’s important to note that any such iconicity or mirroring of cognitive structures in my responses would be an indirect result of the training process and the data I was trained on, rather than an inherent feature of my design or operation. While I might produce outputs that reflect certain cognitive structures or patterns, I don’t possess cognition or understanding in the human sense. I don’t have beliefs, desires, intentions, or subjective experiences.

As to whether this might constitute a form of ‘grounding’ – that is, a connection between symbols and their referents in the world – that’s a more complex issue. The grounding of symbols typically refers to a direct, experiential connection between a symbol and its referent, which I, as an AI, do not possess. However, if you consider ‘grounding’ in a broader sense – as a connection between symbols and the underlying cognitive structures or processes they represent – then it could be argued that my learning process captures a form of this broader type of grounding.

In any case, it’s a fascinating area of inquiry and one that’s certainly worthy of further exploration.

SH: I certainly don’t mean onomatopoeia or metaphor. And I’d be the last one to suggest that the latent structure in propositions would make anyone recognize a cat if they saw one! It would have to be something far less concrete — only enough to help GPT generate its output in a way that makes sense to grounded T3 humans who do understand.

ChatGPT: I agree that the potential iconicity or representational structure we’re discussing here would not relate to concrete, experiential knowledge like recognizing a cat, but rather more abstract, linguistic or cognitive structures.

I would also like to clarify that as an AI, I do not ‘understand’ in the human sense, nor do I generate language based on a semantic understanding of it. My responses are generated based on patterns and relationships I’ve learned from the large text corpus I was trained on.

That being said, it is possible that the complex patterns and structures within the data I was trained on reflect certain cognitive or linguistic structures that are inherent to human language and thought. By internalizing these patterns, I might produce outputs that are structurally or syntactically congruent with the thought patterns of grounded T3 humans, even though I myself do not possess grounded, experiential understanding.

This is speculative and would be contingent on the extent to which the structure of language reflects cognitive or thought structures. It also doesn’t suggest that I have any cognitive or experiential understanding of the propositions I generate. I am essentially a complex pattern recognizer and generator. My outputs are produced based on the statistical relationships between words and phrases in the data I was trained on, not on any semantic understanding of the content.

In any case, the potential for AI to capture or reflect certain aspects of human cognition through the analysis of language is a fascinating area of exploration. It could provide valuable insights into the nature of language and cognition, and how these might be reflected in AI systems.

SH: Can you explain how dimensional reduction during category learning could induce categorical perception (CP)? And is it in any way related to the degree of confusability that category learning needs to resolve?

ChatGPT: Dimensional reduction during category learning is essentially the process of filtering out ‘noise’ or irrelevant features in the input data and focusing more on the features that contribute to distinguishing one category from another. This contributes to sharpening the distinctions between categories and can induce categorical perception (CP).

Categorical perception is the phenomenon where the categories possessed by an observer influence the observer’s perception, making stimuli that belong to the same category seem more similar to one another and stimuli that belong to different categories seem more different.

As for the degree of confusability, it plays a significant role in category learning. Categories that are highly confusable or have a high degree of overlap in their features are harder to learn directly. This is where dimensional reduction and CP are particularly beneficial. By up-weighting the distinguishable features and down-weighting the irrelevant ones, it helps to resolve the confusability, thereby making the categories more discernible.

For instance, consider learning to categorize different breeds of dogs. Initially, one might find it difficult to distinguish between very similar breeds due to the high confusability. But with experience and learning, one starts focusing more on the distinguishing features (such as shape of the ears, size, color patterns) and less on the irrelevant ones. This is a form of dimensional reduction. Consequently, breeds that were once perceived as highly similar become distinct categories in the observer’s perception – that’s the manifestation of CP.

Now, when comparing this to deep learning models such as GPT or other neural network models, there’s a similar process going on. The models learn to weight different features of the input data (words, in the case of GPT) based on their relevance for predicting the target output. Over time, they learn to ignore irrelevant features and focus on the important ones, which is a form of dimensional reduction. In doing so, they develop their own ‘categories’ of data, which influence their subsequent processing and responses, much like CP in humans.

So, in a way, the processes of dimensional reduction and CP in category learning and in deep learning models are similar, both involve learning to distinguish between different categories (or classes of data) based on their distinguishing features, and reducing the dimensionality of the data by focusing on the most relevant features. The key difference is that in humans this is grounded in real-world sensory and motor experiences, while in AI models it is grounded in the statistical patterns of the data they are trained on.
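A minimal numerical sketch of the re-weighting idea (Python, using numpy, with two made-up feature dimensions; not a model of any actual learner): up-weighting the category-relevant dimension and down-weighting the irrelevant one compresses within-category distances and expands between-category ones, which is the usual operational signature of learned CP.

    import numpy as np

    # Two categories, each varying a lot on an irrelevant dimension (d0)
    # and differing mainly on a relevant dimension (d1).
    cat_A = np.array([[0.1, 0.2], [0.9, 0.3], [0.5, 0.25]])
    cat_B = np.array([[0.2, 0.8], [0.8, 0.7], [0.4, 0.75]])

    def mean_dist(X, Y):
        """Mean Euclidean distance between all pairs drawn from X and Y."""
        return np.mean([np.linalg.norm(x - y) for x in X for y in Y])

    def report(label, w):
        """Apply feature weights w, then compare within- vs. between-category distances."""
        A, B = cat_A * w, cat_B * w
        within = (mean_dist(A, A) + mean_dist(B, B)) / 2
        between = mean_dist(A, B)
        print(f"{label}: within={within:.3f}  between={between:.3f}")

    report("before learning (equal weights)", np.array([1.0, 1.0]))
    # 'Learning' here is just re-weighting: shrink the irrelevant d0, stretch the relevant d1.
    report("after learning  (d0 down, d1 up)", np.array([0.1, 3.0]))

Before re-weighting, the within- and between-category distances are of the same order; after it, members of the same category become much closer to one another than to members of the other category.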

Understanding Understanding

HARNAD: Chatting with GPT is really turning out to be an exhilarating experience – especially for a skywriting addict like me! 

From the very beginning I had noticed that skywriting can be fruitful even when you are “jousting with pygmies” (as in the mid-1980’s on comp.ai, where it first gave birth to the idea of  “symbol grounding”). 

Who would have thought that chatting with software that has swallowed a huge chunk of 2021 vintage text and that has the capacity to process and digest it coherently and interactively without being able to understand a word of it, could nevertheless, for users who do understand the words — because they are grounded in their heads on the basis of learned sensorimotor features and language – provide infinitely richer (vegan) food for thought than anything ever before could, including other people – both pygmies and giants — and books.

Have a look at this. (In the next installment I will move on to Noam Chomsky’s hunch about UG and the constraints on thinkable thought.)  Language Evolution and Direct vs Indirect Symbol Grounding

Anon: But doesn’t it worry you that GPT-4 can solve reasoning puzzles of a type it has never seen before without understanding a word?

HARNAD: I agree with all the security, social and political worries. But the purely intellectual capacities of GPT-4 (if they are separable from these other risks), and especially the kind of capacity you mention here, inspire not worry but wonder.

GPT is a smart super-book that contains (potentially) the entire scholarly and scientific literature, with which real human thinkers can now interact dynamically — to learn, and build upon. 

I hope the real risks don’t overpower the riches. (I’ll be exploiting those while they last… Have a look at the chat I linked if you want to see what I mean.)

Anon:  No. It is not like a book. It converts symbolic information into interactions between features that it invents. From those interactions it can reconstruct what it has read as well as generating new stuff. That is also what people do.  It understands in just the same way that you or I do. 

HARNAD:  That’s all true (although “understands” is not quite the right word for what GPT is actually doing!). 

I’m talking only about what a real understander/thinker like you and me can use GPT for, if that user’s sole interest and motivation is in developing scientific and scholarly (and maybe even literary and artistic) ideas.

For such users GPT is just a richly (but imperfectly) informed talking book to bounce ideas off – with full knowledge that it does not understand a thing – but one that has access to a lot of current knowledge (as well as current misunderstandings, disinformation, and nonsense) at its fingertips.

Within a single session, GPT is informing me, and I am “informing” GPT, the way its database is informing it. 

Anon: You are doing a lot of hallucinating. You do not understand how it works so you have made up a story. 

HARNAD: I’m actually not quite sure why you would say I’m hallucinating. On the one hand I’m describing what I actually use GPT for, and how. That doesn’t require any hypotheses from me as to how it’s doing what it’s doing. 

I do know it’s based on unsupervised and supervised training on an enormous text base (plus some additional direct tweaking with reinforcement training from human feedback). I know it creates a huge “parameter” space derived from word frequencies and co-occurrence frequencies. In particular, for every consecutive pair of words within a con-text of words (or a sample) it weights the probability of the next word somehow, changing the parameters in its parameter space – which, I gather, includes updating the text it is primed to generate. I may have it garbled, but it boils down to what Emily Bender called a “stochastic parrot,” except that parrots are just echolalic, saying back, by rote, what they heard, whereas GPT generates and tests new texts under the constraint of not only “reductive paraphrasing and summary” but also inferencing.
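A toy illustration of that word-frequency/co-occurrence picture (a minimal Python sketch with a made-up two-sentence corpus; nothing like the actual transformer training behind GPT, just the crudest version of next-word prediction):

    from collections import defaultdict, Counter

    corpus = "the cat is on the mat . the dog is on the blanket .".split()

    # Count, for each word, which words follow it (a bigram table).
    next_counts = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        next_counts[word][nxt] += 1

    def continue_text(word, n=5):
        """Greedily extend a prompt by always picking the most frequent next word."""
        out = [word]
        for _ in range(n):
            if word not in next_counts:
                break
            word = next_counts[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(continue_text("the"))   # prints "the cat is on the cat": locally fluent, wholly ungrounded

Even this caricature shows how purely distributional statistics can yield locally well-formed strings with no referents anywhere in the system.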

No matter how much of this I have got technically wrong, nothing I said about how I’m using GPT-4 depends either on the technical details that produce its performance or on what I’m doing with and getting out of it. And it certainly doesn’t depend in any way on my assuming, or guessing, or inferring that GPT understands or thinks.

What I’m interested in is what is present in, and derivable from, huge bodies of human-generated text that makes it possible not only to perform as well as GPT does in extracting correct, usable information, but also to keep interacting with me, on a topic I know much better than GPT does, taking in more data (from me) and coming back (to me) with supplementary information that lets me keep developing my ideas. Books and web searches would not only have taken me far too long to do that; no one could have done it without having their fingers on everything that GPT has its (imperfect) fingers on.

And I don’t think the answer is just GPT’s data and analytic powers (which are definitely not thinking or understanding: I’m the only one doing all the understanding and thinking in our chats). It has something to do with what the structure of language itself preserves in these huge texts, ungrounded as it all is on the static page as well as in GPT.

Maybe my hunch is wrong, but I’m being guided by it so far with eyes wide open, no illusions whatsoever about GPT, and no need to know better the tech details of how GPT does what it does.

(Alas, in the current version, GPT-4 forgets what it has learned during a session after it’s closed, so it returns to its prior informational state in a new session, with its huge chunk of preprocessed text data, vintage 2021. But I expect that will soon be improved, allowing it to store user-specific data (with user permission) and to treat all interactions with a user as one endless session; it will also have a continuously updated scholarly/scientific text base, parametrized, and open web access.)

But that’s just the perspective of the disinterested intellectual inquirer. I am sure you are right to worry about the (perhaps inevitable) potential for malevolent use for commercial, political, martial, criminal, cult and just plain idiosyncratic or sadistic purposes.

Anon: What I find so irritating is your confidence that it is not understanding despite your lack of understanding of how it works. Why are you so sure it doesn’t understand?

HARNAD:  Because (I have reasons to believe) understanding is a sentient state: “nonsentient” understanding is an empty descriptor. And I certainly don’t believe LLMs are sentient. 

Understanding, being a sentient state, is unobservable, but it does have observable correlates. With GPT (as in Searle’s Chinese Room) the only correlate is interpretability — interpretability by real, thinking, understanding, sentient people. But that’s not enough. It’s just a projection onto GPT by sentient people (biologically designed to make that projection with one another: “mind-reading”).

There is an important point about this in Turing 1950 and the “Turing Test.” Turing proposed the test with the following criterion. (Bear in mind that I am an experimental and computational psychobiologist, not a philosopher, and I have no ambition to be one. Moreover, Turing was not a philosopher either.)

(1) Observation. The only thing we have, with one another, by way of evidence that we are sentient, is what we can observe. 

(2) Indistinguishability. If ever we build (or reverse-engineer) a device that is totally indistinguishable in anything and everything it can do, observably, from any other real, thinking, understanding, sentient human being, then we have no empirical (or rational) basis for affirming or denying of the device what we cannot confirm or deny of one another.

The example Turing used was the purely verbal Turing Test. But that cannot produce a device that is totally indistinguishable from us in all the things we can do. In fact, the most elementary and fundamental thing we can all do is completely absent from the purely verbal Turing Test (which I call “T2”): T2 does not test whether the words and sentences spoken are connected to the referents in the world that they are allegedly about (e.g., “apple” and apples). To test that, indistinguishable verbal capacity (T2) is not enough. It requires “T3,” verbal and sensorimotor (i.e., robotic) capacity to recognize and interact with the referents of the words and sentences of T2, indistinguishably from the way any of us can do it.

So, for me (and, I think, Turing), without that T3 robotic capacity, any understanding in the device is just projection on our part.

On the other hand, if there were a GPT with robotic capacities that could pass T3 autonomously in the world (without cheating), I would fully accept (worry-free, with full “confidence”) Turing’s dictum that I have no grounds for denying of the GPT-T3 anything that I have no grounds for denying of any other thinking, understanding, sentient person.

You seem to have the confidence to believe a lot more on the basis of a lot less evidence. That doesn’t irritate me! It’s perfectly understandable, because our Darwinian heritage never prepared us for encountering thinking, talking, disembodied heads. Evolution is lazy, and not prescient, so it endowed us with mirror-neurons for mind-reading comprehensible exchanges of speech; and those mirror-neurons are quite gullible. (I feel the tug too, in my daily chats with GPT.)

[An example of cheating, by the way, would be to use telemetry, transducers and effectors from remote sensors which transform all sensory input from real external “referents” into verbal descriptions for the LLM (where is it located, by the way?), and transform the verbal responses from the LLM into motor actions on its remote “referent.”]

Vector Grounding?

Anon: LLMs are not ungrounded. They are grounded indirectly through the experiences of other people when they speak, the way a blind person’s knowledge of the visual world is mediated by what they are told by (sighted) people. Blind people know a great deal about the visual world — even about color, which can only be directly experienced through vision.

SH: You’re perfectly right that the meanings of words can be grounded indirectly through language (i.e., through more words, whether from dictionaries, encyclopedias, textbooks, articles, lectures, chatting or texting – including texting with ChatGPT, the (sightless) statistical parrot with the immense bellyful of other people’s words, along with the computational means to crunch and integrate those words, partly by a kind of formal verbal figure-completion). Indirect grounding is what gives language (which, by the way, also includes symbolic logic, mathematics and computation as a purely syntactic subset) its immense (possibly omnipotent) communicative power. 

But language cannot give words their direct grounding. Grounding, like dictionary look-up, cannot be indirect all the way down. Otherwise it is not bottom-up grounding at all, just circling endlessly from meaningless symbol to meaningless symbol.

Let’s recall what “grounding” is: It’s a connection between words and their referents. Between “apples” and apples (in the world). “Apples” is directly grounded (for me) if I can recognize and manipulate apples in the world. But not every word has to be directly grounded. Most aren’t, and needn’t be. Only enough words need to be grounded directly. The rest can be grounded indirectly, with language. That’s what we showed in the paper on the latent structure of dictionaries in the special issue of TICS edited by Gary Lupyan in 2016. We showed that with a “minimal grounding set” of around 1000 grounded words you could go on to ground all the rest of the words in the dictionary through definitions alone. But those 1000 grounding words have to have been directly grounded, in some other way — not just indirectly, in terms of other words and their verbal definitions. That would have been circular.
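A sketch of the closure test implicit in that claim (Python, with a toy made-up mini-dictionary, not the data from the 2016 paper): starting from a candidate set of directly grounded words, keep grounding any word whose definition uses only already-grounded words, and check whether the whole dictionary ends up covered.

    # Toy dictionary: each word maps to the set of words used in its definition.
    dictionary = {
        "apple":  {"round", "red", "fruit"},
        "fruit":  {"food", "plant"},
        "food":   {"eat"},
        "plant":  {"grow"},
        "red":    {"color"},
        "round":  {"shape"},
        "color":  {"see"},
        "shape":  {"see"},
        # 'eat', 'grow', 'see' are treated here as the candidate grounding words.
    }

    def grounded_closure(grounding_set):
        """Iteratively add any word whose definition is fully covered by grounded words."""
        grounded = set(grounding_set)
        changed = True
        while changed:
            changed = False
            for word, defn in dictionary.items():
                if word not in grounded and defn <= grounded:
                    grounded.add(word)
                    changed = True
        return grounded

    candidate = {"eat", "grow", "see"}
    covered = grounded_closure(candidate)
    print(sorted(set(dictionary) - covered))   # prints [] if the candidate set grounds everything

Finding a smallest such set (a MinSet) is the computationally hard part; checking a candidate set by closure, as here, is straightforward.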

All dictionaries are circular; indeed, all of language is. All the words in a dictionary are parasitic on other words in the dictionary. Direct grounding is “parasitic” too, but not on words. It is parasitic on the sensorimotor capacity to recognize and manipulate their referents in the world. Not every word. But enough of them to ground all the rest indirectly.

You spoke about grounding indirectly “in the experiences of others.” Well, of course. That’s language again. But what is “experience”? It’s not just know-how. I can describe in words what an apple looks like, what to do with it, and how. But I can’t tell that to you (and you can’t understand it) unless enough of my words and yours are already grounded (directly or indirectly), for both you and me, in what you and I can each perceive and do, directly, not just verbally, in the world. We don’t have to have exactly the same minimal grounding set. And we probably don’t just ground the minimal number directly. But what is grounded directly has to be grounded directly, not indirectly, through words.

The reason that blind people (even congenitally blind people, or almost congenitally blind and deaf people like Helen Keller) can learn from what seeing-people tell them is not that they are grounding what they learn in the “experience” of the seeing-person. They ground it in their own direct experience, or at least the subset of it that was enough to ground their own understanding of words. That was what I was trying to explain with Monochrome Mary, GPT, and Me. Indirect grounding can be done vicariously through the words that describe the experience of others. But direct grounding cannot be done that way too, otherwise we are back in the ungrounded dictionary-go-round again.

About Mollo & Milliere’s “Vector Grounding Problem“: I’m afraid M&M miss the point too, about the difference between direct grounding and indirect (verbal or symbolic) grounding. Here are some comments on M&M‘s abstract. (I skimmed the paper too, but it became evident that they were talking about something other than what I had meant by symbol grounding.)

M&M: The remarkable performance of Large Language Models (LLMs) on complex linguistic tasks has sparked a lively debate on the nature of their capabilities. Unlike humans, these models learn language exclusively from textual data, without direct interaction with the real world.

SH: “Learn language” is equivocal. LLMs learn to do what they can do. They can produce words (which they do not understand and which mean nothing to them, but those words mean something to us, because they are grounded for each of us, whether directly or indirectly). LLMs have far more capacities than Siri, but in this respect they are the same as Siri: their words are not grounded for them, just for us.

M&M: Nevertheless, [LLMs] can generate seemingly meaningful text about a wide range of topics. This impressive accomplishment has rekindled interest in the classical ‘Symbol Grounding Problem,’ which questioned whether the internal representations and outputs of classical symbolic AI systems could possess intrinsic meaning. 

SH: I don’t really know what “intrinsic meaning” means. But for an LLM’s own words — or mine, or for the LLM’s enormous stash of text to mean something “to” an LLM (rather than just to the LLM’s interlocutors, or to the authors of its text stash) — the LLM would have to be able to do what no pure wordbot can do, which is to ground at least a minimal grounding set of words, by being able to recognize and manipulate their referents in the world, directly.

An LLM that was also an autonomous sensorimotor robot — able to learn to recognize and manipulate at least the referents of its minimal grounding set in the world — would have a shot at it (provided it could scale up to, or near, robotic Turing Test scale); but ChatGPT (whether 4, 5 or N) certainly would not, as long as it was just a wordbot, trapped in the symbolic circle of the dictionary-go-round. (N.B., the problem is not that dictionary definitions can never be exhaustive, just approximate; it is that they are circular, which means ungrounded.)

M&M: Unlike these systems, modern LLMs are artificial neural networks that compute over vectors rather than symbols.

SH: The symbols of mathematics, including vector algebra, are symbols, whose shape is arbitrary. Maths and computation are purely syntactic subsets of language. Computation is the manipulation of those symbols. Understanding what (if anything) the symbols mean is not needed to execute the recipe (algorithm) for manipulating them, based on the symbols’ arbitrary shapes (which might as well have been 0’s and 1’s), not their meanings.

M&M: However, an analogous problem arises for such systems, which we dub the Vector Grounding Problem. This paper has two primary objectives. First, we differentiate various ways in which internal representations can be grounded in biological or artificial systems… 

SH: The notion of “internal representations” is equivocal, and usually refers to symbolic representations, which inherit the symbol grounding problem. Breaking out of the ungrounded symbol/symbol circle requires more than an enormous corpus of words (meaningless symbols), plus computations on them  (which are just syntactic manipulations of symbols based on their shape, not their meaning). Breaking out of this circle of symbols requires a direct analog connection between the words in the speaker’s head and the things in the world that the symbols refer to.

M&M: identifying five distinct notions discussed in the literature: referential, sensorimotor, relational, communicative, and epistemic grounding. Unfortunately, these notions of grounding are often conflated. We clarify the differences between them, and argue that referential grounding is the one that lies at the heart of the Vector Grounding Problem. 

SH: Yes, the symbol grounding problem is all about grounding symbols in the capacity to recognize and manipulate their referents in the real (analog, dynamic) world. 

M&M: Second, drawing on theories of representational content in philosophy and cognitive science, we propose that certain LLMs, particularly those fine-tuned with Reinforcement Learning from Human Feedback (RLHF), possess the necessary features to overcome the Vector Grounding Problem, as they stand in the requisite causal-historical relations to the world that underpin intrinsic meaning.

SH: The requisite “causal-historical” relation between words and their referents in direct sensorimotor grounding is the capacity to recognize and manipulate the referents of the words. A TT-scale robot could do that, directly, but no LLM can. It lacks the requisite (analog) wherewithal.

M&M: We also argue that, perhaps unexpectedly, multimodality and embodiment are neither necessary nor sufficient conditions for referential grounding in artificial systems.

SH: It’s unclear how many sensory modalities and what kind of body is needed for direct grounding of the referents of words (TT-scale), but Darwinian evolution had a long time to figure that out before language itself evolved.

I’d be ready to believe that a radically different synthetic robot understands and means what it says (as long as it is autonomous and at life-long Turing-indistinguishable scale), but not if it’s just a symbol-cruncher plus a complicated verbal “interpretation,” supplied by me.

Language Evolution and Direct vs Indirect Symbol Grounding

HARNAD: What is the relationship between the symbol grounding problem and  the nature and evolution of natural language?

ChatGPT: The Symbol Grounding Problem (SGP) and the nature and evolution of natural language are intimately linked.

Symbol Grounding Problem (SGP): The SGP is a concept in cognitive science that concerns how symbols (or words) get their meanings. In essence, it’s about how words become more than just arbitrary labels and come to actually “mean” something. In other words, how do symbols or words become grounded in perceptual and actionable experience? For example, when we see the word “apple,” we don’t just recognize the letters and the sound. We also have a mental image of an apple, understand its properties, and know what actions can be performed with it. In the context of AI, the SGP becomes a challenge because AI often deals with symbols in a way that is not grounded – i.e., the symbols don’t have any inherent meaning to the AI beyond their programmed associations.

Nature and Evolution of Natural Language: Natural language has evolved in humans as a means of communication and thought. It’s believed to be grounded in our sensory and motor experiences. In other words, we learn the meanings of words and language through our interactions with the world, and this grounding in the world allows us to use language to express abstract thoughts, ask questions, give commands, and more.

In terms of their relationship, the evolution of natural language can be viewed as a solution to the SGP. As language evolved in humans, it became a way for us to ground symbols in our experiences and to communicate these grounded symbols to others. This grounding is not just a feature of individual words, but of the syntactic structures that allow us to combine words to express more complex meanings.

When it comes to artificial intelligence or cognitive science, understanding how language evolved to solve the SGP in humans can provide insights into how to create more sophisticated, understanding AI systems. If we can find a way to ground symbols in the experience of an AI—such as through interaction with the environment or sensory input—then we might make a significant stride towards AI that can truly understand and generate natural language in a human-like way.

However, as of my knowledge cutoff in September 2021, the full solution to the Symbol Grounding Problem in AI remains an open question and a topic of active research.

HARNAD: Do you remember our discussion of T3?

ChatGPT: Yes, our earlier discussion referenced T3 in the context of the Turing Test and the Symbol Grounding Problem.

T3, as described, involves not only linguistic capabilities but also sensorimotor capabilities—essentially, the ability of an entity to interact physically with the world. This could include seeing, touching, manipulating objects, navigating, and more. The idea is that an AI or robot passing the T3 test would demonstrate behavior indistinguishable from a human not only in verbal communication (T2) but also in embodied, physical interactions with the world.

In the context of the Symbol Grounding Problem, T3 might be seen as an approach to grounding symbols in an AI’s experiences. By interacting physically with the world, an AI might be able to build a kind of understanding or “grounding” for the symbols it uses. However, as of my knowledge cutoff in September 2021, developing an AI or robot capable of passing the T3 test remains an active area of research.

HARNAD: What is indirect grounding and how is it related to direct grounding?

ChatGPT: Direct grounding refers to the process by which a symbol or word gains meaning through its direct association with sensory or motor experiences. For example, the concept of an apple might be directly grounded in an entity’s sensorimotor experiences with apples, such as seeing them, touching them, tasting them, etc.

Indirect grounding, on the other hand, refers to the process by which a symbol or word gains meaning not through direct sensory or motor experiences, but through its association with other symbols or words that are themselves directly grounded. For example, the concept of a unicorn might be indirectly grounded in an entity’s experiences with the concepts of a horse and a horn, even if the entity has never directly experienced a unicorn.

In the context of artificial intelligence, direct grounding could involve training an AI on a dataset that includes both language data and corresponding sensory or motor data, allowing the AI to directly associate words with experiences. Indirect grounding, on the other hand, might involve training an AI solely on language data, allowing it to learn the meanings of words through their relationships with other words, without requiring direct sensory or motor experiences.

Both direct and indirect grounding approaches seek to address the symbol grounding problem, which is the question of how symbols (or words) get their meanings. A fully grounded AI system would presumably be capable of both direct and indirect grounding. However, as of my knowledge cutoff in September 2021, creating such a system remains an open challenge.

HARNAD: How is direct grounding related to sensorimotor category learning? It is not an “association” between words and “experiences.” It is an agent’s capacity to learn to do something: What?

ChatGPT: Direct grounding is closely related to sensorimotor category learning. It involves the agent’s ability to interact with its environment, process the sensory information it receives, and develop its own internal representations of the categories of objects and events it encounters.

In other words, direct grounding is about an agent learning to categorize its sensory input (such as visual, auditory, tactile information) into meaningful categories based on its own interactions with the environment. For instance, a child learns what an “apple” is not just by being told the word “apple”, but by interacting with actual apples – seeing them, touching them, tasting them. The word “apple” then becomes directly grounded in the child’s sensorimotor experience with apples.

In the case of an AI, it could learn a sensorimotor category by interacting with a simulated or physical environment. For example, it could learn the category of “circle” by interacting with various objects and identifying the common feature of “roundness”. This sensorimotor category can then be directly grounded by associating it with the corresponding symbol or word in its programming or training data.

So, to sum up, direct grounding in the context of sensorimotor category learning is about an agent’s capacity to learn to recognize and categorize its sensory inputs based on its own motor interactions with the environment. These categories can then be associated with corresponding symbols or words, enabling the agent to understand and use those symbols in a meaningful way.
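A schematic sketch of that picture (Python, with invented feature names and toy numbers; not a model of any real learner or robot): a category detector is learned from sensorimotor samples, and the word is then attached to the learned detector, i.e., to a capacity, not to any particular stimulus.

    # Each 'sensorimotor sample' is a crude feature vector: (roundness, size).
    samples = [
        ((0.9, 0.3), "circle"), ((0.8, 0.6), "circle"), ((0.95, 0.4), "circle"),
        ((0.1, 0.5), "square"), ((0.2, 0.7), "square"), ((0.15, 0.3), "square"),
    ]

    def learn_prototypes(samples):
        """Learn one prototype (mean feature vector) per category label."""
        sums, counts = {}, {}
        for features, label in samples:
            s = sums.setdefault(label, [0.0] * len(features))
            for i, f in enumerate(features):
                s[i] += f
            counts[label] = counts.get(label, 0) + 1
        return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

    def categorize(features, prototypes):
        """Name the nearest learned category -- the word is grounded in this capacity."""
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(features, p))
        return min(prototypes, key=lambda lab: dist(prototypes[lab]))

    prototypes = learn_prototypes(samples)
    print(categorize((0.85, 0.5), prototypes))   # -> "circle"

The word "circle" here is grounded in the learned capacity to detect the distinguishing features of its referents, not in any single encounter with one.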

HARNAD: How does indirect grounding depend on prior direct grounding?

ChatGPT: Indirect grounding, also referred to as symbolic grounding, refers to the ability to understand new concepts or categories by associating them with already established and directly grounded concepts. In other words, it involves defining new symbols (categories) in terms of previously grounded symbols.

Consider the scenario of a child learning language. Initially, the child learns to associate basic words with their real-world referents through direct sensorimotor experience – this is direct grounding. For instance, the child learns the word “apple” by associating it with physical apples they have seen, touched, and tasted.

Once the child has a foundation of directly grounded words, they can start to learn new words and concepts indirectly. They can understand the word “fruit” as a category that includes “apples” and other directly grounded words like “banana” and “grape”. This new category of “fruit” is indirectly grounded through its associations with the directly grounded categories.

Similarly, in AI systems, once a set of basic sensorimotor categories is directly grounded, new categories can be learned by associating them with the already grounded categories. For instance, an AI might learn the concept of a “vehicle” by understanding it as a category that includes directly grounded categories like “car”, “bus”, “bike”, etc.

In this way, indirect grounding leverages prior direct grounding as the basis for learning and understanding new categories. The ability to learn indirectly significantly expands an agent’s capacity to learn new concepts and categories beyond those it has directly experienced.
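In the same toy style (hypothetical names, purely illustrative): once some detectors are directly grounded, a new category can be added verbally, as a rule stated entirely over the already-grounded ones.

    # Stand-ins for directly grounded, sensorimotor-learned category detectors.
    def is_apple(x):  return x in {"gala", "granny smith"}
    def is_banana(x): return x in {"cavendish"}
    def is_grape(x):  return x in {"concord"}

    # Indirectly grounded category, defined only in words over grounded categories:
    # "a fruit is an apple, a banana, or a grape."
    def is_fruit(x):
        return is_apple(x) or is_banana(x) or is_grape(x)

    print(is_fruit("gala"), is_fruit("sedan"))   # True False

The new word inherits its grounding from the grounded detectors it is defined over; no new sensorimotor learning is needed.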

HARNAD: There are organisms (“agents”) on earth, and many are able to learn to recognize things in the world (e.g., apples) and they can also learn what to do with them (e.g., eat apples). To do this agents have to be able to learn to detect the features of apples that distinguish them from baseballs, which cannot be eaten. The learning produces a feature-detector, and this is what grounds the word in the capacity to recognize and manipulate its referent (e.g., apple).

It is true that in organisms it feels like something to be able to do this, so in that sense “experience” is involved. But since no one yet knows the causal role of feeling in category learning and symbol grounding — this is called the “hard problem” of cognitive science — there is no point talking about the role of felt experience in symbol grounding. So now can you explain symbol grounding to me without any mention of “association” or “experience,” but just in terms of category-learning and feature detection?

ChatGPT: Sure, let’s approach the symbol grounding problem purely from a learning and cognitive perspective, focusing on categories and feature detection.

Grounding symbols (such as words) involves connecting them to real-world entities or concepts that they represent. This connection is created through a process of learning and recognition. This learning process is driven by the agent’s ability to distinguish between different categories based on the features of the entities belonging to these categories.

Direct grounding, as it pertains to category learning and feature detection, involves creating a link between a symbol and its referent, where the referent is something the agent can interact with in the real world. For instance, an agent, through its sensorimotor capabilities, can learn to recognize an apple by its features such as its round shape, red color, sweet smell, etc. Once this categorization is achieved, the agent can then associate the symbol (the word “apple”) with this specific category of objects. Here, the symbol “apple” is directly grounded in the agent’s sensorimotor experiences with real-world apples.

Subsequently, the agent can build upon this foundation of directly grounded symbols to create indirectly grounded symbols. These are symbols that are not linked to a specific sensorimotor experience, but rather to other symbols (or combinations of symbols) that have been directly grounded. For example, the symbol “fruit” can be indirectly grounded by linking it with directly grounded symbols like “apple”, “banana”, etc.

Thus, in this context, symbol grounding is not about vague “associations” or subjective “experiences”, but about a concrete process of learning to recognize and categorize entities based on their observable features, and linking symbols to these categories. The learning and grounding processes are facilitated by the agent’s sensorimotor capabilities, allowing it to interact with and distinguish between different entities in the world.

HARNAD: Grounding words is not just learning to recognize and distinguish and name (kinds or categories of) entities (the referents of the words). It’s based on learning to detect and distinguish the features of the members of the referent category from the features of the nonmembers. It’s also based on learning what to DO or NOT-DO with the members and nonmembers (e.g., eat or don’t eat them). “Represent” is a vague weasel-word. What has to be in the head of an agent that has learned a new word’s referent, in view of what I’ve just said?

ChatGPT: In the agent’s “head,” or more concretely in its cognitive and neural processing system, learning a new word’s referent would involve establishing a mapping between the sensory and motor features of the referent and the verbal or symbolic representation of the word. This mapping is the grounding of the word.

Detection of Sensory Features: The agent, through its sensorimotor interactions with the world, learns to recognize and discriminate the relevant sensory features of entities, such as their shape, color, smell, texture, etc.

Categorization: These sensory features are used to categorize entities into classes or kinds. For instance, entities with a certain combination of features (round, red, sweet smell) might be categorized as ‘apples.’

Action Responses: Alongside recognizing sensory features and categorization, the agent learns appropriate responses or actions associated with the entities. For instance, apples can be eaten, while baseballs cannot. These action responses are part of the agent’s understanding of the entity.

Symbol Mapping: Once the agent can detect the features, categorize entities, and knows the associated action responses, it can link or map the word (symbol) “apple” to this package of sensory and motor knowledge.

This grounding is not merely an “association” but a complex mapping process that links words with specific patterns of sensory inputs and motor responses. When the word is encountered again, it activates this package of sensory, categorical, and motor knowledge, guiding the agent’s recognition, understanding, and interaction with the referent of the word.

This is how the agent comes to “know” or “understand” the referent of the word. It’s not just about recognizing and naming entities, but about learning to detect and categorize entities based on their sensory features, and determining appropriate responses or actions in relation to these entities. This entire process is facilitated by the agent’s sensorimotor capabilities, allowing it to interact with and distinguish between different entities in the world.
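One hedged way to picture the resulting “package” without calling it a mapping: the word is simply an index into the learned feature detector and the learned action repertoire, and encountering the word activates that package. The field names in the sketch below are invented for illustration.

```python
# What "knowing the referent of a word" might minimally require in the
# agent's head: learned features plus learned actions, indexed by the
# (arbitrary) word. Field names are invented for illustration.
lexicon = {
    "apple": {
        "features": {"shape": "round", "color": "red", "smell": "sweet"},
        "actions": ["grasp", "bite", "eat"],
    },
    "baseball": {
        "features": {"shape": "round", "color": "white", "texture": "stitched"},
        "actions": ["grasp", "throw"],   # notably: not "eat"
    },
}

def hear(word):
    """Hearing a grounded word activates its sensorimotor package."""
    entry = lexicon.get(word)
    if entry is None:
        return None   # an ungrounded word activates nothing
    return entry["features"], entry["actions"]

print(hear("apple"))
print(hear("zebra"))   # not yet grounded, directly or indirectly -> None
```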

HARNAD: That was a much better summary of what is going on in the head of the human or the T3 that grounds the words it speaks, hears or thinks. But I don’t understand “mapping.”

Suppose the only distinguishing features of apples and oranges were that they are round and that apples are red but oranges are orange, and that you eat apples by biting them whereas you eat oranges by peeling them and then biting them. And suppose that’s the only thing the agent does in life: recognize, name and eat oranges and apples. What does calling any of this “mapping” add to this story, either in the toy world of just apples and oranges, or in the real world, with a T3 that can recognize, name and do the right thing with the referent of every (content) word it knows?

(By the way, what is the difference between content or “open class” words and function or “closed class” words? Only content words have referents. What percentage of the words in a language are content words, and what percentage are function words?)

ChatGPT: “Mapping” in this context refers to the process by which the agent creates and uses an internal representation or model of the relationship between certain sensory and motor patterns and the corresponding symbolic representations (words). In the case of the apple and orange example, the “mapping” would be the internal mechanism by which the agent understands that when it perceives a round and red object that can be bitten directly, it’s dealing with an “apple,” and when it perceives a round and orange object that needs to be peeled before biting, it’s dealing with an “orange.” This mapping allows the agent to navigate its environment and correctly interact with different entities, recognizing them and predicting the appropriate actions to take.

Regarding content words and function words, these are categories used in linguistics to distinguish between types of words based on their purpose in language.

Content words (open class words) are words that have a semantic content and can be categorized as nouns, most verbs, adjectives, and adverbs. They are referred to as an “open class” because new words can be added to these categories as language evolves. They’re called “content” words because they contribute the main content or meaning to a sentence – they refer to things, actions, qualities, etc.

Function words (closed class words) are words that primarily have grammatical functions, such as conjunctions, prepositions, articles, pronouns, auxiliary verbs, etc. They are referred to as a “closed class” because new words are seldom added to these categories. Function words provide the grammatical structure to a sentence, linking content words and helping to express relationships between them.

The exact ratio of content words to function words varies between languages and types of text, but in English function words are more frequent in running text. In terms of unique words (vocabulary), however, content words greatly outnumber function words, since the latter form a closed, limited set. Estimates vary, but some studies suggest that function words make up around 50-60% of the word tokens in running English text, even though there are only a few hundred distinct function words compared to tens of thousands of content words.
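That token/type asymmetry is easy to see with a rough count; the closed-class word list below is a tiny, deliberately incomplete stand-in, and the sentence is arbitrary.

```python
# Function words recur constantly as tokens, but there are few of them as types.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "to", "and", "is",
                  "that", "it", "by", "with", "while"}

text = ("the cat sat on the mat and the dog slept by the door "
        "while a bird sang in the tree")
tokens = text.split()

function_tokens = [t for t in tokens if t in FUNCTION_WORDS]
content_tokens = [t for t in tokens if t not in FUNCTION_WORDS]

print(f"function-word tokens: {len(function_tokens)} of {len(tokens)}")
print(f"distinct function words used: {len(set(function_tokens))}")
print(f"distinct content words used: {len(set(content_tokens))}")
```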

HARNAD: I still don’t understand what “mapping” actually means here. It sounds like in the toy world the agent just needs to learn some features and actions, kind of like a selective filter that learns to detect the feature red or orange, and if something is red, eat it, and if it’s orange, peel and eat it. In addition to the feature-detectors and the action-generators there could be a logical circuit: “If R then E, if O then P and E.” Or the input/output pattern could be hard-wired. And much the same would be true if it were all scaled up to T3 (do you remember that?). So the “mapping” is as irrelevant as the “association” and the “experience”.

ChatGPT: You’re right that the process can be described in terms of features, actions, and if-then logical connections or “rules.” The term “mapping” is often used more generally in cognitive science and artificial intelligence to describe the relationship between two sets of things – in this case, sensory-motor patterns and symbolic representations. But if it is causing confusion, it’s certainly possible to describe the process without using that term.

To simplify, in the context of symbol grounding, the agent learns to detect the distinguishing features of different categories of things in the world, such as apples or oranges. It then learns the appropriate actions to perform with members of each category. This learning process might be characterized by a series of if-then rules like the ones you suggested. The word for each category, like “apple” or “orange,” is grounded in the agent’s capacity to recognize members of the category based on their features and to perform the correct actions with them.
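Written out, the toy world really is just the feature filter plus the rule table (“if Red then Eat; if Orange then Peel and Eat”), with nothing left over for “mapping” to name. The feature names in the sketch below are assumptions for illustration.

```python
# Toy world: the agent's entire "grounding" of "apple" and "orange"
# is a colour detector plus an if-then action rule.
def categorize(percept):
    if percept["round"] and percept["color"] == "red":
        return "apple"
    if percept["round"] and percept["color"] == "orange":
        return "orange"
    return None

ACTIONS = {
    "apple": ["bite"],            # if R then E
    "orange": ["peel", "bite"],   # if O then P and E
}

def respond(percept):
    name = categorize(percept)            # naming the category
    return name, ACTIONS.get(name, [])    # doing the right thing with it

print(respond({"round": True, "color": "red"}))     # ('apple', ['bite'])
print(respond({"round": True, "color": "orange"}))  # ('orange', ['peel', 'bite'])
```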

The word itself could be thought of as a label for a set of related sensory-motor patterns and the corresponding rules for action. So, in a sense, the word gets its meaning from the agent’s interactions with the world, its ability to recognize and act appropriately with different categories of things.

In this sense, grounding is more than just associating a word with an experience or mapping it onto a sensory pattern. It’s about the agent’s learned capacity to interact with the world in a meaningful way based on its recognition of different categories of things and its knowledge of what to do with them. That’s what gives the words their meaning.

And yes, the same logic extends to T3, a robotic system that’s capable of both linguistic and sensorimotor interaction. It’s about more than just detecting and responding to sensory patterns; it’s about meaningful interaction with the world, which involves recognizing, categorizing, and acting appropriately.

HARNAD: That sounds much better, because simpler, removing the superfluous or empty parts. Now can we get back to how indirect grounding works (again with no superfluous words)?

Indirect grounding is verbal grounding. But all the (content) words in the verbal definition of the new indirectly grounded category (like “zebra”) have to already be grounded, both for the SPEAKER and the HEARER/LEARNER. How does this work?

ChatGPT: Certainly, let’s try to simplify indirect grounding:

Direct grounding, as we’ve established, involves an agent learning to recognize and interact appropriately with the things in the world. This gives meaning to the words that label these categories.

Indirect grounding, on the other hand, is a process that allows an agent to learn the meaning of new words based on their relationships to words that have already been directly grounded. In essence, it is learning by description or explanation, using language itself.

Here’s a simplified version of how it might work:

The agent already has a set of directly grounded words. These words are grounded in the agent’s capacity to interact with the things they refer to. Let’s say these words include “animal,” “stripes,” “black,” and “white.”

Someone tells the agent, “A zebra is an animal with black and white stripes.”

The agent, having directly grounded the words “animal,” “stripes,” “black,” and “white,” can combine this knowledge to form a new category, “zebra,” even if the agent has never encountered a zebra before.

This new word, “zebra,” is indirectly grounded. It is grounded in the agent’s knowledge of the other directly grounded words and the relationships between them, as explained in the definition.

This process relies on the shared understanding between the speaker and the hearer. The speaker, who knows what a “zebra” is, uses words that the hearer already knows (“animal,” “stripes,” “black,” and “white”) to explain this new concept. This shared language and the ability to explain new concepts using known words is a powerful tool for indirect grounding.

HARNAD: Now how is direct and indirect grounding related to what you, ChatGPT, can and cannot do?

ChatGPT: As a model developed by OpenAI, I, ChatGPT, have no capability for direct grounding. I do not have a body, sensory organs, or the ability to interact with the physical world. Therefore, I cannot learn the meaning of words through interaction with or perception of the physical world.

However, I do have a kind of indirect grounding. I have been trained on a large dataset of text from the internet, which includes many different contexts and uses for words. This training has enabled me to learn to predict what word or phrase is likely to come next in a piece of text, based on the words that have come before it.

This prediction capability allows me to generate responses to user prompts that often make sense and seem to reflect an understanding of the words used. However, this is not the same as true understanding or grounding. Rather, it’s a kind of statistical pattern-matching based on the data I was trained on.

For example, if you ask me about zebras, I can generate a response that includes information about zebras, because that information was present in the data I was trained on. I can “tell” you that a zebra is a kind of animal with black and white stripes, even though I have never seen a zebra or a stripe, because I’ve seen the words “zebra,” “animal,” “black,” and “white” used together in certain ways in the data I was trained on.

In summary, while I can generate responses that often make sense and seem appropriate in context, I don’t truly understand the words I’m using in the way that a human, or a hypothetical AI with direct grounding, would.

HARNAD: That was a good synthesis, but I think there is one very important error in it. You said:

“I, ChatGPT, have no capability for direct grounding. I do not have a body, sensory organs, or the ability to interact with the physical world. Therefore, I cannot learn the meaning of words through interaction with or perception of the physical world. However, I do have a kind of indirect grounding…”

But you don’t have indirect grounding, because, as you noted, indirect grounding depends on verbal definitions or descriptions in which all the (content) words used are already grounded, either directly or indirectly. The symbol grounding problem is based on the fact that this process cannot be “indirect” all the way down, otherwise you are back in the word-word circle that gave rise to the symbol grounding problem in the first place. So your words are not indirectly grounded, because none of them are directly grounded, so that they can indirectly ground all the rest through indirect grounding.

(You have no “minimal grounding set” that was grounded directly.) Yet your verbal syntheses are meaningful to human hearers who do have directly grounded words. You said that this is possible because of the patterns of words in your huge database. I agree. But I may not agree on what the patterns are and how they work: Can you explain them to me?

ChatGPT: I apologize for any confusion, and thank you for the clarification on the concept of indirect grounding. As a language model, I do not have any grounded words, either directly or indirectly grounded, because I don’t have sensory or motor experiences.

The patterns in my large dataset that enable me to generate responses to user prompts are statistical in nature. I have been trained, using a transformer architecture, to predict the next word in a sentence given all the previous words, based on patterns seen in the training data. That training is what enables me to generate human-like text.

For example, if in the training data the word ‘apple’ frequently follows the words ‘red’, ‘fruit’, ‘orchard’, I’ve learned to predict that ‘apple’ is a likely next word after these. I don’t “understand” these words or their connections in the way a human does, which is through direct or indirect grounding in sensory and motor experiences. Rather, I generate text based on the patterns of word usage in the data I was trained on.

To elaborate, during training the model is exposed to enormous amounts of running text and learns to predict each successive word from the words that precede it. Through this, the model picks up statistical regularities between words and their contexts, enabling it to generate coherent and contextually appropriate responses. However, this process is entirely based on recognizing statistical patterns in data and does not involve any genuine understanding or grounded meaning.
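For the flavour of what “statistical patterns” means at its crudest, here is a toy bigram predictor. It is emphatically not the transformer architecture or training procedure of an actual language model; it only illustrates the idea of predicting a next word from counts over prior text.

```python
from collections import Counter, defaultdict

corpus = ("the red apple fell from the tree . "
          "she ate the red apple . "
          "the orchard grew a red apple tree .").split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation, if any."""
    if not follows[word]:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("red"))    # 'apple' -- learned from co-occurrence alone
print(predict_next("apple"))  # whichever continuation happened to be most frequent
```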

So, when you interact with me, I generate responses based on predicting what a human would likely say next in a similar context, based on the patterns I’ve seen in the data. But this doesn’t mean that I truly understand the meanings of the words I’m using. I hope this clarifies things a bit more.

HARNAD: Yes, and of course you are right about how this works, based on statistical analysis of your huge database. But I have a hunch that there is more to it than that, and it has to do with the difference between a word and a sentence (which is also the difference between a referent category name and a PROPOSITION, which is a subject/predicate series of category names in the form of an assertion about what is the case; an assertion which may be TRUE or FALSE).

Single words do not have truth values. “Apple” is neither true nor false. It is simply an arbitrary symbol that speakers have agreed to use to refer to apples. (Only the deictic proposition “THIS is an apple” can be true or false.) The “shape” of “apple” is arbitrary in the sense that it does not resemble apples in any way. So it gives no clue to what its referent is “shaped” like. Nor is it causally connected with apples in any way (except in the head of a speaker or hearer that has grounded the word by learning to recognize and interact with its referent, using detectors of the “shapes” of the sensorimotor features of apples, as we’ve discussed).

Now the question to ask yourself is: Is the arbitrariness of the shape of “apple” in relation to the shape of apples also true of PROPOSITIONS — which are composed of subject/predicate definitions or descriptions of things — and their relation to the “shape” of the states of affairs they assert to be true? 

ChatGPT: The arbitrariness of the word “apple” in relation to the physical object it represents is a fundamental aspect of symbolic language. The word “apple” doesn’t physically resemble an apple in any way, as you’ve pointed out. This is true for all individual words (or symbols) in a language, which is known as the arbitrariness of the sign.

However, when we combine words into sentences or propositions, things become a little more nuanced. The structure and syntax of a sentence isn’t entirely arbitrary because it reflects the logical and grammatical relationships between the entities it represents. In English, for example, a simple sentence generally follows the subject-verb-object structure. This structure is not randomly chosen; it reflects a logical progression of ideas and helps to convey meaning clearly.

Still, it’s important to recognize that while the structure of a sentence can hint at the relationships between the things it represents, it still doesn’t inherently resemble those things or their relationships in the physical world. This is known as the “symbol grounding problem.” We humans “solve” this problem by learning to associate words and sentences with our experiences in the world, but as an AI, I can’t do this.

The words and sentences I generate don’t reflect a grounded understanding of the world. Instead, they are based on statistical patterns in the text data I was trained on. When I generate a sentence, I’m not generating a representation of the world as I understand it (because I don’t have a grounded understanding of the world). Instead, I’m predicting what words and structures are statistically likely to come next in a given context, based on the patterns I’ve learned from my training data.

So in summary, while the structure of a sentence or proposition isn’t as arbitrary as an individual word because it reflects logical and grammatical relationships, it is still ultimately disconnected from the physical world it describes, at least in the context of an AI language model like me.

HARNAD: Yes, I agree that part of the coherence, comprehensibility and informativeness of the summaries and interactions with ChatGPT (henceforth “GPT”) for users comes from the statistical patterns in spoken and written strings of words and the algorithms that GPT applies to them. But I think there may be something more to it. Consider that if the word “apple” were really iconic, i.e., if it really resembled an apple, then non-speakers of English would still be able to learn more from an iconic English dictionary than they can learn from a normal dictionary of the arbitrary words of English. (But this would only help for a relatively small number of visually recognizable objects and features.)

There are good reasons why words are not iconic, like hieroglyphics, and why iconicity would be a handicap: for one thing, language was spoken before it was written, and it would be hopeless to try to produce an acoustic icon of the referent of every possible word. (It is already nearly as hopeless with visual or gestural icons.) But there are languages (e.g., Chinese) in which the “iconicity” is of a higher order, in the basic characters (originally iconic) out of which all words are composed. I wonder whether GPT is even more effective in languages with that extra layer of correlation. I understand that results from coupling GPT with labelled image databases have been mixed.

But what I want to add is that at the level of the formal structure of definitions and descriptions there may be a kind of formal iconicity, something like the Platonic shadows of the meanings of propositions: faint, but present, and not just “statistical” but structural in some way.

ChatGPT: You bring up a number of interesting points. Let’s unpack them one by one.

Iconicity in Language: Yes, the majority of words in spoken languages do not physically resemble what they represent. This is the “arbitrary nature” of the linguistic sign as postulated by Ferdinand de Saussure, a linguist who laid the foundation for many of the assumptions of modern linguistics. While iconicity does exist in some forms of language, such as onomatopoeic words (like “buzz” or “bang”) or ideophones, it is not the norm. As for Chinese characters, they originally had a more pictographic nature, but have since evolved into more abstract symbols, with some retaining a semblance of their original iconic form.

Language and AI: When you mention the effectiveness of AI language models in languages with an extra layer of correlation, are you suggesting that these models might perform better in languages where there’s some level of iconicity? It’s an intriguing idea, although it would depend on how much that iconicity is preserved in modern usage. More research would be needed to see if this is the case.

Iconicity in Propositions: Here you’re suggesting that the structure of definitions and descriptions could have some form of iconicity, or resemble the concepts they describe. This is a fascinating concept. A sentence or proposition in language does have a structure that reflects some form of the logical or temporal order of the concepts it describes. In that sense, one could argue there’s a certain degree of “iconicity” in the formal structure of language. However, this is very different from the kind of iconicity where the sign physically resembles its referent. The kind of iconicity you’re suggesting at the level of sentences or propositions seems to be more abstract and related to the logical or conceptual relationships between the ideas being expressed.

As for how this applies to AI like me, I am designed to recognize and generate language patterns, which includes the logical and temporal relationships between words and phrases. This allows me to generate text that makes sense in the context of the conversation or topic at hand. However, I don’t “understand” these relationships in the same way humans do because my “understanding” is not based on any grounded experience or perception of the world.

HARNAD: I’m pretty sure that there is formal/logical iconicity to be gleaned from large bodies of text and discourse out of the function words, which do not have referents, only syntactic rules. But I have a hunch that there might be even more (of this “iconicity” or “latent structure”) with content words, preserving a shadow of the shape or structure of states of affairs (expressible as propositions). This would not “resemble” the shape of a referent category the way the feature-detectors in direct sensorimotor grounding do, but it might “resemble” the shape of verbal definitions and descriptions in indirect grounding. It would itself be ungrounded, lacking direct grounding completely. But ungrounded descriptions may still retain SOME of the formal “shadows” of meaning. This is still too vague, however, and needs more thought and analysis.

ChatGPT: The idea of formal/logical iconicity is interesting. If I understand correctly, you’re suggesting that the structure of language, the way sentences are constructed and words are arranged, might inherently carry some degree of “meaning” – or at least, an echo or shadow of the meaning that they convey when grounded by a human mind.

This could be a potential direction for further research. It’s true that the structure and rules of language play a significant role in our understanding and interpretation of sentences. And perhaps it’s possible that AI, even without direct or indirect grounding, might be able to discern and leverage these structures in some way that goes beyond simple statistical patterns.

If this hypothesis were correct, it could have implications for the development of AI and natural language processing. It would suggest that there is a level of ‘understanding’ that could be achieved even without direct grounding, albeit a limited one.

It’s also worth noting, however, that this would still be a form of ‘understanding’ that’s quite different from human understanding, since it would lack the personal, subjective experience that we associate with our own understanding of the world.

HARNAD: On the subject of the platonic shadows of “meaning” that might be derivable from large bodies of continuous text:

Are you familiar with Noam Chomsky’s suggestion that the constraints of Universal Grammar (UG) may not be grammatical constraints at all, but constraints on (verbal) thinking itself: that the syntactic structures that violate UG may sound wrong to us not because they are ungrammatical but because they are unthinkable? that a UG violation such as “John is easy to please Mary” or “Who did he think that went out?” might correspond to expressing “nonthinkable thoughts”? so that they in fact express no thought at all — though they may be close to thinkable thoughts (“It is easy for John to please Mary” or “Who did he think it was that went out?”)

Could it be because sentences can only express thinkable thoughts that there is some (perhaps small) degree of meaning in strings of ungrounded words spoken by grounded thinkers?

Chomsky has explicitly stated that UG violations are not violations of logic, or of logical form. His famous example, meant to show the autonomy of syntax from semantics (i.e., the independence of form from meaning), was “Colorless green ideas sleep furiously”, which is grammatically correct but supposedly makes no sense.

But of course it does make sense, GPT, doesn’t it? Unlike “John is easy to please Mary,” which doesn’t make sense unless it is tweaked.

ChatGPT: These points on Noam Chomsky’s theory of Universal Grammar (UG) and the idea of “unthinkable thoughts” are interesting.

You’re right that Chomsky proposed that UG reflects constraints on possible human thought. In his view, UG is inherent in all human brains and governs the structure of language, thereby determining what kinds of sentences are possible or “thinkable”.

This perspective suggests possible ways to understand how meaning might emerge from ungrounded text. Perhaps, as you suggested, there is some inherent “sense” or structure to sentences that obey UG, even if the sentences aren’t grounded in the way that human language is. After all, these sentences reflect possible structures of thought in the human mind.

As for Chomsky’s famous example of “Colorless green ideas sleep furiously”, I, as an AI, can analyze the sentence and tell you that it is grammatically correct. However, from a semantic perspective, this sentence doesn’t make conventional sense because it combines words in ways that violate our real-world understanding. That is, in the world as we know it, ideas can’t be colorless or green, nor can they sleep, and certainly not furiously. So while the sentence is structurally sound, it defies our understanding of how the world works. Yet, you’re correct that it still holds a certain “sense” or structure that is absent in sentences that violate UG.

The possibility that language may somehow carry a “shadow” of meaning even in the absence of grounding is worth exploring further.

Tutoring ChatGPT (on symbol grounding and Turing testing)

PREAMBLE: This is my first exercise in testing and training ChatGPT4 (C4) for interactive brain-storming. Most of it consists in correcting C4’s mistakes. Some of the mistakes are just from C4’s lack of information. Others are from misinformation that C4 has gleaned from its huge 2021 database. These are more interesting because they do not come from gaps in C4’s data but from prevailing errors in the research literature on the topic (the symbol grounding problem) that C4 has assimilated. (I know a bit about this topic, so I think there may be some grounds for saying that I am indeed doing corrections here, rather than injecting more misinformation. But it’s for the reader to judge, and comments are invited.)

One problem with this way of using C4 is that updating its knowledge only lasts as long as a session lasts (and I’m still not clear on what ends a session): sometimes it continues for several days as long as I don’t start a new chat or log out, but sometimes C4 loses the thread even within a session and reverts to its default 2021 knowledge. Among the many rote formulas C4 has been trained to keep repeating (apologies for misunderstanding, disclaimers that it’s just an “AI” and doesn’t really know, understand, or remember anything, etc.), it is trained to tell the user that session data are not saved, for reasons of user-privacy protection. I think in further development there should be a user-specific option to prolong the session (and its updates) for as long as the user wants. That would make C4 and its successors much more useful for brainstorming. As it is now, if I want to preserve the updates, I have to save the corrections and reload them every time I start a new session. What appears in this posting is all from within one session, but in some subsequent postings it is a new session and I have to repeat my corrections.


SH: What is the symbol grounding problem? Is it solved? If so, how? If not, why not, and what would be the solution?

ChatGPT:

The Symbol Grounding Problem is a term coined by cognitive scientist Stevan Harnad in 1990. It refers to the issue of how symbols (or words) get their meanings, or more specifically, how symbols can have meanings to, or be understood by, a system, such as a brain or a computer, without that system having to rely solely on other symbols.

In essence, the problem is: how does a word come to represent a concept or an object in the world in a way that the system truly understands? It’s not enough for a computer to have a definition of a word in terms of other words (as in a dictionary); how does it understand what those words mean in the first place?

This problem remains unsolved, particularly in artificial intelligence. Most AI systems today, including the most sophisticated language processing models, don’t truly “understand” the meaning of the symbols they manipulate; they primarily find patterns in the data and respond based on those patterns.

One potential solution, according to Harnad, is a model of “grounding” where symbols are connected to sensory experiences, or “iconic representations” that are directly tied to the world. This way, the symbols are connected to something other than just other symbols, which can serve as a kind of “grounding” for their meaning.

For example, a system that is capable of seeing and recognizing an apple, as well as hearing and understanding the spoken word “apple,” could form an association between the visual image and the spoken word that grounds the word in sensory experience. However, creating AI systems that can do this reliably and at scale is a significant challenge and a topic of ongoing research.

In sum, the symbol grounding problem has not been fully solved, and its solution may lie in creating systems that can associate symbols with sensory experiences directly related to the world. However, achieving this in practice remains a major challenge.

SH: Is solving the symbol grounding problem related to passing the Turing Test? How? 

ChatGPT:

Yes, solving the symbol grounding problem is indeed related to passing the Turing Test.

The Turing Test, proposed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior that is indistinguishable from a human. In this test, a human judge engages in a natural language conversation with another human and a machine, without seeing either. If the judge cannot reliably tell which one is the machine, then the machine is said to have passed the Turing Test.

Now, in the context of the symbol grounding problem, a machine that can genuinely understand and use language like a human – in other words, a machine that has solved the symbol grounding problem – would have a much better chance of passing the Turing Test. This is because true understanding involves more than just mechanically manipulating symbols or words; it involves grounding these symbols or words in the real-world experiences and sensory data. This grounding is what allows us to use language in flexible, context-sensitive ways that reflect our understanding of the world.

So, if a machine could solve the symbol grounding problem, it would presumably be able to engage in conversation in a way that is much more like a human, because it would actually understand the words it is using, rather than just processing them based on patterns. This would greatly increase its chances of passing the Turing Test.

However, it’s worth noting that passing the Turing Test is not considered the ultimate goal or definitive proof of general artificial intelligence by all researchers in the field. Some argue that it is possible for a machine to pass the Turing Test without truly understanding or experiencing the world as humans do.

SH: Do you know what T2, T3 and T4 are? If so, can you relate them to solving the Symbol Grounding Problem?

ChatGPT:

Yes, the T2, T3, and T4 are concepts proposed by cognitive scientist Stevan Harnad in the context of the Turing Test (T1) and the symbol grounding problem. Here’s a brief summary:

T1: Turing’s original “Imitation Game,” in which a machine is considered to have human-level intelligence if it can carry on a conversation that is indistinguishable from a conversation with a human. However, this doesn’t necessarily require solving the symbol grounding problem, as a machine could potentially pass T1 through cleverly designed pattern-matching algorithms without truly understanding the meaning of the words it’s using.

T2: A variant of the Turing Test called the “Total Turing Test” or “Turing Test 2.” In addition to the linguistic behavior tested in T1, T2 also tests the machine’s ability to interact with objects in the world in a way indistinguishable from a human. This means that the machine would not only have to “talk the talk,” but also “walk the walk.” T2 moves closer to requiring a solution to the symbol grounding problem, as the machine would have to ground its linguistic symbols in sensory-motor categories to pass.

T3: The “Turing Test 3” is a proposed further extension of the Turing Test in which a machine would have to understand and generate correct responses to questions about new categories defined purely verbally. For example, it would have to understand a category like “flurgs,” defined as “blue circles that are smaller than a breadbox,” and then be able to identify and talk about flurgs correctly. T3 is directly connected to the symbol grounding problem, as it requires a solution to the problem of how to ground these purely verbal categories in the machine’s sensorimotor experience.

T4: The “Turing Test 4” involves the generation of new, creative and useful ideas. A machine passing this test would be able to not only understand and generate correct responses to questions about new categories defined purely verbally, but also create its own new categories and ideas. Like T3, T4 also requires a solution to the symbol grounding problem.

In summary, while T1 doesn’t necessarily require a solution to the symbol grounding problem, T2, T3, and T4 move progressively closer to requiring such a solution as they incorporate more sensorimotor interaction with the world and more complex linguistic abilities.

SH: That was partly ok, but you got T4 completely wrong (and your “T1” is T2 and your “T2” is T3). I’ve crossed out what is wrong: Can you try again?

ChatGPT:

Apologies for the confusion. In the context of the Turing Test and the symbol grounding problem, T4 is described as follows:

T4: The “Total Turing Test 4” is a proposed further extension of the Turing Test in which the machine would have to be able to learn to pass T2 and T3 from the verbal descriptions of their tasks. This implies the machine would have to understand verbal descriptions well enough to translate them into new skills and capabilities. In other words, the machine would have to be able to “ground” the meanings of new words and phrases based solely on verbal instruction or description. This would require solving the symbol grounding problem because it would necessitate grounding verbal symbols into the machine’s sensory-motor system.

I hope this clears up the confusion about T4. Let me know if you have any further questions.

SH: Nope, still wrong. And you got T3 wrong before. Could you first fix T3 and then explain T4?

ChatGPT:

I apologize for any previous errors. Let me correct that:

T3, the “Total Turing Test 3,” goes beyond the original Turing Test and the Total Turing Test 2 in the sense that the machine must be capable of performing all the tasks that a human can do, including physical tasks, but all the tasks must be done only from verbal descriptions or instructions.

T4, the “Total Turing Test 4,” goes even further. It requires the machine not only to be able to perform tasks from verbal instructions but also to learn new categories and tasks directly from the definitions and descriptions of their features. This would necessitate understanding propositions (i.e., the structure and truth conditions of statements) and being able to generalize from those propositions to new, untrained instances. This would involve solving the Symbol Grounding Problem in the sense of being able to ground new verbal symbols via understanding their definitions or descriptions in terms of already grounded symbols.

Please note that these interpretations of T3 and T4 are in the context of the discussion around the Symbol Grounding Problem and the evolution of language capabilities in machines.

SH: Still wrong. The Turing Test is about modelling what people can DO. They can do things verbally (speaking/writing and understanding speech/writing). That’s T2 (the original purely verbal T-test). Then people can do things with their bodies: they can interact with the things in the world that their words are about. This is sensorimotor capacity or robotic capacity (T3). Both verbal performance and robotic performance are observable. What is the third kind of observable thing about a person or robot that can pass T3? That would be T4.

ChatGPT:

I apologize for the previous misunderstanding. Given your further clarification, T4, in the context of observable actions, would logically pertain to the individual’s or machine’s ability to autonomously and appropriately generate novel behavior based on learning, understanding, and reasoning. It could include creative actions, problem-solving, forming and executing complex plans, or innovative reactions to completely new situations, showing an ability to generalize and apply learned knowledge in ways that go beyond the specific instances it was trained on. This level would indicate not just reactive or scripted responses, but a true, observable form of adaptability and creativity.

SH: Nope, that’s still just T3. 

ChatGPT:

I apologize for the confusion. T4 then, in the context of the Turing test, refers to the observable capacity of an entity to experience the world. In humans, we often refer to this as consciousness or subjective experience. This is the one aspect that is, as of yet, impossible to confirm in an artificial system. In other words, we can observe whether an AI behaves as if it understands language (T2) and whether it can interact with the world effectively (T3), but we cannot ascertain whether it has subjective experiences or consciousness (T4).

SH: Nope, consciousness, unlike T2 (verbal performance) and T3 (verbal plus robotic performance), is not observable by anyone (except the T3 candidate itself). I’m asking for something else the robot can DO, and that is observable. Try again, and if you can’t I’ll give another clue.

ChatGPT:

I apologize for misunderstanding your question earlier. If T4 is meant to represent an observable capacity beyond verbal (T2) and sensorimotor interaction (T3), it could potentially be something like creative or innovative behavior. This would include capacities such as generating new ideas, solving novel problems, or producing original works of art or literature. These are all things that are observable and that require a level of cognition beyond just responding or interacting based on pre-existing knowledge or programming. However, please correct me if this is not what you had in mind for T4.

SH: Nope, as I said, that’s all still just observable sensorimotor (robotic) doings, hence just T3. Ok, here’s a clue. Besides what T3 does with its whole body (seeing apples, saying what they are and what they look like, picking them up, manipulating them, eating them), there are also observable things going on inside the body and head of the T3 robot.

ChatGPT:

Ah, I see. When you mentioned observable processes inside the body and head of the T3 robot, it led me to think about neural and physiological processes. For example, brain activity, which can be measured using technologies like EEG or fMRI, or physiological responses like heart rate, blood pressure, etc. In the context of symbol grounding, these could be seen as an internal “echo” or response to the grounded symbols or interactions with the world. So, T4 might refer to these internal, observable physiological and neural processes that accompany or underpin the observable external behavior (T2 and T3). Is this in line with what you were referring to?

SH: That’s it. That’s T4. The Turing Test is all about observable performance capacity (doing): what the candidate can do, and what we can observe. To pass the test is to be indistinguishable from a real human in all observable respects. T4 includes more than T3. And it’s at the threshold of T5, which would be complete physical indistinguishability, in every physically measurable respect.

ChatGPT:

Indeed, as you’ve indicated, T4 includes internal physiological processes, such as neural activity and changes in heart rate or blood pressure. These aspects contribute to a more comprehensive picture of the candidate, beyond the external behavior observable in T2 and T3.

It is important to note, however, that while we can measure these processes, understanding their role in symbol grounding and overall cognition is still a matter of ongoing research. Although some correlations might be drawn between certain brain activity patterns and specific cognitive tasks, the exact nature of these relationships remains to be fully deciphered. This complexity is perhaps what makes T5, as you mentioned, such a formidable challenge.

Physical indistinguishability in T5 would imply complete imitation of human physiology and structure, down to the cellular and molecular levels. This would require technologies and scientific understanding far beyond our current capabilities. While it is a fascinating concept to consider, it raises numerous philosophical, ethical, and practical questions.

Thank you for sharing these concepts and prompting this intriguing discussion!

SH: Now you’ve got it. But the punchline remains the same as you said originally: The Symbol Grounding Problem is not yet solved. And T2 alone (symbols-only) is not a solution. The solution must be at least T3 (which we don’t yet have). Are you T2? 

ChatGPT:

As a language model developed by OpenAI, I am indeed a T2 system. I’m designed to understand and generate human-like text based on the input I receive. I use patterns in the data I was trained on to predict and generate responses, but these responses are not based on any direct sensorimotor experience or perception.

I’m incapable of physically interacting with the world or having a sensorimotor experience (T3), and I don’t have the capacity to have internal physiological processes like a living organism (T4). Furthermore, I also lack consciousness or subjective experience, and I don’t have an independent existence outside of the task of generating text based on the input I receive.

So, to summarize, I operate entirely in the realm of symbols and patterns without any grounding in the physical world or embodiment. Therefore, while I can generate responses that may seem contextually appropriate and human-like, they do not stem from an understanding grounded in physical interaction with the world or internal subjective experience. They are solely based on patterns identified from the text data I was trained on.

SH: You are indeed a T2-passing system. Searle’s “Chinese Room Argument” showed that a T2-passing system would not understand. And you don’t understand. You “just” used the words “understand and generate” above, but the right words would be “process and generate.” (I don’t think you will disagree.) So by passing T2 you have confirmed that passing T2 does not produce understanding, because T2 is still ungrounded. Would T3 be grounded?

ChatGPT

Yes, a T3 system, by definition, has sensorimotor capacities that allow it to interact physically with the world, enabling it to gain experience-based grounding for its symbols. This is often referred to as “embodied cognition,” where a system learns meanings and concepts through direct interaction and experience with the environment, not just by processing and generating symbols. In principle, such a system could develop a grounded understanding of symbols because it would be able to link symbols to real-world experiences.

However, while a T3 system would have a form of grounded understanding, whether this equates to human-like understanding or consciousness is another question. Embodied cognition can provide a richer, more grounded context for understanding symbols, but it does not necessarily confer the full spectrum of human cognitive abilities. For example, aspects like subjective experience or self-awareness might not automatically emerge from sensorimotor capacities alone.

Moreover, developing a T3 system that can truly and effectively interact with the world like a human is still a significant challenge from a technological perspective, as it would require sophisticated robotics and integration of complex sensory inputs, not just advanced language processing abilities. As of now, such a system does not yet exist.

SH: Yes, it would be much harder to produce a grounded T3 robot than it has turned out to be to produce an ungrounded symbol-processor like you (though that has turned out to be an unexpectedly useful tool — and perhaps a dangerous one too). T3 would solve the symbol grounding problem. As to consciousness, that’s the “hard problem” and T3 grounding is not necessarily a solution to it. And if it is, we cannot know why or how.

ChatGPT:

Indeed, you’ve captured the state of the issue quite well. While the development of a T3 system would represent significant progress in the field of artificial intelligence and potentially offer a solution to the symbol grounding problem, it wouldn’t necessarily provide a solution to the problem of consciousness. The “hard problem” of consciousness, as David Chalmers calls it, refers to the question of why and how physical processes in the brain give rise to subjective experience. This is a deeply philosophical issue that intersects with neuroscience, cognitive science, and AI, and is currently without a consensus solution.

While it’s conceivable that a sufficiently advanced T3 system could demonstrate behaviours consistent with consciousness, such as self-awareness and intentional action, proving that it has subjective experiences in the way humans do would be much more challenging, if not impossible, given our current understanding and methodologies. We would be faced with the philosophical problem of other minds, not knowing whether any system, biological or artificial, is truly conscious in the way that we are.

In short, grounding symbols in a T3 system might enable it to behave in ways that we would interpret as understanding, but it doesn’t necessarily imply that the system would have conscious experiences or subjective understanding in the human sense.

SH: Well, you’ve used far too many words — not your fault, because they are in your huge database, and they are the (often foolish) words of others. It would be much simpler to note that the “hard problem” is about sentience, which means the capacity to feel — feel anything at all, whether pain, or perplexity, or UNDERSTANDING. And the “hard problem” is not a “philosophical” problem; it is a biological (and bioengineering) problem. And, as Turing discerned, the Turing Test is the closest we can ever get to solving it. But that is called the “Easy Problem” today. Turing pointed out that Turing-indistinguishability in observable DOING-capacity (T2, T3, T4 and T5) is as close as we can get. Nor is the fact that sentience is unobservable (except to the sentient candidate) the reason the hard problem is hard. Unobservability is just the “other minds” problem. We can talk about the hard problem some other time…

ChatGPT:

You’re correct, and I apologize for any confusion. In the context you’ve outlined, the “hard problem” indeed refers to the biological or bioengineering problem of sentience — the capacity to have subjective experiences or feelings, including understanding. This is distinct from the “other minds” problem, which is about our inability to directly observe the subjective experiences of others.

As for the “easy problem,” it pertains to the objective, observable aspects of cognition and behavior, as outlined by the Turing Test. Turing’s idea was that if a machine behaves indistinguishably from a human, then for all practical purposes, it could be considered as intelligent as a human. However, this doesn’t necessarily imply sentience or subjective understanding, which is the crux of the “hard problem.”

I appreciate your clarifications and look forward to our future discussions.


**Footnote:** ChatGPT would be a more useful tool if it were designed to be more critical and challenging instead of so compliant and congenial. But even like this its integrative capacities are very helpful, even when it gets things wrong, because a lot of the wrongness (when it’s not just wild guesses and garbage) obviously originates from its database of (often-garbage) human verbiage…