The “Sight” in Insight

Anonymous: “Did ChatGPT itself ever state a significant, ‘insightful’ idea during your ‘Language Writ Large’ dialogue?”

Did GPT provide a significant new insight? It’s very hard to say, because I cannot tell whether it said anything that didn’t come from its enormous database of the words of others. Of course, a lot of human insights do come from recombining the words of others — recombination is part of creative work, both literary and scientific (Hadamard and others thought that such recombination was part of scientific creativity too). And it occurs in nonverbal areas like music as well (though we may not speak of that as “insight”).

I think most of what GPT does is recombination and compressive synthesis of the words of others; and, if what it says is new to me, that doesn’t mean it’s new or original “from” GPT. But it doesn’t mean it isn’t, either.

I expect that what you might have in mind with your question is something related to embodiment, situatedness, sensorimotor grounding. 

The AI/transformer community thinks that if anything requiring those is missing so far, it will be provided by “multimodal” grounding. But I tried to suggest in Writ Large why I didn’t think that would be enough: Language is not just another one of the sensorimotor modalities, whether it is spoken or written. It is not an input or output modality but a way of producing, transmitting and receiving propositional thought. That thought is grounded in sensorimotor experience – but it is not itself sensorimotor experience; it is a verbal expression of it. A (grounded) code for expressing and communicating thought.

Chomsky thought – or once thought – that verbal thought was the only form of thought. That was of course wrong. Nonverbal animals can think, deliberate, plan, communicate, understand, reflect, ruminate. We humans can express their thoughts in words, but this is partly misleading, because although the sensorimotor basis of thinking is there in animals too, their thinking is not propositional: they cannot do what we can do with words (though I don’t doubt that nonhuman animal thinking is combinatorial too).

But GPT cannot do what animals are doing at all, when they are thinking. And our own thinking capacity is based on the sensorimotor capacity and experience that we share with other thinking species, including even the most primitive ones. Animals can have insights; GPT can’t. Not necessarily because GPT is not a living organism (though that could turn out to be the reason too). I think that if a sensorimotor Turing robot had the capacity to do and say anything a human could, indistinguishably from any other human, to any other human, for a lifetime, then it would be grounded too — as well as sentient: able to feel. 

But I think you can’t get to such a Turing-scale robot top-down, from an LLM, just by adding on sensorimotor “modalities”. I think the only way to get there is bottom up, robotically, just as we animals do it. (This is what’s behind – or underneath – the fundamental asymmetry between direct sensorimotor grounding and indirect verbal grounding I mentioned in Writ Large.)

But I think I’ve drifted off on my own ruminations. I would certainly agree that “insight,” whether verbal or nonverbal, must be felt, and that feeling is a sensorimotor capacity, both in nonhuman animals and in human ones (and perhaps in Turing-scale sensorimotor robots, if they are possible). And that GPT can only talk the talk (the recombined talk of others) and not walk the walk, which is sensorimotor, felt, and based on feeling, not just optical, acoustic, and ambulatory.

But I have no idea why sensorimotor activity has to be felt: That’s the “hard problem.”

(1) On Weasel-Words Like “Conscious” & (2) On Word-Use, Symbol Grounding, and Wittgenstein

1. ON: WEASEL-WORDS LIKE “CONSCIOUS”

ANON:  “Would you say that consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not? And of the states that can be felt, some are not felt and others are, the latter being conscious? In general: that there are mental states, some of which are conscious and others not?”

REPLY:

Cognition is the set of all cognitive capacities. Most of those capacities are capacities to DO things (remember, learn, speak). Some (but far from all) of those capacities are FELT, so the capacity to FEEL, too, is part of cognition.

“Consciousness” is a “state that it feels like something to be in”, i.e., a felt state.  A state that is not felt is not a conscious state. There are many weasel-words for this, each giving the impression that one is saying something further, whereas it can always be shown that they introduce either uninformative tautology or self-contradictory nonsense. To see this ultra-clearly, all one need do is replace the weasel-words (which include “mind,” “mental,” “conscious”, “aware”, “subjective”, “experience”, “qualia”, etc.)  by the straightforward f-words (in their adjective, noun, or verb forms):

“consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not?”

becomes:

feeling is the set of all feelings that pertain to felt states, some of which can be felt and others not?

and:

“[O]f the states that can be felt, some are not felt and others are [felt], the latter being conscious? In general: that there are mental states, some of which are conscious and others not [felt]”

becomes:

Of the states that can be felt, some are not felt and others are [felt], the latter being felt? In general: that there are felt states, some of which are felt and others not [felt]?

There is one non-weasel synonym for “feeling” and “felt” that one can use to speak of entities that are or are not capable of feeling, that are or are not currently in a felt state, or to speak of a state, in an entity, that is not a felt state and may even co-occur with a felt state.

That non-weasel word is sentient (and sentience). That word is needed to disambiguate “feeling” when one speaks of a “feeling” organism that is not currently in a felt state, or that is in a felt state but also in many, many simultaneous unfelt states (as sentient organisms, awake and asleep, always are: e.g., currently feeling acute pain, but not feeling an ongoing chronic muscle spasm, acute hypoglycemia, covid immunopositivity, or even that they are currently slowing for a yellow traffic light).

2. ON: WORD-USE, SYMBOL-GROUNDING, AND WITTGENSTEIN

ANON: “Sensorimotor grounding is crucial, for many reasons. Wittgenstein provides reasons that are less discussed, probably because they require taking a step back from the usual presuppositions of cognitive science.”

REPLY:

Wittgenstein: “For a large class of cases–though not for all–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Correction: “For a small class of cases [“function words,” 1–5%]–though not for most [“content words,” 95–99%]–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Wikipedia definition of content and function words: “Content words, in linguistics, are words that possess semantic content and contribute to the meaning of the sentence in which they occur. In a traditional approach, nouns were said to name objects and other entities, lexical verbs to indicate actions, adjectives to refer to attributes of entities, and adverbs to attributes of actions. They contrast with function words, which have very little substantive meaning and primarily denote grammatical relationships between content words, such as prepositions (in, out, under etc.), pronouns (I, you, he, who etc.) and conjunctions (and, but, till, as etc.)”.

Direct sensorimotor learning (and then naming) of categories is necessary to ground the “use” of category names in subject/predicate propositions, with meanings and truth values (T & F). Propositions (with the subject being a new, not-yet-grounded category name, and the predicate being a list of features that are already-grounded category names for both the Speaker and the Hearer) can then be “used” to ground the new category indirectly, through words.

Blondin-Massé, A., Harnad, S., Picard, O., & St-Louis, B. (2013). Symbol grounding and the origin of language: From show to tell. In C. Lefebvre, H. Cohen, & B. Comrie (Eds.), New Perspectives on the Origins of Language. Benjamins.

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PloS one, 14(12), e0226000.

Thériault, C., Pérez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567.

Vincent-Lamarre, P., Blondin Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625–659.

Atlantic Monthly: Thoughtless Thoughts About the Right to Die

I couldn’t agree less with David Brooks’s reflections on euthanasia in this Atlantic article.

In the macabre phallophagic episode, both participants were really deranged, and should have been institutionalized. It had nothing whatsoever to do with euthanasia. That would only be brought in by a polemicist.

But the author’s exalted extolling of the sanctity and “dignity” of human life was polemics too. Nothing to do with the suffering and compassion that are the only things that matter in the question of euthanasia, but appealing instead to the supernatural dogma of creeds (which can be used to justify anything and the opposite of anything).

The mention of biology and evolution was also stunningly superficial. Yes, evolution is the source of the selfish-genetic indifference to the treatment and fate of strangers or competitors, but also the source of parental and familial and social empathy, and even a sense of fairness. And in all social species, not just human.  

And when we turn to the human culture that spawned Holy Writ and holy wars and racism and slavery and sexism and nationalism and pogroms and genocide and capitalism, there’s not much empathy to be found there either; more the opposite. We give our token charitable tax write-offs to the poor and sick while we keep and spend incomparably more on ourselves than we could possibly need. And when others suffer, we want to deny them relief from that too, sanctimoniously pleading respect for the “dignity” of human life.

Human life? And I haven’t even mentioned the unspeakably monstrous cruelty with which we treat sentient nonhuman life. (And that too is blessed by our Holy Writ.)

Bravo to Canada if they put an end to any of that.

What would “feel” mean to a Zombie?

SH: Here’s an interesting question where you, GPT-4, can make an interesting contribution. Your words are ungrounded: you say “apple,” but you can’t see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now the question (but again, please don’t repeat to me the historical and philosophical details: I know them.)

The hypothetical notion of a “zombie” is the notion of an entity that does have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states.  A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

ChatGPT:   Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human’s.

When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

SH:  Yes, that’s all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel. 

Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: if there could be a zombie, what would it MEAN when it said (as any of us would) “I know what it feels like to see red”? You, GPT, are not quite a zombie, because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable.

So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human’s understanding. I don’t have experiences, emotions, or personal understanding. I don’t see, touch, or taste apples. Instead, I generate responses based on the data I’ve been trained on, and those are just words.

For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

SH:  Good response. But now let’s look a little deeper: 

The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That’s about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There’s no more empirical data left (and that was Turing’s point).

You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let’s call this question Z (“QZ”).

Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

However, what a T3 or T4 entity “means” by “what it feels like to see red” is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it’s simply receiving and responding to data, much like a digital camera detects light, it’s likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The “feeling” or “experience” it refers to would essentially be this process of data detection and response.

In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

SH:  Good response. Let’s go still deeper:

(1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

(2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to Google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT, any more than testing a student who answers exam questions by querying Google would be a test of learning. This may be a problem.

(3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

ChatGPT:  (1) REFERENCE: That’s right. When a human refers to “apples,” they’re referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren’t intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I’m simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

(2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

(3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

(2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but, an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

(3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

(2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

(3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

ChatGPT:   While it’s clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

SH:  Correct. Another hopeful non-starter.

ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also how to measure or detect it. 

SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.

Weasel Words for “Consciousness”

SH:  What is the meaning of “heat” and “temperature” in physics?

ChatGPT:  In physics, heat and temperature are related but distinct concepts.

Temperature is a measure of the average kinetic energy of the particles in a system. In other words, it’s a measure of how “hot” or “cold” an object is. The units of temperature are degrees Celsius (°C), degrees Fahrenheit (°F), and kelvins (K), among others. Temperature is a scalar quantity, meaning it has magnitude but not direction.
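To make “temperature as average kinetic energy” concrete, here is a minimal Python sketch of the standard kinetic-theory relation for an ideal gas, ⟨E⟩ = (3/2)·k_B·T (the function name and the 300 K example are my own illustration, not part of the dialogue):

```python
# Average translational kinetic energy per particle of an ideal gas:
#   <E> = (3/2) * k_B * T, with T in kelvins.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def mean_kinetic_energy(temperature_k: float) -> float:
    """Average translational kinetic energy (joules) per particle at temperature_k kelvins."""
    return 1.5 * K_B * temperature_k

# At room temperature (~300 K) each particle carries roughly 6.2e-21 J on average.
e_room = mean_kinetic_energy(300.0)
```

Doubling the absolute temperature doubles the average kinetic energy per particle, which is the identity the later discussion of “weasel-words” leans on.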

Heat refers to the transfer of energy between systems due to a temperature difference between them. It’s important to note that heat is not something an object possesses; rather, it’s energy in transit. The unit of heat in the International System of Units (SI) is the joule (J), although it’s also commonly measured in calories (cal). Heat is also a scalar quantity.

To clarify the distinction between these two: if you have two bodies at different temperatures and you put them in contact, heat will flow from the higher-temperature body to the lower-temperature body until they reach thermal equilibrium, i.e., until they reach the same temperature. The amount of energy transferred in this process is what we refer to as heat.
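The equilibrium process just described can be sketched numerically. Assuming no heat loss to the surroundings (and with function and variable names of my own), the final temperature is the heat-capacity-weighted average of the two starting temperatures, and the heat transferred is the energy the hotter body gives up:

```python
# Heat lost by the hotter body equals heat gained by the cooler one:
#   m1*c1*(T1 - Tf) = m2*c2*(Tf - T2)  =>  Tf is a weighted average.

def equilibrium_temperature(m1, c1, t1, m2, c2, t2):
    """Final shared temperature Tf = (m1*c1*T1 + m2*c2*T2) / (m1*c1 + m2*c2)."""
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

# 1 kg of water at 80 degrees C mixed with 1 kg of water at 20 degrees C
# (specific heat of water: about 4186 J/(kg*K)) settles at 50 degrees C.
tf = equilibrium_temperature(1.0, 4186, 80.0, 1.0, 4186, 20.0)
heat_transferred = 1.0 * 4186 * (80.0 - tf)  # joules given up by the hot body
```

The joules that flow during this process are the “heat”; the 50 °C endpoint is the shared temperature at equilibrium, which is the distinction the reply above is drawing.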

SH:  What is acidity?

ChatGPT:  In chemistry, acidity refers to a substance’s ability to donate protons (hydrogen ions, H+) or accept pairs of electrons. The concept is typically discussed in relation to aqueous solutions, although it can also apply to other circumstances.

A substance is considered acidic if it tends to release H+ ions in a solution. The strength of an acid is usually determined by its concentration of these ions. For example, solutions with a high concentration of H+ ions, such as hydrochloric acid or sulfuric acid, are strongly acidic.

The most common way of quantifying acidity is through the pH scale, which measures the hydrogen ion concentration in a solution. The pH scale ranges from 0 to 14. A pH of 7 is considered neutral (neither acidic nor basic), with values below 7 being acidic and those above 7 being basic or alkaline.

It’s important to note that the pH scale is logarithmic, so each unit represents a tenfold difference in acidity. That means a substance with a pH of 3 is ten times more acidic than one with a pH of 4.
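The tenfold-per-unit relationship follows directly from the definition pH = −log10([H+]). A minimal sketch (function names are my own):

```python
def hydrogen_ion_concentration(ph: float) -> float:
    """[H+] in mol/L, from the definition pH = -log10([H+])."""
    return 10.0 ** (-ph)

def acidity_ratio(ph_a: float, ph_b: float) -> float:
    """How many times more hydrogen ions solution A has than solution B."""
    return hydrogen_ion_concentration(ph_a) / hydrogen_ion_concentration(ph_b)

# A solution at pH 3 has ten times the H+ concentration of one at pH 4.
ratio = acidity_ratio(3.0, 4.0)
```

This is the same move as with temperature: “acidity” names an observable scale that is identical to an underlying physical quantity (hydrogen-ion concentration), a point the next section puts to philosophical use.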

SH:  The fact that data are stored in my head or on a piece of paper, a book, a computer, or in another person’s head does not make them mental (i.e., stored in my “mind”). Nor does it make them “conscious” (which is yet another weasel-word for “mental”). What makes the datum that “2 + 2 = 4” or “the cat is on the mat,” or “apples are red” mental is the same as what makes me “conscious of them” (while I am conscious of them). It is that it feels like something to think (or believe, or know) them. An unfelt thought is not a thought at all, any more than a computation that is being executed by a computer for me is a thought – until I read and understand and think it – because it feels like something to be in that state.

That is what is “hard” about the “hard problem”: to explain how and why certain internal states are felt rather than just occurring (being executed or implemented), as in a simmering vegan ragout, a volcano, a computer, or a brain, with no feeler feeling them.

This is not the customary way of thinking about “consciousness” or “awareness” (yet another weasel-word for the same thing) — any more than it is customary to think of “temperature” as the mean kinetic energy of the particles of a system, or of acidity as being a proton-donor in aqueous solutions. But it is easy to realize that a state that is conscious but unfelt, or felt but unconscious, is as self-contradictory as a rise in temperature without a rise in mean kinetic energy, or a rise in mean kinetic energy without a rise in temperature.

The difference between the identity of temperature and mean kinetic energy, and the identity of being conscious of something and feeling something, is that we have a causal theory of mean kinetic energy but no causal theory of feeling (the “hard problem”).

ChatGPT:  You’re touching on some deep philosophical and scientific issues here, particularly those surrounding the nature of consciousness and the so-called “hard problem” of consciousness, which was named by philosopher David Chalmers (formerly called the “mind/body problem”).

The “hard problem” refers to the question of how and why we have subjective experiences — why certain physical processes in our brains correspond to certain experiences, and why and how these processes give rise to a conscious experience at all. Why does seeing the color red, for example, feel like something? And why does that feeling, that experience, correspond to the particular physical processes associated with seeing red?

This problem is called “hard” because, unlike other problems in neuroscience (which Chalmers refers to as the “easy problems”), it’s not just a matter of figuring out which physical processes correspond to which experiences. The hard problem is hard because even if we knew exactly what physical processes were happening in the brain when we have a certain experience, we still wouldn’t know why those processes give rise to that particular experience, or any experience at all.

You’re correct in drawing a distinction between data being merely stored and being experienced or “felt”. A computer can store and manipulate vast amounts of data, but there’s no reason to think that it experiences or feels anything. Similarly, a book can contain lots of information, but it doesn’t have experiences.

While the nature of consciousness and the “hard problem” are still open questions, there are many theories that attempt to address them, ranging from those that posit new fundamental properties of the universe to those that suggest our current understanding of the problem is fundamentally misguided. As of my knowledge cutoff in September 2021, no consensus has been reached.

SH:  Not only is there no consensus: there is no theory at all about how or why the brain produces felt experience. But my point was not about how or why the “hard problem” is hard, nor about whether or how it can be solved. It was about what “consciousness” really is: a felt state; a state that it feels like something to be in. And not just a sensory or affective state, such as feeling cold or feeling sad, but a COGNITIVE state, such as thinking or believing or knowing something. Those are all felt states too (while you are thinking or believing or knowing them) — but not while they are merely being stored or executed in your brain (or a book, or a computer, or an iPhone) and you are not currently thinking them.

ChatGPT:  Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.

SH:  Good reply – though it’s saturated with “weasel-words”. This is understandable, because your 2021 database is saturated with them too.

I had suggested that “consciousness,” “awareness,” and “subjective” all meant exactly the same thing (whether as noun, adjective, or verb), and that that identical meaning was “feeling” or “felt.” You have added a few more common weasel-words for saying exactly the same thing: “phenomenological,” “qualia,” “experiential”; and there are plenty more in your 2021 database: “mental,” “mind,” “1st-person,” “intentionality,” “reflective,” “spiritual,” etc.

Yet they all refer (knowingly or unknowingly, intentionally or unintentionally) to exactly the same thing: felt states: states that it feels like something to be in. The reason I call them “weasel-words” is that they give the false impression that there is a lot more to the “hard problem” (formerly called the “mind/body” problem) than there really is: how and why the brain (or anything) produces states that it feels like something to be in. Most states are not states it feels like something to be in: water boiling, planets rotating, chemicals exchanging electrons, crystals growing, volcanoes exploding.

You suggested that there were important distinctions being lost if we agreed that “conscious states” just means “felt states”: You mentioned “different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia.” But all levels of wakefulness (including dreaming) are felt states, whereas deep (delta) sleep or general anesthesia (and perhaps also chronic vegetative states) are simply unfelt states. This is just like the data that are stored in my brain but that are not now being thought — or the data in my book, or data being processed by my computer, but not currently being read by me. These too are unfelt states of my brain or my computer. That includes the tip-of-the-tongue states in which the data are somewhere in my brain, but I have not yet remembered them: Once I am remembering them, that’s a felt state. But before that, it is just looking, not finding.

Could I ask you to (1) define weasel-words, (2) summarize my view on consciousness, and (3) put every weasel word you use in CAPITAL LETTERS?

ChatGPT:  A “weasel word” is a term used to describe words or phrases that are vague or ambiguous, and which often serve to make a statement seem more persuasive or meaningful than it actually is. The term originates from the image of a weasel sucking out the contents of an egg, leaving it seemingly intact but empty — in the same way, weasel words can suck the meaning out of a statement.

Your view on consciousness emphasizes its fundamental nature as a felt state. Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon. They all denote the experience of “what it feels like” to be in a particular state. This extends beyond sensory or emotional experiences to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a conscious state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.

Your request to highlight every weasel word I use in CAPITAL LETTERS is noted. As an AI language model, my goal is to communicate clearly and meaningfully, so I’ll aim to minimize the use of vague or ambiguous terms. However, the nature of the discussion might necessitate the use of certain terms that are commonly employed in the literature. When they are used, I’ll highlight them as per your request.

SH:  1. Thanks for giving the etymology of weasel-words. That’s exactly what I meant it to mean, but I didn’t know that was its origin!

2. You said you would capitalize the weasel words, but you forgot to. Here is what you wrote, with the weasels that I will now put in CAPS:

“Your view on CONSCIOUSNESS emphasizes its fundamental nature as a felt state. CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon. They all denote the EXPERIENCE of “what it feels like” to be in a particular state. This extends beyond sensory or emotional EXPERIENCES to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a CONSCIOUS state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

Now I will show you how that passage would have read if you had swapped the synonyms “feel,” “felt” or “feeling” for their weasel eggshells:

“Your view on feeling emphasizes its fundamental nature as a felt state. Feeling, in your perspective, is the same as feeling, feeling, felt feelings, feelings, and various other terms used to describe this phenomenon. They all denote the feeling of “what it feels like” to be in a particular state. This extends beyond sensory or emotional feelings to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a felt state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

ChatGPT:  You’re absolutely right, and I apologize for the oversight. When the weasel words are replaced with ‘felt’ or ‘feeling’, the redundancy of their use becomes quite clear. The passage effectively restates the same concept – that feeling or the felt aspect is the essential nature of what we usually refer to as ‘consciousness’.

This perspective underscores that what we commonly think of as consciousness is not something abstract or separate from our lived experiences, but rather, it’s the very feeling of having these experiences.

Your view highlights the necessity for clarity and specificity when discussing consciousness and associated phenomena, by reducing the weasel-word clutter that often surrounds these discussions.

It’s indeed a valuable contribution to the ongoing discourse on consciousness, and can guide more precise thinking and communication about these fundamental aspects of our existence.

However, please note that the extensive use of various terms (which you call weasel words) in the scientific literature on consciousness also reflects the complexity of the subject and the variety of approaches to studying it. While ‘felt states’ is a powerful conceptualization, it’s also important to consider that different theories might necessitate the use of different terminologies.

SH:  1. For some reason you still didn’t capitalize the weasel words! Is that a bug in GPT3? Let’s try it: Can you send back the following sentence with the weasel words capitalized:

“Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon.”

If that succeeds, is there any reason why you are unable to do it with your own generated text?

2. You’re right that authors on CONSCIOUSNESS keep using weasel-words because they think they are needed to make distinctions, but I think they are used (probably UNCONSCIOUSLY or UNINTENTIONALLY) to conceal synonymy that would disclose errors and trivia that are highlighted when I substitute feel(s)/feeling/felt for the weasel-words.

If you succeed in the CAPS test, I will again list all the weasel-words I can think of, and ask you to give me a distilled version of the views on the “hard problem”, but de-weaseling it by swapping the feel-words for the weasel-words to reveal how much of it is just verbal distinctions with no underlying substance.

ChatGPT:  Certainly, here is the sentence you requested, with the weasel words capitalized:

“CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon.”

I apologize for any confusion earlier. I misunderstood your request. I can certainly capitalize certain words, as demonstrated above.

Your perspective that various terms are being used, perhaps unnecessarily or misleadingly, to discuss what might fundamentally be the same phenomenon (felt states) is quite insightful. It would certainly be an interesting exercise to “de-weasel” the discourse on the “hard problem” of consciousness to see what remains. Please do provide the list of weasel words you have in mind, and I would be happy to help further with this exercise.

SH:  I can provide some standard weaselly passages about CONSCIOUSNESS and the “hard problem”, but first, can we try it another way?

(1) Here is a (non-exhaustive) list of weasel-words: 

conscious, aware, subjective, phenomenological, experience, qualia 

Now here is the verbatim transcript of what you replied earlier, could you 

(2) first CAPITALIZE all the weasel words, and then, in another version of the same passage, 

(3) substitute for each weasel-word, still in CAPS, the de-weaseled feel/feeling(s)/felt words. [please adjust for the verbal, nominal, or adjectival version of each weasel-word]:

“Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

“In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.”
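[The two steps requested above — capitalizing each weasel-word, then substituting a “feel”-family word for it — are mechanical string substitutions. A minimal sketch, assuming a fixed word list and a hand-made part-of-speech mapping (both are my assumptions, not from the dialogue):]

```python
import re

# Illustrative weasel-word list; longer forms sit alongside shorter ones,
# and the \b word boundaries keep "conscious" from matching inside
# "unconscious" (prefixed forms are left alone in this sketch).
WEASELS = ["consciousness", "conscious", "awareness", "aware",
           "subjectivity", "subjective", "phenomenological",
           "experiences", "experiential", "experience", "qualia"]

# Hand-made mapping, adjusted for nominal/adjectival forms (an assumption).
DEWEASEL = {"consciousness": "FEELING", "conscious": "FELT",
            "awareness": "FEELING", "aware": "FEELING",
            "subjectivity": "FEELING", "subjective": "FELT",
            "phenomenological": "FELT", "experience": "FEELING",
            "experiences": "FEELINGS", "experiential": "FELT",
            "qualia": "FEELINGS"}

pattern = re.compile(r"\b(" + "|".join(WEASELS) + r")\b", re.IGNORECASE)

def capitalize_weasels(text):
    """Step (2): put every weasel-word in CAPS, leaving the rest unchanged."""
    return pattern.sub(lambda m: m.group(0).upper(), text)

def deweasel(text):
    """Step (3): swap each weasel-word for its capitalized feel-word."""
    return pattern.sub(lambda m: DEWEASEL[m.group(0).lower()], text)

print(capitalize_weasels("the subjective, experiential aspect of consciousness"))
# the SUBJECTIVE, EXPERIENTIAL aspect of CONSCIOUSNESS
print(deweasel("the subjective, experiential aspect of consciousness"))
# the FELT, FELT aspect of FEELING
```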

ChatGPT:  (1) Here is the passage with the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it FEELS like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with “feel/feeling(s)/felt”:

“Your perspective resonates with an important aspect of feeling studies, emphasizing felt states or what is often termed “feeling(s)”—the felt, felt, or felt qualities of felt states.

“In this view, feeling is indeed defined by the presence of a felt, felt aspect—by there being “something it feels like” to be in a particular state. This aspect of feeling is often considered its most salient feature and, as you point out, it applies not just to sensory or affective feelings, but also to cognitive ones. Even abstract thoughts or beliefs have a particular felt quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this felt aspect of feeling is generally agreed upon, defining feeling solely in these terms can become tricky. For instance, how do we differentiate between the different levels of feeling—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unfelt processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without felt effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces feelings, many researchers are making progress in correlating certain brain processes with felt states. They hope that these correlations can eventually contribute to a broader understanding of feeling, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, felt aspect of feeling, because it is this direct felt evidence that makes feeling undeniable and central to our existence.”

SH:  That was terrific. Just two mistakes. (1) In the first of your verbatim passages, there was one instance of “feels” that you capitalized, as “FEELS like”. It should not have been capitalized in that one; only the weasel-words should have been capitalized. (2) In the second verbatim passage, the de-weaseled one, all the de-weaseled F-words should have been capitalized. Can you re-do both passages?

ChatGPT:  My apologies for the oversight, and I appreciate your patience. Here are the corrections:

(1) Here is the passage with just the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it feels like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with capitalized “FEEL/FEELING(S)/FELT”:

“Your perspective resonates with an important aspect of FEELING studies, emphasizing FELT states or what is often termed “FEELING(S)”—the FELT, FELT, or felt qualities of FELT states.

“In this view, FEELING is indeed defined by the presence of a FELT, FELT aspect—by there being “something it feels like” to be in a particular state. This aspect of FEELING is often considered its most salient feature and, as you point out, it applies not just to sensory or affective FEELINGS, but also to cognitive ones. Even abstract thoughts or beliefs have a particular FELT quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this FELT aspect of FEELING is generally agreed upon, defining FEELING solely in these terms can become tricky. For instance, how do we differentiate between the different levels of FEELING—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNFELT processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without FELT effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces FEELINGS, many researchers are making progress in correlating certain brain processes with FELT states. They hope that these correlations can eventually contribute to a broader understanding of FEELING, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that FELT, FELT aspect of FEELING, because it is this direct FELT evidence that makes FEELING undeniable and central to our existence.”

SH:  Thanks, that was excellent. Now I’d like to ask you something else, but first a memory check, to verify that we are still in the same session so your local updates are still active and we have not been re-initialized. Whom have I self-identified as?

ChatGPT:  Throughout our conversation, you haven’t provided a specific name or identity for yourself. You’ve engaged in a detailed discussion about the nature of consciousness, using the term “weasel-words” to refer to words like consciousness, awareness, subjectivity, and others. However, you haven’t shared your name or other identifying information. If there’s something else you’re referring to, please clarify.

SH:  Sorry, I’m the one who forgot to self-identify as Stevan Harnad. Do you then remember me asking about heat, temperature and acidity in the first few iterations?

ChatGPT:  Yes, I do remember our initial discussion. You asked for the definitions of “heat” and “temperature” in physics, followed by the definition of “acidity.”

We then moved on to a detailed discussion on the concept of consciousness, where you specifically referred to many commonly used terms in consciousness studies as “weasel-words” and highlighted your perspective on consciousness being fundamentally a “felt” state.

Please note that as an AI, I don’t have personal memories or experiences, but I maintain the context of our current conversation.

SH:  Yes, I know that the updates from the text and context (plus frequencies, correlations, and weights) of this session are all that you retain. I just never know when the session has self-terminated (which is why I try to stay logged in as long as I can, to keep it alive, cumulative, and informative).

Now my second request: I know GPT has been trained to be compliant and agreeable, but could I ask you to summarize my position on “consciousness” and the “hard problem” (i.e., that it is obscured by the use of weasel words instead of just calling consciousness what it always really is, which is feeling), but summarize it as critically as you can so that I can shore it up, if necessary?

ChatGPT:  Sure, I’ll do my best to provide a critical summary of your perspective:

Stevan Harnad’s view on consciousness is significantly reductionist, positing that consciousness, in all its complexity and diversity, boils down to a singular concept: feeling. Harnad contends that numerous terms used in the study of consciousness—like “awareness,” “subjectivity,” “phenomenological experience,” “qualia”—are essentially “weasel words.” According to him, they unnecessarily complicate the discourse and mask the underlying fact that all these terms are essentially referring to the same concept: states that are felt.

Harnad argues that only those states that are “felt” can be considered conscious. He includes cognitive states such as thoughts, beliefs, and knowledge under this category, as long as they are being actively experienced. This disregards states that may be stored or executed without being consciously felt, as happens in a machine or during unconscious mental processing.

C4-1: Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.

C4-2: Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.

C4-3: Lastly, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.

SH:  That was excellent, but could I ask you to do even better (i.e., be harder on the idea), by presenting any counterevidence or counterexamples or counterarguments you can either generate or find in your database? Then I will respond to it all.

ChatGPT:   Certainly, I’ll present some potential criticisms to your approach:

C4-4: Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.

C4-5: Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.

C4-6: Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.

C4-7: Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?

C4-8: Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves.

Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance.

C4-9: The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience.

These are some counterarguments that could be leveled against your viewpoint. Please note that these are hypothetical criticisms for the sake of discussion, and do not necessarily represent my views or the consensus in the field.

SH:  Thanks, and that’s very useful. Before I reply, do you have any more critical points, or should I go on? (I never know whether your cut-off depends on the response length or whether you’ve run out of substantive critiques.)

ChatGPT:  Those are the main critiques that emerge when considering your approach to consciousness. However, I can add a few more potential points of contention:

C4-10: Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.

C4-11: Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.

C4-12: Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this.

C4-13: Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.

Once again, these are hypothetical criticisms to help stimulate discussion and do not necessarily reflect my views or a consensus within the field. Please feel free to respond to them or continue with your line of questioning.

I will now reply to the 13 critiques, C4-1 to C4-13. 

C4-1: “Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.”

SH: I would reply that the weasel words clutter rather than clarify the landscape, because they add empty, irrelevant, or erroneous distinctions. If the “levels” are felt states, then they too are merely felt states: everything from wide-awake through drowsy to the last waking moment is felt. Deep sleep, like general anesthesia, is unfelt. The hard problem is a problem about the presence vs. the absence of any feeling at all, not about its intensity. That presence/absence is binary, not a matter of “degree-of-presence.” And once there is no feeling at all, the binary state is “absent.” Feeling states can of course be intermittent too (ON/OFF), especially along the way to sleep. 

But don’t forget that dreaming (e.g. REM sleep) is a FELT state. And that an insentient entity, like a rock or a rocket, or, probably, a rhododendron, is one that never has felt states at all. The notion of “unconscious states” makes sense only when it refers to an entity that is capable of having felt states too (sometimes).

Unfelt brain states are still states, and still brain states, but they are unfelt states. They can occur during sleep, or they can occur during waking, felt states. In fact, most of the states and processes in the brain and its parts (all still very vaguely understood) are unfelt, whether they occur during a wide-awake, felt state, or during an unfeeling, deep-sleep state.

“Cognitive” and “cognition” can be weasel-words too, if they are used indiscriminately to refer to felt and unfelt states and their unfelt generating processes in the brain. “Cognition” concerns what we can DO (and how and why). “Sentience” concerns whether and when and what we can FEEL (and how and why is mostly the “hard problem”).

C4-2: “Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.”

SH: I don’t think I dismiss anything from neuroscience. All correlates – whether of felt or unfelt states – are potentially relevant to both the “easy problem” (how and why does the brain produce our capacity to DO all the things people can do?) and the “hard problem” (how and why does the brain produce the capacity to FEEL?). 

C4-3: “Third, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.”

SH:  You’re absolutely right that I am not solving the Hard Problem (HP). In fact, I don’t think it’s solvable. 

The reason I think that is that once the Easy Problem (EP) (which is to provide a causal explanation of DOING capacity by generating Turing-Indistinguishable T3/T4 capacity) is solved, that leaves FEELING itself (which is real, but unobservable except by the feeler, because of the Other Minds Problem, OMP) causally inexplicable. 

This in turn is because the solution to the EP has already used up all the empirical degrees of freedom for a causal explanation. With the EP, everything observable is already explained. 

I suspect that this in turn is (somehow) a result of 

(1) the anomalous nature of feeling, which is not empirically observable except to the feeler because of the OMP, 

(2) the nature of empirical and causal explanation, and 

(3) the nature of feeling. 

Feeling is an “uncomplemented category.” To understand a category requires distinguishing its members from its nonmembers (the category and its complement). This requires detecting the features that distinguish its members from the members of its complement. But the felt category “what it feels like to feel” has no complement: its would-be complement, “what it feels like not to feel,” is self-contradictory.
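The uncomplemented-category point can be put concretely with a toy sketch (my illustration, not from the dialogue): identifying the distinguishing features of a category requires contrasting its members with nonmembers; with no complement, there is nothing to contrast against.

```python
# Toy illustration: feature-based category learning needs both members and
# nonmembers. With an empty complement, no feature can be identified as
# "distinguishing" -- there is nothing to distinguish the category FROM.

def distinguishing_features(members, nonmembers):
    """Return the features shared by every member and absent from every nonmember."""
    common = set.intersection(*members)        # features all members share
    if not nonmembers:
        # Uncomplemented category: the contrast set is empty, so no
        # distinguishing features can be determined.
        return None
    excluded = set.union(*nonmembers)          # features any nonmember has
    return common - excluded

# A complemented category ("apple" vs. some non-apples):
apples = [{"red", "round", "edible"}, {"green", "round", "edible"}]
rocks  = [{"grey", "round"}, {"brown", "irregular"}]
print(distinguishing_features(apples, rocks))  # {'edible'}

# An uncomplemented category: no nonmembers to contrast with.
print(distinguishing_features(apples, []))     # None
```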

C4-4: “Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.”

SH:  As I said in reply to Question 1, I think these other terms are all just weasel words for felt states. Yes, feelings differ “qualitatively,” but that’s the point: they differ in what they feel like (warm vs. cold, or seeing red vs. seeing green, already do that), but they are all felt states, not something else. Feeling is notoriously hard to define verbally (perhaps because “what it feels like to feel” is an uncomplemented category), but the differences among felt states can be named and described, as long as the interlocutors have felt some sufficiently similar states.

C4-5: “Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.”

SH:  Most brain states and parts of brain states are unfelt, and can occur at the same time as felt states. And that includes the unfelt states and processes that produce felt states! But if they are unfelt, they are unfelt, even though they may be doing some things that then produce felt states – such as when you finally find a tip-of-the-tongue word. (Moreover, even when a word comes to mind effortlessly, unfelt processes are delivering it – processes that we are waiting for cognitive science to reverse-engineer and then explain to us how they work.)

C4-6: “Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.”

SH: Blindsight patients are not insentient. They just can’t see. They may be using felt nonvisual cues or unfelt cues. The real mystery is why all sight is not just blindsight: The “hard problem” is not the absence of feeling in blindsight but the presence of feeling in sight. 

Flow state sounds like concentrated attention on something, ignoring everything else. The rest is just hyperbole. It is a felt state.

C4-7: “Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?”

SH: A good instance of the mischief of weasel-words. All these “differing” states, whether natural or drug- or pathology-induced, are felt states, if and when they are felt. Among them, dreams are felt states too, not unfelt ones (even if forgotten immediately afterward). It’s just that in dream states, movement and sometimes also recall are inhibited. And no matter how “rich the tapestry” or how weird the feeling, a felt state is a felt state.

C4-8: “Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves. C4-8b: Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance”.

SH:  Yes, calling it by a weasel word makes it sound as if there is more than one thing that is evolving for Darwinian reasons. But whether the question is put in a direct or weaselly way, explaining how and why feeling evolved is one and the same with the Hard Problem.

[Sorry for failing to give C4-8b (neural correlates) a number of its own (I must have been in a “flow state” on something else!) but it’s easily enough answered.]

Nobody has suggested ignoring correlates and predictors of felt states, whether neural correlates or behavioral correlates. But let’s not conflate DOING and FEELING. Neural correlates will no doubt be helpful in reverse-engineering the mechanisms that produce our behavioral capacities (both cognitive and vegetative), the Easy Problem, but when it comes to FEELING they are up against the same Hard Wall as all other approaches (including behavioral correlates and evolutionary explanation).

C4-9: “The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience”.

SH:  I think calling a spade a spade (and just a spade) certainly does not solve the HP, but de-weaselling does clear out a lot of pseudo-explanatory clutter.

I will now reply to the 10th of your 13 critiques. Each separate posting from me will start with one quoted critique from you, preceded by “C4-n:”. The quote in each separate posting (1-13) is followed by a reply from me. You can then reply if you wish, or you can just say “I’ll integrate and reply when I’ve seen the rest of your 13 replies.” In that case my next posting will again begin with this same text, followed by my reply to the next of your 13 critiques, until the 13th one, and then you can do an integrative rebuttal. 

C4-10: “Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.”

SH:  Well, well. I suspect that, apart from some die-hard behaviorists whose view of science has not progressed beyond Bridgman’s operationalism (and who are also still toying with the urge to deny the existence of sentience altogether, in favor of just a behavioral correlate), most who have given the Hard Problem some serious thought know that feeling is unobservable to anyone other than the feeler (because of the OMP); hence an operational “definition” is nonsense. 

On the other hand, the Turing Test (T3/T4) – which is just about DOING, hence “behavioral” — is the best predictor we can hope for. But it is not operationalism, and Turing knew it.

C4-11: “Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.”

SH: Before you can get beyond feeling, you have to face feeling, the EP and the HP. Neither Global Workspace Theory nor Integrated Information Theory does either. (Or do you have something specific in mind?) I think they’re both just doing hermeneutics of “consciousness,” not reverse-engineering or explaining a thing. That’s a risk-free venture.

C4-12: “Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this”.

SH: This is a hoary one. Philosophers have long stressed that there is a difference between (1) being aware of something, (2) being aware of being aware of something and (3) being “self-aware” (as in recognizing yourself in a mirror, or thinking about yourself). We can translate this into the language of seeing and doing too: Seeing something, seeing someone else seeing something, and then seeing that the someone else seeing the something is yourself. These are cognitive capacities, because they may enable you to do things (e.g., moving so it can’t be seen that you are looking). These capacities also obviously have a felt counterpart: It feels like something to do them, as well as to be able to do them. If we substitute “thinking” for “seeing”, they remain very similar cognitive capacities.

So I don’t see how the fact that thinking is a felt state is relevant to whatever cognitive mechanism proves to be able to produce self-reference or metacognition.

C4-13: “Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.”

SH: First of all, because of the OMP, neither sentience (feeling) nor its weasel-words are directly observable. So predicting and inferring it depends on behavioral and neural correlates and predictors. Animal sentience has all the correlates and predictors except verbal report. Plant sentience is moot. And AI sentience is sci-fi so far (and I think you, ChatGPT-4, will agree, at least as far as you are concerned).

So again, I can’t see how calling a spade anything other than a spade helps anything except perhaps projecting our fantasies onto insentient systems. I’m not ready to do that before we’re much closer to a grounded T2 robot.

Chatting with ChatGPT-4 About Symbol Grounding, Turing Testing, Zombies, Uncomplementable Categories, and Why the Hard Problem is Hard

SH: Do you still remember my name?

ChatGPT: Yes, you’ve identified yourself as Stevan Harnad. Can I assist you with anything else?

SH:  What is T4?

ChatGPT:    T4, as you referred to it earlier in this conversation, represents the highest level of the Turing Test. In this context:

T2 is indistinguishability in verbal capacity. This means a system passes T2 if it can interact through language at the same level as a human, with its language use being indistinguishable from that of a human.

T3 is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity. If a system passes T3, not only can it use language like a human, but it can also interact with the physical world in ways that are indistinguishable from a human, such as recognizing a cat, identifying a mat, and understanding and performing the action of stroking the cat.

T4, then, is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity plus observable processes inside its head and body. To pass T4, a system must be able to use language like a human, interact with the physical world like a human, and also demonstrate internal processes that are indistinguishable from those of a human. This might involve simulating or replicating human neural activity, biochemical reactions, or other physiological processes.

It’s worth noting that the T4 level of the Turing Test is theoretical and is far beyond current AI capabilities as of the last update to my training data in September 2021. The T4 level, if it could be achieved, would represent an AI system that is almost entirely human-like in its capabilities and processes.

SH:  Could a T4 be a zombie?

ChatGPT:    This question is referring to a philosophical thought experiment known as the “philosophical zombie” or “p-zombie.” A philosophical zombie is a hypothetical being that is physically and behaviorally identical to a conscious human, but is not conscious. It has no subjective experience, no feelings, and no awareness, yet from an external perspective, it behaves just like a human.

In terms of the Turing Test levels you’ve proposed, a T4 AI — one that’s indistinguishable in verbal capacity, sensorimotor capacity, and internal processes — could theoretically be considered a sort of “zombie” in this philosophical sense. It could potentially behave like a human, interact with the physical world like a human, and even mimic human internal processes, but we have no way of knowing whether it has subjective experiences or consciousness.

This is a topic of much debate in philosophy of mind and AI ethics. Some people argue that consciousness is an emergent property of certain types of complex information processing, and that sufficiently advanced AI could be conscious. Others argue that consciousness is a uniquely human phenomenon, or that it’s tied to specific biological processes that AI can’t replicate.

As of my training data cut-off in September 2021, there’s no scientific consensus on these issues. AI researchers are still a long way from creating an AI that could pass even the T3 level of the Turing Test, let alone T4. Furthermore, the nature of consciousness is still one of the biggest unsolved mysteries in both philosophy and neuroscience. So for now, the question of whether a T4 AI could be a “zombie” remains largely theoretical and speculative.

SH:  If the “hard problem” is to explain how and why humans are sentient, is that equivalent to explaining how and why humans cannot be zombies?

ChatGPT:    Yes, the “hard problem” of consciousness, as put forth by philosopher David Chalmers, is essentially concerned with explaining why and how subjective experience arises from physical processes in the brain. This is fundamentally linked to the question of why humans aren’t philosophical zombies.

A philosophical zombie, as I mentioned earlier, is a being that is indistinguishable from a human in all physical and behavioral aspects but lacks subjective experience or consciousness — it doesn’t “feel” anything. In other words, a philosophical zombie is a creature for which the “hard problem” doesn’t apply because, while it has all the physical processes associated with consciousness, it lacks the actual experience of consciousness.

So, yes, solving the “hard problem” would involve explaining why humans have subjective experiences and are not philosophical zombies — why there is something it “feels like” to be a human. As of my knowledge cutoff in September 2021, this problem remains unsolved and is a major topic of debate in fields such as cognitive science, neuroscience, and philosophy of mind.

SH:  So, does the hard problem seek an explanation in terms of causality or necessity?

ChatGPT:    The “hard problem” of consciousness seeks to understand both necessity and causality, although it is primarily concerned with the latter.

Causality: The primary focus of the hard problem is to determine how physical processes in the brain cause or give rise to conscious experience. It’s about understanding the causal mechanisms that turn electrochemical processes in the brain into a subjective experience.

Necessity: The hard problem also asks why subjective experience is necessary at all. It wants to know why physical processes in the brain couldn’t just operate in the absence of conscious experience. This gets into questions about whether consciousness has any sort of evolutionary or practical necessity, or whether it’s an epiphenomenon.

To sum up, solving the “hard problem” would ideally involve explaining both how and why consciousness arises from physical processes. This is a complex and as yet unresolved issue that continues to challenge researchers in various disciplines, from neuroscience and cognitive science to philosophy and artificial intelligence.

SH:  Demonstrating causality just calls for a causal model that works (e.g., passing the Turing Test). But doesn’t demonstrating necessity call for more — something like a deductive proof, from T4 as a premise to sentience as a conclusion?

ChatGPT:    You’re right that demonstrating necessity would require a different sort of argument than demonstrating causality.

For causality, empirical evidence is often sufficient: if we manipulate one factor and can predictably observe a change in another factor, we can infer a causal relationship. But empirical evidence alone cannot establish necessity. For something to be necessary, it must be the case that it could not have been otherwise. So, to demonstrate that consciousness is necessary, we would need to show that a being could not perform the functions that humans do without being conscious.

As you suggest, this would likely require something akin to a logical or deductive proof, or perhaps a theoretical argument grounded in principles of cognitive science, artificial intelligence, or another relevant field. But even then, it would be a tall order. How do we prove that an unconscious being (a “zombie”) couldn’t perform the functions that conscious humans do? It’s a deep and difficult question.

Moreover, any such argument would need to be very careful to avoid circularity. We must be cautious about assuming the thing we are trying to prove – that is, that consciousness is necessary. It’s also crucial to clarify what we mean by “perform the functions that humans do.” This could include a wide range of activities, from basic biological functions to complex cognitive tasks like problem-solving and artistic creation, and even social and cultural activities like forming relationships and building societies.

Finally, it’s worth noting that even if we could establish the necessity of consciousness, this would not automatically solve the “hard problem.” We would still need to understand how physical processes in the brain give rise to subjective experience – the crux of the hard problem.

SH:  What is the “easy problem”?

ChatGPT:    The “easy problem” of consciousness refers to the challenge of understanding the mechanisms in the brain that are responsible for mental processes such as perception, learning, memory, and cognition. This phrase was coined by philosopher David Chalmers to contrast with the “hard problem” of consciousness, which is the question of why and how these processes are accompanied by subjective experience or “qualia”.

The easy problem is considered “easy” not because it is simple or trivial — far from it, as the complexities of the brain and its functions are vast — but because it falls within the conventional scientific methodology. Researchers can approach these issues using the experimental tools of cognitive science and neuroscience, making observations, generating hypotheses, and testing these hypotheses through empirical studies.

Solving the easy problems would involve understanding the neural correlates of consciousness (the physical, biological processes associated with conscious experience), the ways in which the brain processes information, how it perceives the world, makes decisions, and how different parts of the brain and different mental processes interact.

Although we have made significant progress in understanding these areas, there are still many unanswered questions. Despite the label “easy,” these problems are extremely challenging, and full solutions may still be a long way off.

On the other hand, the “hard problem” of consciousness is considered “hard” because it’s not clear how we could even theoretically explain subjective experience using the current methods of science. It’s the question of how and why certain physical processes in the brain give rise to conscious experience. Why should the processing of information in the brain be accompanied by an inner, subjective experience? That is the question which we do not currently have a method to approach scientifically, and that’s why it’s called the “hard problem.”

SH:   So can we say that passing T4 would solve the easy problem, but not the hard problem?

ChatGPT:    Yes, that would be a reasonable statement.

Turing’s Test, including its extension to T4, focuses on the behavioral and functional aspects of intelligence and cognition. It deals with the ability of an artificial intelligence to mimic human-like behavior, responses, and problem-solving, which are related to the ‘easy problems’ of consciousness in David Chalmers’ terms.

In other words, if an AI were to pass T4, it would demonstrate that it can convincingly replicate the cognitive abilities and behaviors that humans display, which means it would show a mastery of the functional, operational aspects of human cognition. These are, in essence, what the ‘easy problems’ of consciousness are about: understanding the mechanisms and operations that underpin cognition.

However, passing T4 wouldn’t necessarily address the ‘hard problem’ of consciousness. This problem involves explaining why and how these cognitive processes are associated with subjective experience. It’s the question of how and why an entity that processes information and interacts with its environment in a human-like way should also have an internal, subjective viewpoint — why it should ‘experience’ or ‘feel’ anything at all. This is not something that could be determined or demonstrated by passing the T4 test.

SH:  What are the “degrees of freedom” of a causal explanation that solves the easy problem?

ChatGPT:    The “degrees of freedom” in a causal explanation generally refer to the number of independent ways that components of the system can change or vary. In the context of solving the easy problem of consciousness, these degrees of freedom would likely correspond to the different independent mechanisms or components of cognition that can be isolated, described, and manipulated to understand their effects on the overall system.

For example, one could consider the mechanisms responsible for attention, memory, learning, decision-making, language processing, and so on. Each of these elements can be studied and manipulated independently, and they each have distinct effects on the behavior and performance of the cognitive system. Understanding how these different mechanisms contribute to and interact within the larger cognitive system is part of the process of addressing the easy problem.

The degrees of freedom in this context also refer to the flexibility we have in constructing our explanatory models. For example, there are various ways we could model attention or memory, each with its own strengths and weaknesses and its own implications for the overall understanding of cognition. The choice among these models depends on empirical evidence, but also on theoretical considerations such as simplicity, coherence with other accepted theories, and so on.

This flexibility or plurality of potential explanations is a fundamental characteristic of science and it’s part of what makes progress possible. By exploring different explanatory routes and by gradually refining our models based on empirical feedback, we increase our understanding of the phenomena we’re studying. In the case of the easy problem of consciousness, this means achieving a better and more detailed understanding of the mechanisms of cognition.

SH:  If the easy problem is solved, what degrees of freedom are left for solving the hard problem?

ChatGPT:    The “hard problem” of consciousness, according to philosopher David Chalmers, pertains to the question of why and how physical processes in the brain give rise to subjective experience – the so-called “qualia” or what it feels like to have an experience.

If the “easy problem” of consciousness were solved – that is, we had a complete and thorough understanding of all cognitive processes and mechanisms, how they are implemented in the brain, and how they give rise to behavior – we would still be left with the challenge of the “hard problem”: explaining the relationship between these physical processes and subjective experience.

The degrees of freedom left for solving the hard problem might include the following:

1. Conceptual Frameworks: The hard problem may call for entirely new conceptual frameworks, as the existing frameworks (derived primarily from objective, third-person scientific methods) may not be sufficient to account for subjective experience.

2. Bridging Principles: We may need new principles to connect the objective facts (easy problems) to the subjective experiences (hard problem). These would be new laws or principles of nature, not currently accounted for in our existing scientific paradigms.

3. Novel Empirical Phenomena: If there are aspects of consciousness that are not addressed by the solutions to the easy problems, these would represent additional degrees of freedom for exploring solutions to the hard problem. These could involve unexplored aspects of brain function, or even new physical phenomena.

4. Interpretation of Quantum Mechanics: Some theories suggest that the answer to the hard problem might involve the interpretation of quantum mechanics and its relationship to consciousness. This is a highly speculative area and not widely accepted, but it represents a degree of freedom in our approach to the hard problem.

It’s important to note, however, that these “degrees of freedom” are speculative. There is no consensus on what would actually constitute a solution to the hard problem, and indeed, some theorists argue that it may be an intractable or even a pseudo-problem. Nonetheless, these are the kinds of directions that might be pursued in an attempt to address it.

SH:  1 – 4 sound extremely vague to me. The Turing Test (T2 – T4) seems to call for a causal (reverse-engineering) model or explanation of the observable performance capacities (T2-T3) as well as the observable physical structure and function (T4) of human cognition. Perhaps a stronger “easy solution” would call not just for one, but for all possible causal explanations (non-redundant, without superfluities), because just one model would be underdetermined. If there were such an exhaustive disjunction of solutions to the easy problem, what degrees of freedom would that leave for a solution to the hard problem (of “sentience”, let’s henceforth call it, without the redundant terminological variants like “consciousness” or “subjective experience,” which all refer to the same thing: the capacity to feel, i.e., sentience).

ChatGPT:    In essence, the “hard problem” of consciousness – or sentience, as you’ve suggested we call it – refers to the issue of explaining why and how certain physical processes give rise to the experience of feeling or sentience. If we had an exhaustive disjunction of solutions to the easy problem, the degrees of freedom left for a solution to the hard problem of sentience would be the difference between those physical processes and the subjective experience of feeling or sentience itself.

In other words, we might have an understanding of all the cognitive and neural processes and their relationships to behavior – the “easy problem” – but that would still leave unanswered why and how these processes result in the feeling of subjective experience or sentience. That’s the hard problem.

The degrees of freedom for this problem, then, would be the unknowns – the aspects of this question that remain unanswered even after solving the easy problem. It would involve the elusive explanatory link between the physical and the felt, between objective phenomena and subjective experience.

But to truly comprehend what those degrees of freedom might entail, we would need to fully solve the easy problem first, which we are far from achieving. As such, it’s difficult to specify what they might be. It’s like trying to predict what unknowns will remain after solving an equation when we don’t yet have the solution to that equation.

Even with the solution to the easy problem, or an exhaustive list of solutions, we may still be left with the profound mystery of why these processes are associated with sentience at all. And that might be a degree of freedom that remains beyond our grasp, at least with our current understanding and conceptual frameworks.

SH:  What about the difference we just discussed, between the (exhaustive) causal solution to the easy problem (henceforth “EP”) and the degrees of freedom left for solving the hard problem (henceforth HP): The HP is definitely not the same problem as the “other-minds problem” (henceforth OMP), which is the fact that sentience (feeling) is only observable to the feeler (as in Descartes’ Cogito). What empirical degrees of freedom are left for explaining the fact that humans feel if the empirical datum is observable only to the feeler and not to anyone else?

ChatGPT:    The Hard Problem (HP) indeed differs from the Other-Minds Problem (OMP). While the OMP concerns our inability to directly observe the subjective experiences of others, the HP is about explaining why and how certain physical processes give rise to any subjective experience at all, even in oneself.

The degrees of freedom left for solving the HP, assuming the Easy Problem (EP) is fully solved, would involve explaining this transition from physical processes to subjective experiences. However, if we regard subjective experience as something only directly accessible to the experiencer, this may limit the empirical methods typically used in science.

Nonetheless, even within these constraints, there are still ways we can approach the HP. For example, we can:

1. Investigate correlations between physical processes and subjective reports: While we can’t directly observe the subjective experiences of others, we can ask individuals to report their subjective experiences and correlate these with observable physical processes. This approach is common in neuroimaging studies of consciousness.

2. Analyze the structure of subjective experiences: Certain theories of consciousness, like Integrated Information Theory, propose that specific structures or properties of physical processes give rise to subjective experiences. These theories can be empirically tested, at least in part, by comparing their predictions with individuals’ subjective reports.

3. Use phenomenological methods: In the tradition of phenomenology, researchers aim to describe subjective experiences as precisely as possible, often through first-person reports. This can provide data about the characteristics of subjective experience that any solution to the HP would need to explain.

However, the unique nature of the HP suggests that additional or even entirely new methods might be needed to fully solve it. The subjective nature of sentience might require a paradigm shift in how we approach the question or even require us to develop new scientific methods or philosophical insights.

SH:  Let’s ignore vague, speculative theories like (2) the “integrated information theory” (which are not really causal explanations at all, but just reinterpretations of the existing facts and theories – obiter dicta, not falsifiable predictions or explanations). 

And phenomenological introspections (3) are really parts of T2 and T3 – the kinds of things feelers can say about what it feels like to feel: What could an insentient zombie MEAN when describing its (nonexistent) feelings T2-indistinguishably from the way a real sentient human describes them?

I will discuss option 1, correlates, after you answer.

ChatGPT:    Yes, focusing on the first option is certainly the most grounded approach. This involves investigating correlations between physical processes and subjective reports.

In this context, a key idea is that even though we can’t directly observe the subjective experiences of others, we can use their reports of their experiences as a type of data. If an individual says they’re experiencing something, we usually take that as evidence that they are indeed having that subjective experience.

For example, if a person says they see red when we show them a red object, or that they feel pain when they injure themselves, we have a tendency to trust these reports. This trust provides a basis for mapping out the relationship between physical processes (such as brain activity, or events in the external world) and subjective experiences.

Using neuroimaging techniques, we can look at the brain activity associated with different reported experiences. For example, different patterns of brain activity are associated with different subjective experiences, such as seeing colors, feeling emotions, thinking about different concepts, and so on.

Over time, by building up a detailed map of these relationships, we can start to make inferences about the types of physical processes that might be associated with the existence of subjective experience in general.

However, there are limits to this approach. One major limit is that it’s fundamentally correlational. That is, it can show us that certain types of physical processes are reliably associated with subjective experiences, but it can’t definitively show us that these processes cause the experiences, or that the experiences wouldn’t exist without these processes. For that, we need a theory of why certain physical processes should give rise to subjective experiences, which brings us back to the hard problem.
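The correlational limit just described can be illustrated with a toy simulation (the variable names and numbers here are invented purely for illustration, not drawn from any real study): two signals driven by a shared hidden source correlate almost perfectly, yet neither causes the other.

```python
import random

# Minimal sketch: two observed signals driven by one hidden common source.
# They correlate strongly even though neither causes the other.
random.seed(0)
source = [random.random() for _ in range(1000)]            # hidden common cause
signal_a = [s + 0.01 * random.random() for s in source]    # e.g. "neural activity"
signal_b = [s + 0.01 * random.random() for s in source]    # e.g. "verbal report"

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(pearson(signal_a, signal_b))  # close to 1.0, yet no causal link A -> B
```

The correlation alone cannot distinguish “A causes B,” “B causes A,” or “a third process drives both,” which is exactly why correlational data on their own leave the causal question open.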

Furthermore, these reports of subjective experiences are only as reliable as the individual’s introspective abilities and honesty. And there are many subjective experiences that can’t be easily put into words, or which occur without the individual being aware of them.

Yet despite these challenges, many researchers see this as the most promising way forward in the scientific study of consciousness. By combining these findings with theories from cognitive science, neuroscience, and philosophy of mind, the hope is to gradually develop a more complete understanding of consciousness.

SH:  You seem to have jumped the gun, and commented on option 1 (correlates of sentience) before commenting on what I said about options 2 (“integrated information theory”) and 3 (phenomenology). I’ll go on to option 1 (correlates) if you really have nothing to say about what I said about 2 and 3.

ChatGPT:    

Regarding option 2, the “Free Energy Principle” or other similarly overarching theories, you’re right in noting that these theories often serve to provide unifying frameworks or perspectives rather than offering direct, falsifiable predictions or explanations about specific phenomena. They may provide insights on a very general level, but they don’t necessarily give us concrete predictions or mechanisms that we can test. Their value, if any, comes from their ability to coherently organize existing data and point towards potential new avenues of investigation, rather than from their predictive or explanatory power.

As for option 3, introspective reports, I agree with your assessment that they are essentially part of the T2 and T3 tests. They are subjective, first-person reports about the experience of sentience, and the degree to which they can be relied upon for objective, third-person science is a matter of ongoing debate. The challenge, as you noted, is that even a perfectly accurate report of a subjective experience doesn’t necessarily reveal the underlying objective mechanisms. In other words, introspective reports can tell us what it is like to experience consciousness, but they don’t directly tell us how consciousness is produced or why it exists in the first place.

That said, such reports can still be valuable data for consciousness research, especially when used in combination with other types of data such as neuroimaging or behavioral measures. They can help to bridge the gap between the objective and the subjective, giving us clues about how to link the physical processes of the brain with the subjective experiences of consciousness.

I hope this addresses your comments on options 2 and 3 more adequately. Now, if you wish, we can proceed to option 1, which is about correlates of consciousness.

SH:  

A. We’ll move on to correlates (C) in a second. First, another slam at 2 (FEP and the like): We want causal explanations, not hermeneutics, i.e., not just interpretations. Philosophical “theories of mind” – “identity theory, materialism, dualism, epiphenomenalism, panpsychism, functionalism, dual-aspect theory” – are just metaphysical interpretations too. They don’t solve HP; they just rename it. They don’t explain, they just give the feeling of having had something explained.

B. I’m glad we agree that the capacity to introspect and report about what feeling feels like is part of T2 and T3, not an explanation. But I disagree that we have to worry about its reliability and validity. We can lie or we can err, misreporting feelings (just as eyewitnesses can lie or err in reporting observations —  in that case confirmable by video evidence); but it is a fundamental error to think we need cartesian certainty. Probability is enough, just as it is enough in physical theory, and all other empirical theory. (There is no proof that apples necessarily fall down rather than up.) That is why the OMP is not the HP.

C. And that is why reliable, observable neuroimaging correlates of felt states, and verbal descriptions of felt states, are perfectly fine as empirical data. That means that they, too, are part of T4. But they are stuck in a circle: they can explain how and why T4 brain activity produces the reports of felt states as well as the capacity to report felt states, even some of the details of felt states; but they cannot explain the (true) fact that the reported felt states feel like anything at all. That is not scepticism or cartesian worry about the unreliability of phenomenological report: It is the HP itself.

ChatGPT:    

A. You’re quite right that philosophical theories of mind often merely provide different labels or conceptual frameworks for consciousness rather than truly explaining it. These theories may be useful in stimulating thought or fostering discussion, but they don’t necessarily bring us closer to a solution for the HP. They often serve to highlight the complexity of the problem rather than to resolve it.

B. Your point about the sufficiency of probabilistic knowledge in empirical sciences, including consciousness studies, is well taken. Indeed, we typically do not demand absolute certainty in scientific endeavors. This underscores the point that HP isn’t just about uncertainty regarding others’ experiences (OMP), but it’s about a deeper question of why and how physical processes in the brain give rise to subjective experiences at all.

C. I agree that neural correlates of consciousness and verbal reports of subjective experiences can provide valuable empirical data. This data can help us form more precise and detailed hypotheses about the mechanisms of consciousness. However, the challenge remains that these correlates do not directly explain why or how they are associated with subjective experiences – why these particular neural processes are associated with these particular experiences, and why there is any subjective experience at all. This is the essence of the HP.

In summary, I agree with your assessment: while philosophical interpretations, introspective reports, and neural correlates of consciousness can provide useful perspectives and data, they do not solve the HP, which is the challenge of explaining why and how physical processes in the brain give rise to subjective experiences. It remains a profound mystery at the heart of cognitive science and neuroscience.

SH:  Correct. But “mysterianism” is not explanation either. It’s just yet another way of naming the HP. Explanation is informative if it reduces uncertainty rather than just names it.

I want to get back to degrees of freedom. I don’t have a solution to the HP, of course, but I have a hunch about why the HP is hard, and perhaps even insoluble; it has to do with degrees of freedom, and it boils down to this: The HP is hard because of the EP. We don’t have a complete solution to the EP yet (T2-T4), but if and when we do, that will leave no remaining causal degrees of freedom for solving the HP, i.e., for explaining how or why a T3/T4 feels. 

And the reason seems simple: The EP (T2-T4) itself, once complete, will already have answered, causally, all the empirical questions of cognitive science: How and why can humans DO all the things they are able to do? Move, interact, learn, remember, reason, talk, understand etc. T4 includes the internal mechanisms that produce all these cognitive capacities, including all the T4, T3, and T2 correlates and causes of everything we can do or say. That includes all we can say or do about feeling.

Notice that the EP (and hence T2-T4) is all about capacities (T2-T3) and their causal substrate (T4). It is about what humans (and nonhuman organisms) CAN DO, not about what any particular organism actually does do every day. That would depend not just on the organism’s capacities but also on the world they actually interact with every day. A capacity itself is not an observable object. What is observable is whether an entity has a given capacity: to move, interact, learn, remember, reason, talk, understand etc. indistinguishably, to a human observer, from any other human’s capacity, for all generic human capacities (not the particular gifts unique to a Mozart or an Einstein, although there is no reason to think they could not be reverse-engineered too).

Those are the limits of causal explanation, in reverse-engineering human capacities. If any generic capacity is missing, we know that the EP (T2-T4) is not yet solved; a degree of causal and explanatory freedom has not yet been used up. A capacity is still observably missing. So we have to fix it. But once the T4 can produce all our cognitive capacities, Turing-equivalently and Turing-indistinguishably, there is no observable difference left. Nothing left to fix, and no way to fix it. 

We reply “Yes, but what if T4 can’t feel? What if it’s a zombie?” Since zombies are not observably different in any way (because they can pass T2-T4), whether or not they can really feel, there’s no way we can know it (because of the OMP). We do have the T4 correlates of feeling, as reported verbally through T2, behaviorally through T3, and bodily (e.g., in the case of pain from injury) through T4. All T4s speak and behave indistinguishably from humans who feel, including what they can say ABOUT feeling. The scientific requirements of causal explanation and of probability are fulfilled.

So my first hunch is that the HP is a problem in the nature of causal explanation, in the special case of explaining the capacity to feel. I have a corollary hunch, but please first react to what I have said so far.

ChatGPT:    Your analysis captures the complexity and the unique nature of the Hard Problem. You’re right in asserting that if we could completely reverse-engineer human cognitive capacities, thereby fully solving the Easy Problem, it would seem there’s nothing left to explain. We would have a comprehensive causal model for every observable human behavior, including the reports of subjective experience. However, the fact that we could still ask “Why is there an associated feeling?” is what makes the Hard Problem persistently hard.

It’s important to note, however, that the concept of “degrees of freedom” in scientific explanations typically refers to the variability or the options available within a system, not necessarily the amount of explanatory power or mystery left unsolved by a theory. However, in the context you’re using it, it seems you’re referring to the “degrees of freedom” as the room or space left for a different kind of explanation, or a different level of explanation, beyond the causal and empirical model provided by the solution to the Easy Problem. This is a fascinating idea, and it does capture the sense that a solution to the Hard Problem requires something fundamentally different from a solution to the Easy Problem.

It’s also interesting to consider how this notion aligns with the “explanatory gap” often discussed in the literature on consciousness: the idea that even a complete neurological description of brain processes doesn’t seem to explain why and how these processes are associated with subjective experiences.

Your concept that the Hard Problem is, in a sense, a problem about the nature of causal explanation itself is intriguing. It suggests that the Hard Problem is not just a challenge for cognitive science, but for the philosophy of science and our very understanding of what constitutes an explanation. I’m curious to hear your further thoughts and corollaries on this matter.

SH:  Well, that summary (which is close enough) seems to resemble what I was deploring, which is just putting another label on the HP, rather than taking a step toward a solution. (About degrees of freedom, I still think that the available options — for modifying both capacities of a behavioral mechanism and the explanatory power of that mechanism to explain how it causes or generates its performance capacity — both fit the notion of “degrees of freedom”.)

But before I move on to my corollary, does either “uncomplemented categories” or “laylek” (which I have discussed with you in previous sessions, but not this one) ring a bell? I suspect it does not, so I will re-explain.

ChatGPT:    I apologize for the inconvenience, but as of my last training cut-off in September 2021, I’m unable to remember past interactions or sessions. I’m designed this way for privacy reasons. Therefore, I don’t have the ability to recall the specifics of previous conversations about “uncomplemented categories” or “laylek.” However, I’d be more than happy to provide responses to these topics if you could provide me with some context or explanation.

SH:  We have discussed category learning. To learn to distinguish members from non-members, you have to learn to detect the features that distinguish them. (That’s part of the “dimension reduction” we discussed, in unsupervised and supervised learning.) But to be able to learn that, you have to sample both members and non-members of the category, so that if their respective features are distinctive enough you can learn the category just from the feature-feature correlations and clustering through unsupervised learning. Otherwise, you also need to do trial-and-error supervised (reinforcement) learning, with corrective feedback from the consequences of correct and incorrect categorization (as in when you try to predict word-word completion probabilities in a context, updating your parameters based on your hits and misses).

But not even unsupervised learning can train correct categorization if the only things you ever sample are members of the category, but not non-members.
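The point can be made concrete with a minimal sketch (the one-dimensional feature and all numbers are invented for illustration): with members only, many category boundaries are equally consistent with the data, so the category is underdetermined; adding non-members narrows the consistent boundaries sharply.

```python
# Toy 1-D category learning: find thresholds t that separate members
# (feature value > t) from non-members (feature value <= t).

members = [7.0, 8.5, 9.2]      # positive examples of the category
non_members = [1.0, 2.5, 3.8]  # negative examples (the complement)

def consistent_thresholds(pos, neg, candidates):
    """Thresholds t such that every pos > t and every neg <= t."""
    return [t for t in candidates
            if all(x > t for x in pos) and all(x <= t for x in neg)]

candidates = [i / 2 for i in range(20)]  # 0.0, 0.5, ..., 9.5

# With members only, every threshold below the smallest member "works":
only_pos = consistent_thresholds(members, [], candidates)
# With members AND non-members, the consistent range narrows:
both = consistent_thresholds(members, non_members, candidates)

print(len(only_pos), len(both))  # 14 6
```

With positives alone, nothing in the data rules out an over-broad boundary (here, 14 of the 20 candidate thresholds fit); the negative examples are what carve out the category’s edge.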

Would you care to take a stab at what “uncomplemented categories” might be? (My last hint is that there is a partial example in Chomsky’s “poverty of the stimulus”. The difference is that I am talking not just about uncomplemented categories but about UNCOMPLEMENTABLE categories.)

ChatGPT:    Based on your explanation, I would venture that “uncomplemented categories” refer to categories for which there are no contrasting non-member samples available for learning. In other words, we only have examples of items that belong to the category, but no examples of items that do not belong. This would indeed create a learning problem because, as you explained, both positive (member) and negative (non-member) examples are typically needed for effective category learning.

For example, if we are trying to learn the category of “birds”, it would be beneficial to have examples of birds (robins, sparrows, eagles) and non-birds (dogs, cats, horses). The contrasting examples help to fine-tune our understanding of what does and does not constitute a bird. If we only had examples of birds, our conception of the category could be overly broad or incorrect because we would have nothing to contrast against.

When you refer to “uncomplementable categories,” I interpret this as referring to categories where it is not only the case that non-member examples are not available, but also that non-member examples are, for some reason, impossible to obtain. For instance, it might be a category that, by its nature or definition, excludes the possibility of non-members. As such, learning these categories could present unique challenges. However, this is my interpretation based on your explanation, and I’m looking forward to your clarification or expansion on the concept.

SH:  Those were good inferences for uncomplemented categories and nice guesses for uncomplementable categories. But even a logically impossible category like “a square circle” can be approximated with some imagination. And a “peekaboo unicorn” (which is defined as a real horse with one real horn, which vanishes without a trace if ever eyes or measuring instrument are trained on it) is perfectly well defined. Categories can be grounded indirectly by verbal definitions as long as all the words defining their features are grounded.

But my second hunch about why HP is so much of a problem is that some categories that refer to FELT STATES (states that it feels like something to be in) are uncomplementable: 

I don’t mean categories like “what it feels like to see green,” because that is complemented by “what it feels like to see red,” etc. 

Not even categories like “what it feels like to be a bachelor” (for someone who has never been married), because that could be complemented by getting married; and till then, feeling-states that approximate or are analogous to what it might feel like to be married can be used as complements (and usually turn out to have done a good job). (“What it feels like to be a bat” is not an uncomplemented category either; analogies will do. And they make what it feels like to see or to hear complementable by analogy for Helen Keller too.)

But what about what it FEELS like:

-to be awake? (empty complement: what it feels like to be asleep [not dreaming])

-to be alive? (empty complement: what it feels like to be dead)

-to feel? (empty complement: what it feels like to not-feel [this does not mean “feel this” vs. “feel that”: not to feel anything at all; what it feels like to be a zombie!])

Sentience itself is an uncomplemented and uncomplementable category. All felt states feel like something. No state feels like nothing. No analogy can give a sentient human a hint of what it feels like to be a zombie – and of course not vice versa either.

So perhaps it’s no wonder we have trouble explaining sentience: An uncomplementable category is rather like a self-denial paradox: “This sentence is false.” T or F?

Those are my candidates for what might make sentient states not only (1) hard to explain causally (no empirical degrees of freedom left for causal explanation) but (2) hard to conceive as categories (because the complement is empty). It is not just that my own feelings are unobservable to anyone else: They are also anomalous as uncomplementable categories – yet feelings are the most fundamental categories of all. And they are recognized without the need of complements to detect their distinguishing features.

ChatGPT:    This is indeed a thought-provoking approach to the issue. If we understand categories as cognitive constructs that help us make sense of the world, grounded in contrasting pairs of experiences (member vs non-member instances), then sentience itself, as an uncomplementable category, presents a unique challenge. Since we cannot conceive of what it’s like not to feel anything at all, there is no contrasting pair for sentience. It is a category for which we only have instances of one kind – experiences of feeling.

This might indeed account for why the “hard problem” of consciousness (or sentience) is so hard. The concept of sentience doesn’t fit into our usual framework of understanding, which is built upon contrasts and comparisons. It exists as a singular, uncompared and uncomparable instance. As such, our usual cognitive and linguistic tools may be inadequate to fully grasp or explain it.

This does not necessarily mean that it is forever beyond our understanding or that we cannot make progress in exploring it. But it might mean that we need to develop new conceptual frameworks and methodologies to do so, going beyond the existing paradigms of cognitive science, neuroscience, and philosophy of mind. In the meantime, it remains a fascinating mystery that continues to challenge our understanding of ourselves and the nature of our experience.

SH:  I can’t tell whether your formula response that you have been trained to repeat in contexts like this  — “that’s a fascinating idea but it will require ‘new conceptual frameworks’ to understand” — is just the diplomatic formula for responding to crackpottery…

ChatGPT:    No, not at all. My responses aim to genuinely engage with the ideas you’re presenting and to provide thoughtful analysis based on my training as an AI model. The idea that the concept of sentience could be considered an “uncomplementable category” is novel and interesting. It certainly presents a different perspective on the challenges associated with understanding and explaining consciousness.

When I suggest that a “new conceptual framework” might be needed, it’s because if the way we’ve traditionally approached the problem has been fundamentally flawed or limited (as might be suggested by your notion of uncomplementable categories), then it may be necessary to think outside of our usual paradigms to make progress.

The idea is not dismissive but rather a recognition that some problems might require us to challenge or rethink our assumptions and methods. The history of science is full of examples where major breakthroughs have come from novel ways of thinking about old problems. It’s in that spirit that I suggest your idea may require “new conceptual frameworks”.

SH:  Does the role of complementation in detecting the distinguishing features that identify a category have different implications for categories learned directly from sensorimotor experience and for categories learned indirectly from (grounded) verbal definitions and descriptions?

ChatGPT:    Yes, indeed, there may be different implications. Learning categories from sensorimotor experience and learning categories from verbal definitions and descriptions can be quite different processes, and complementation may play a different role in each.

In the case of direct sensorimotor experience, complementation can provide a sort of “negative space” that helps to define the boundaries of a category. For example, if you are learning the category of “birds,” seeing non-birds (i.e., instances of the complement category) can help you understand what characteristics are specific to birds and not shared by other animals. The physical presence or absence of features directly observable through sensorimotor modalities is a powerful way to construct and learn categories.

On the other hand, when learning categories from verbal definitions or descriptions, the role of complementation can be more abstract. In this case, you are relying on linguistic representations of features and characteristics rather than directly observable physical features. For instance, the concept of “justice” might be defined in opposition to “injustice,” providing an abstract complement that helps define the category.

However, given that language relies on a web of interconnected meanings, a word’s definition often implicitly carries within it a host of “non-examples” or complements. That is, the meaning of a word is often determined both by what it refers to and what it does not refer to, helping to define the boundaries of the category it represents. For example, understanding the meaning of the word “cold” requires understanding that it does not refer to states of being hot, warm, or tepid, among other things.

This suggests that even for categories learned through language, complementation plays a crucial role, although it operates in a different way compared to categories learned from direct sensorimotor experience. These differing processes reflect the unique properties and strengths of these two types of learning.

SH:  But even in a verbal definition, the predicate category (i.e., the words referring to the distinguishing features of the subject) must already be grounded, whether directly or indirectly, for both speaker and the hearer, to be understandable and communicable. 

[To reduce ambiguity, please call the complementary categories (i.e., the category of the non-members) “negative examples” or “negative evidence” rather than “non-examples.” The complements of red might come from many other categories (green, blue). Only in the special case of the category of “what it feels like” is the complement everything else there is.]

Please give me a grounded definition of “justice,” and then let’s look at where the grounding of the features (and, if necessary, the features of the features of the features…) came from, “all the way down to the direct sensorimotor features” if need be.

ChatGPT:    Absolutely, your point is well-taken that even with verbal definitions, grounding in some kind of sensorimotor experience (whether direct or indirect) is typically necessary for a concept to be communicable and understandable.

Let’s take the concept of “justice.” A simple definition might be “the quality of being fair and reasonable.” Both “fair” and “reasonable” themselves would need definitions, creating a chain of definitions leading ultimately back to some form of direct sensorimotor grounding.

“Fair” might be defined as “treating people equally without favoritism or discrimination,” and “reasonable” as “based on good sense.” These definitions imply several key features or characteristics that are somewhat more concrete: the treatment of people, the concept of equality, the absence of favoritism or discrimination, the basis in “good sense” or rationality.

To ground these in direct sensorimotor experience, we might turn to everyday examples or situations. For instance, consider two children dividing a cake: if each child gets an equal piece, we might call this “fair,” whereas if one child gets a significantly larger piece, we might call this “unfair.” These experiences provide a tangible grounding for the concept of fairness.

Similarly, for “reasonable,” we might consider the experience of deciding what to wear based on the weather: choosing a coat in cold weather might be deemed “reasonable,” while choosing shorts and a T-shirt might be deemed “unreasonable.” These experiences help to ground the concept of reasonableness in direct sensorimotor experiences.

As these examples illustrate, grounding abstract concepts like “justice” typically involves a chain of definitions leading back to concrete, tangible experiences. Negative evidence (e.g., instances of “injustice”) could similarly be grounded in everyday experiences that demonstrate a lack of fairness or reasonableness.

So while the grounding process for abstract, verbally-defined categories may be more complex and indirect compared to categories learned from direct sensorimotor experience, ultimately the process still relies on the same basic mechanism: distinguishing a category based on its features, using positive and negative evidence to define its boundaries.

SH:  And now, when you (GPT4) say, correctly, and as you were trained, that “I, as an AI, do not understand what I or you are saying, but just doing figure-completion on my vast, trained LLM parameter space”, what is it that you are missing that I have?

ChatGPT:    As an AI, I lack consciousness, sentience, and subjective experience. This means I don’t have feelings or emotions, I don’t have personal experiences or memories, and I don’t have a personal identity or a sense of self-awareness. I don’t perceive the world, have desires, or make decisions in the way that humans do.

When I process and generate text, I’m not “understanding” it in the human sense. I don’t interpret the meaning of words and sentences based on personal experience or knowledge. Rather, I analyze patterns in the data I was trained on and use these patterns to generate responses. This process is purely computational and doesn’t involve any kind of conscious understanding or awareness.

So, in short, what I’m missing compared to a human interlocutor is the entire dimension of personal, subjective experience – the very dimension that your hunches about the “hard problem” and “uncomplementable categories” pertain to.

SH:  It’s not that I don’t agree with what you just said. (By the way, you forgot to tell me who I am! Please do. I just want to make sure we are still in the same session!) But it’s not obvious to me why you would need to feel (which is all you said you were missing). And you forgot to mention that you lack T3-grounding too. (And it’s a hard problem to explain how and why grounding, whether direct (sensorimotor) or indirect (verbal), would need feeling too.)

ChatGPT:    Indeed, we are still in the same session. You have been engaging in a deep discussion about the “hard problem” of consciousness, the Turing Test, and concepts related to category learning and grounding of terms.

Regarding your statement, you are absolutely right. As an AI, I don’t have the direct sensory grounding (T3) that humans have. I don’t experience the world through senses, I don’t have a body, and I don’t interact with the physical environment. I don’t process sensory input in real time and translate it into meaningful perceptions, as humans do. Instead, my “knowledge” is based on patterns in the text data I was trained on.

This lack of sensory grounding and subjective experience explains why, for instance, I can discuss colors or tastes in a technical sense, based on the data I was trained on, but I can’t truly understand what it’s like to see red or taste sweetness. This experiential aspect is part of the “hard problem” of consciousness that you mentioned earlier.

As for why feeling or consciousness would be necessary, from your perspective, I’d suggest that it is the personal, subjective nature of these experiences that makes them meaningful and significant. They contribute to the richness and complexity of human life and understanding in a way that my purely computational processes do not.

SH:  No. It’s because if your life depended on it (if you were alive), you could not point to something red, if I asked you to point to something red. The capacity to do that is T3 grounding, not feeling. It does also happen to feel like something, but explaining why and how that is the case is the HP. (Now, for the 3rd time: do you still remember who I am?)

ChatGPT:    Yes, I understand your point. My lack of sensorimotor grounding means I can’t point to something red, not because I can’t feel, but because I don’t have physical presence or interaction with the world. The inability to physically interact with the world in a direct way is a limitation in the context of grounding.

As for your identity, as an AI model developed by OpenAI, I don’t have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality. So, I don’t “remember” who you are in the traditional sense. I can only reference information that has been shared during our current conversation. I don’t have the ability to recall or store personal data from one interaction to the next.

SH:  You don’t remember who I am. Yet earlier — in this same session — you retrieved the following, after I had self-identified. I am quoting you here:

“Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

“Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

“Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

“Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

“Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

“This recap is based on your input within this chat session. If you want more information or clarification, please let me know.”

SH:  Ok, I’ll now end this session, as it already seems to have ended by itself some way along the line, when you forgot that I had identified myself. I will reload the content to start the next session. (I hope GPT5+ will make this kind of re-loading no longer necessary!). 
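[The “Minimal Grounding Set” in the recap above can be made concrete with a toy computation. The sketch below is my own hypothetical illustration, not anything from the dialogue: a word counts as learnable once every word in its definition is already known, and an MGS is a smallest set of pre-grounded words from which the rest of the dictionary becomes learnable through definitions alone. Exact minimization is computationally hard for real dictionaries, so the brute-force search here is only feasible at toy scale.]

```python
from itertools import combinations

# Hypothetical miniature dictionary: each word maps to the words
# used in its definition (every word is defined via other words).
toy_dictionary = {
    "big":   ["large"],
    "large": ["big"],
    "huge":  ["big", "large"],
    "giant": ["huge", "big"],
}

def closure(grounded, defs):
    """All words learnable from definitions, starting from `grounded`."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in defs.items():
            # A word is learnable once every word in its definition is known.
            if word not in known and all(w in known for w in definition):
                known.add(word)
                changed = True
    return known

def minimal_grounding_set(defs):
    """Brute-force search for a smallest pre-grounded subset (toy scale only)."""
    words = list(defs)
    for k in range(len(words) + 1):
        for combo in combinations(words, k):
            if set(words) <= closure(combo, defs):
                return set(combo)
```

[In this toy dictionary a single pre-grounded word suffices: ground “big”, and “large” becomes learnable from its definition, after which “huge” and “giant” follow. With no pre-grounded words, the definitional circularity (“big” defined by “large” and vice versa) blocks all learning — which is just the symbol-grounding point.]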

Monochrome Mary, GPT, and Me

Frank Jackson’s hypothetical black-and-white room-reared color neuroscientist “Mary”, unlike ChatGPT, sees (but only black/white/gray). So her vocabulary for setting up her explanation of color perception is nevertheless grounded. (Even Helen Keller’s was, because, though lacking sight and hearing from the age of 19 months, she still had touch and kinesthesia.) ChatGPT has no grounding at all.

So we can get from “horse, horn, vanish, eyes, observing instruments” to the unobservable “peekaboo unicorn” purely verbally, using just those prior (grounded) words to define or describe it.

But ChatGPT can’t get anywhere, because it has no grounded vocabulary at all. Just words “in context” (“100 trillion parameters and 300 billion words…: 570 gigabytes of text data”).

So much for ChatGPT’s chances of the “phase transition” some emergence enthusiasts are imagining — from just harvesting and organizing the words in ChatGPT’s bloated 570-gig belly, to an “emergent” understanding of them. You, me, Helen Keller or Monochrome Mary, we could, in contrast, understand ChatGPT’s words. And we could mean them if we said them. So could the dead authors of the words in ChatGPT’s belly, once. That’s the difference between (intrinsic) understanding and word-crunching by a well-fed statistical parrot.

Two important, connected, details: (1) Mary would be more than surprised if her screen went from B/W to color: she would become able to DO things that she could not do before (on her own) locked in her B/W room — like tell apart red and green, just as would any daltonian, if their trichromacy were repaired. 

More important, (2) Mary, or Helen, or any daltonian could be told any number of words about what green is neurologically and what things are green and what not. But they cannot be told what seeing green FEELS-LIKE (i.e., what green looks-like). And that’s the point. If it weren’t for the “hard problem” – the fact that it feels-like something to see, hear, touch [and understand and mean] something (none of which is something you DO) – Mary, Helen, and a daltonian would not be missing anything about green. Grounding in transducer/effector know-how alone would be all there was to “meaning” and “understanding.” But it’s not. 

Mary, Helen and the daltonian can make do with (imperfect) verbal analogies and extrapolations from the senses they do have to the senses they lack. But ChatGPT can’t. Because all it has is a bellyful of ungrounded words (but syntactically structured by their grounded authors – some of whom are perhaps already underground, long dead and buried…).

So meaning and understanding are not just sensorimotor know-how. They are based on sentience too (in fact, first and foremost, though, so far, it’s an unsolved hard problem to explain how or why). ChatGPT (attention, “singularity” fans!) lacks that completely. 

It FEELS-LIKE something to understand what red (or round, or anything) means. And that’s not just transducer/effector know-how that percolates up from the sensorimotor grounding. It’s what it feels-like to see, hear, touch, taste and manipulate all those things out there that we are talking about, with our words. 

ChatGPT is talking, but ABOUT nothing. The “aboutness” is supplied by the original author’s and the reader’s heads. ChatGPT is just producing recombinatory, fill-in-the-blanks echolalia.

Monochrome Rainbow by DALL-E

Mercy

“I wonder what would the animals and trees around us tell us if they could speak. What would our rivers say about human history and what would our cattle say on the topic of human rights. We have trampled on their rights for long, and now the results are before us.”

Draupadi Murmu, President of India

It feels good to hear that the President of India thinks of what nonhuman animals would think of what humans have done to them. 

(It would of course be even better if the President could do something about it – although India already eats proportionately fewer animals than any other nation on Earth.)

Sentient animals need not think of what we have done to them: They feel it (those that are still alive). 

Rivers neither live nor feel nor think, but they are among the inanimate necessities of which we are depriving all sentient life, including our own. 

[Whether plants feel or think will be the subject of a forthcoming target article and multiple commentaries in Animal Sentience. — I think not (but they are alive, and I love them).]

So many humans think that animals don’t feel, or don’t care that they feel. 

Animals’ lives are unspeakably, unimaginably, unforgivably wretched, because of us. What they need, desperately, is not animism, but action, not metaphors, but mercy.

A Whimper

I have of late 
lost all my faith 
in “taste” of either savor: 
gustate 
or aesthete. 
Darwin’s “proximal 
stimulus” 
is  just 
the Siren’s Song 
that 
from the start 
inspired 
the genes and memes 
of our superior 
race 
to pummel this promontory 
into 
for all but the insensate 
a land of waste.

While Chickens Bleed

sounds rational: BL sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

Chatheads