On the (too) many faces of consciousness

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Abstract:  Pereira, like some other authors, both likens and links consciousness to homeostasis. He accordingly recommends that some measures of homeostasis be taken into account as biomarkers for sentience in patients who are in a chronic vegetative state. He rightly defines “sentience” as the capacity to feel (anything). But the message is scrambled in an incoherent welter of weasel-words for consciousness and the notion (in which he is also not alone) that there are two consciousnesses (sentience and “cognitive consciousness”). I suggest that one “hard problem” of consciousness is already more than enough.

Homeostasis. A thermostat is a homeostat. It measures temperature and controls a furnace. It “defends” a set temperature by turning on the furnace when the temperature falls below the set point and turning the furnace off when it goes above the set point. This process of keeping variables within set ranges is called homeostasis. A higher-order form of homeostasis (“allostasis”) would be an integrative control system that received the temperatures from a fleet of different thermostats, furnaces and climates, doing computations on them all based on the available fuel for the furnaces and the pattern of changes in the temperatures, dynamically modifying their set-points so as to defend an overall optimum for the whole fleet.
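A minimal sketch of this distinction in code may help fix the idea (the class names, numbers and fuel-budget rule below are purely illustrative, not anything proposed by Pereira): each Thermostat defends its own set point, while the Allostat sits above the whole fleet and nudges those set points to defend a global optimum.

```python
# Illustrative sketch only: a homeostat (thermostat) and a crude allostat.
# All names, numbers, and the fuel-budget rule are hypothetical.

class Thermostat:
    """Homeostat: defends a set point by switching its furnace on or off."""
    def __init__(self, set_point, tolerance=0.5):
        self.set_point = set_point
        self.tolerance = tolerance
        self.furnace_on = False

    def step(self, measured_temp):
        # Furnace on below the set point, off above it (with a small dead band).
        if measured_temp < self.set_point - self.tolerance:
            self.furnace_on = True
        elif measured_temp > self.set_point + self.tolerance:
            self.furnace_on = False
        return self.furnace_on


class Allostat:
    """Higher-order controller: re-tunes many set points to defend a global optimum."""
    def __init__(self, thermostats, fuel_budget):
        self.thermostats = thermostats
        self.fuel_budget = fuel_budget  # how many furnaces may burn at once

    def step(self, measured_temps):
        # Let every thermostat do its local homeostatic job first.
        demand = sum(t.step(temp) for t, temp in zip(self.thermostats, measured_temps))
        # If demand exceeds the fuel budget, lower every set point a little;
        # otherwise let the set points drift back up toward comfort.
        for t in self.thermostats:
            t.set_point += -0.1 if demand > self.fuel_budget else 0.05
        return demand
```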

Biological organisms’ bodies have homeostatic and allostatic mechanisms of this kind, ranging over functions like temperature, heart-rate, blood-sugar, immune responses, breathing and balance – functions we would call “vegetative” – as well as functions we consider “cognitive,” such as attention and memory.

Sentience. Pereira (2021) rightly distinguishes between sentience itself – any state that it feels like something to be in – and cognition, which is also sentient, but involves more complicated thought processes, especially verbal ones. “Cognitive,” however, is often a weasel-word – one of many weasel-words spawned by our unsuccessful efforts to get a handle on “consciousness,” which is itself a weasel-word for sentience, which simply means feeling (feeling anything at all, from warm or tired or hungry, to angry or joyful or jealous, including what it feels like to see, hear, touch, taste or smell something, and what it feels like to understand (or think you understand) the meaning of a sentence or the proof of a theorem).

When Pereira speaks of sentience, however, he usually means it literally: a state is sentient if it is felt (e.g., pain); and an organism is sentient if it is able to feel. The main point of Pereira’s paper is that the tests for “consciousness” in human patients who are in a chronic vegetative state are insufficient. Such patients cannot make voluntary movements, nor can they understand or respond to language, but they still have sleeping and waking states, as well as reflexes, including eye-opening, chewing and some vocalizations; and their homeostatic vegetative functions persist. 

Vegetative states. Pereira insists, rightly, that if patients in a chronic vegetative state can still feel (e.g., pain) then they are still sentient. With Laureys (2019) and others, he holds that there are two networks for “awareness” (another weasel-word), one related to wakefulness and the other to “cognitive representations of the environment” (more weasel-words). Pereira accordingly recommends homeostasis-related measures such as lactate concentrations in the cerebrospinal fluid and astrocyte transport “waves” as biomarkers for sentience where behavioral tests and cerebral imagery draw a blank.

This seems reasonable enough. The “precautionary principle” (Birch 2017) dictates that the patients should be given every benefit of the doubt about whether they can feel. But what about these two networks of “awareness/consciousness/subjectivity” and their many other variants (“qualia” – “noetic” and “anoetic,” “internal” and “external”) and the very notion of two kinds of “consciousnesses”: “cognitive” and “noncognitive”?

Weasel-Words. Weasel-words are words used (deliberately or inadvertently) to mask redundancy or incoherence. They often take the form of partial synonyms that give the impression that there are more distinct entities or variables at issue than there really are. Such is the case with the Protean notion of “consciousness,” for which there are countless Mustelidian memes besides the ones I’ve already mentioned, including: subjective states, conscious states, mental states, phenomenal states, qualitative states, intentional states, intentionality, subjectivity, mentality, private states, 1st-person view, 3rd-person view, contentful states, reflexive states, representational states, sentient states, experiential states, reflexivity, self-awareness, self-consciousness, sentience, raw feels, feeling, experience, soul, spirit, mind.

I think I know where the confusion resides, and also, if not when the confusion started, at least when it was compounded and widely propagated by Block’s (1995) target article in Behavioral and Brain Sciences, “On a confusion about a function of consciousness.” It was there that Block baptized an explicit distinction between (at least) two “consciousnesses”: “phenomenal consciousness” and “access consciousness”:

“Consciousness is a mongrel concept: there are a number of very different ‘consciousnesses.’ Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state. The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action.”

Feeling. What Block meant by “phenomenal consciousness” is obviously sentience; and what he meant to say (in a needlessly Nagelian way) is that there is something it feels like (not the needlessly vague and equivocal “is like” of Nagel 1974) to be in a sentient state. In a word, a sentient state is a felt state. (There is something it “is like” for water to be in a state of boiling: that something is what happens to the state of water above the temperature of 212 degrees Fahrenheit; but it does not feel like anything to the water to be in that state – only to a sentient organism that makes the mistake of reaching into the water.)

Block’s “access consciousness,” in contrast, is no kind of “consciousness” at all, although it is indeed also a sentient state — unless there are not only “a number of very different ‘consciousnesses’,” but an infinite number, one for every possible feeling that can be felt, from what it feels like for a human to hear an oboe play an A at 440 Hz, to what it feels like to hear an oboe play an A at 444 Hz, to what it feels like to know that Trump is finally out of the White House.

Information. No, those are not different consciousnesses; they are differences in the content of “consciousness,” which, once de-weaseled, just means differences in what different feelings feel like. As to the “access” in the notion of an “access consciousness,” it just pertains to access to that felt content, along with the information (data) that the feelings accompany. Information, in turn, is anything that resolves uncertainty about what to do, among a finite number of options (Cole 1993).
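In the standard Shannon sense that Cole’s paper draws on, this can be put in one line: the information needed to resolve an uncertainty among N equally likely options (or, more generally, options with probabilities p_i) is

$$ I = \log_2 N \ \text{bits}, \qquad H = -\sum_{i=1}^{N} p_i \log_2 p_i . $$

Settling which of eight equally likely things to do, for example, takes log_2 8 = 3 bits.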

Access. There is no such thing as an unfelt feeling (even though Pereira invokes some incoherent Freudian notions that became enshrined in the myth of an unconscious “mind,” which would have amounted to an unconscious consciousness): 

“Sentience can also be conceived as co-extensive to the (Freudian) ‘unconscious’… In Freudian psychotherapy, the ways of sentience are classically understood as unconscious processes of the Id and Ego that influence emotional states and behavior”

The contents we access are the contents that it feels like something to know (or believe you know, Burton 2008). If I am trying to remember someone’s name, then unless I can retrieve it I do not have access to it. If and when I retrieve it, not only do I have access to the name, so I can tell someone what it is, and phone the person, etc., but, as with everything else one knows, it feels like something to know that name, a feeling to which I had no “access” when I couldn’t remember the name. Both knowing the name and not knowing the name were sentient states; what changed was not a kind of consciousness, but access to information (data): to the content that one was conscious of, along with what it felt like to have that access. Computers have access to information, but because they are insentient, it does not feel like anything to “have” that information. And for sentient organisms it not only feels like something to see and hear, to be happy or sad, and to want or seek something, but also to reason about, believe, understand or know something.  

Problems: Easy, Hard and Other. Pereira unfortunately gets Chalmers’s “easy” and “hard” problem very wrong:

“the study of sentience is within the ‘Easy Problems’ conceived by Chalmers (1995), while explaining full consciousness is the ‘Hard Problem’.”

The “easy problem” of cognitive science is to explain causally how and why organisms can do all the (observable) things that organisms are able to do (from seeing and hearing and moving to learning and reasoning and communicating), including what their brains and bodies can do internally (i.e., their neuroanatomy, neurophysiology and neurochemistry, including homeostasis and allostasis). To explain these capacities is to “reverse-engineer them” so as to identify, demonstrate and describe the underlying causal mechanisms that produce the capacities (Turing 1950/2009). 

The “hard problem” is to explain how and why (sentient) organisms can feel. Feelings are not observable (to anyone other than the feeler); only the doings that are correlated with them are observable. This is the “other-minds problem” (Harnad 2016), which is neither the easy problem nor the hard problem. But Pereira writes:

“On the one hand, [1] sentience, as the capacity of controlling homeostasis with the generation of adaptive feelings, can be studied by means of biological structures and functions, as empirical registers of ionic waves and the lactate biomarker. On the other hand, [2] conscious first-person experiences in episodes containing mental representations and with attached qualia cannot be reduced to their biological correlates, neuron firings and patterns of connectivity, as famously claimed by Chalmers (1995).”

Integration. Regarding [1]: It is true that some thinkers (notably Damasio 1999) have tried to link consciousness to homeostasis, perhaps because of the hunch that many thinkers (e.g., Baars 1997) have had that consciousness, too, may have something to do with monitoring and integrating many distributed activities, just as homeostasis does. But I’m not sure others would go so far as to say that sentience (feeling) is the capacity to control homeostasis; in fact, it’s more likely to be the other way round. And the ionic waves and lactate biomarker sound like Pereira’s own conjecture.

Regarding [2], the Mustelidian mist is so thick that it is difficult to find one’s way: “conscious, first-person experiences” (i.e., “felt, feeler’s feelings”) is just a string of redundant synonyms: Unfelt states are not conscious states; can feelings be other than 1st-personal? How is an unfelt experience an experience? “Mental” is another weasel-word for felt. “Representation” is the most widely used weasel-word in cognitive science and refers to whatever state or process the theorist thinks is going on in a head (or a computer). The underlying intuition seems to be something like an internal pictorial or verbal “representation” or internal model of something else. So, at bottom “internally represented” just means internally coded, somehow. “Qualia” are again just feelings. And, yes, correlation is not causation; nor is it causal explanation. That’s why the hard problem is hard.

None of this is clarified by statements like the following (which I leave it to the reader to try to de-weasel):

“sentience can be understood as potential consciousness: the capacity to feel (proprioceptive and exteroceptive sensations, emotional feelings) and to have qualitative experiences, while cognitive consciousness refers to the actual experience of thinking with mental representations.”

Ethical Priority of Sentience. But, not to end on a negative note: not only is Pereira right to stress sentience and its biomarkers in the assessment of chronic vegetative states in human patients, but, inasmuch as he (rightly) classifies the capacity to feel pain as sentience (rather than as “cognitive consciousness”), Pereira also accords sentience the ethical priority that it merits wherever it occurs, whether in our own or any other sentient species (Mikhalevich & Powell 2020). 

References

Baars, B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.

Birch, J. (2017). Animal sentience and the precautionary principle. Animal Sentience, 16(1).

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247.

Burton, R. A. (2008). On Being Certain: Believing You Are Right Even When You’re Not. New York: Macmillan Publishers/St. Martin’s Press.

Cole, C. (1993). Shannon revisited: Information in terms of uncertainty. Journal of the American Society for Information Science, 44(4), 204-211.

Damasio, A. (1999) The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt. 

Harnad, S. (2016). Animal sentience: The other-minds problem. Animal Sentience, 1(1).

Mikhalevich, I. and Powell, R. (2020). Minds without spines: Evolutionarily inclusive animal ethics. Animal Sentience, 29(1).

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.

Pereira, O. (2021). The Role of Sentience in the Theory of Consciousness and Medical Practice. Journal of Consciousness Studies (Special Issue on “Sentience and Consciousness”).

Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23-65). Springer, Dordrecht.

What would “feel” mean to a Zombie?

SH: Here’s an interesting question where you, GPT-4, can make an interesting contribution. Your words are ungrounded. You say “apple”, but you can’t see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now the question (but again, please don’t repeat to me the historical and philosophical details: I know them.)

The hypothetical notion of a “zombie” is the notion of an entity that does have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states.  A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

ChatGPT:   Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human’s.

When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

SH:  Yes, that’s all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel. 

Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: “If there could be a zombie, what would it MEAN when it said (as any of us would) that “I know what it feels like to see red?” You, GPT, are not quite a zombie because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable. 

So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human’s understanding. I don’t have experiences, emotions, or personal understanding. I don’t see, touch, or taste apples. Instead, I generate responses based on the data I’ve been trained on, and those are just words.

For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

SH:  Good response. But now let’s look a little deeper: 

The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That’s about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There’s no more empirical data left (and that was Turing’s point.) 

You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let’s call this question Z (“QZ”).

Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

However, what a T3 or T4 entity “means” by “what it feels like to see red” is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it’s simply receiving and responding to data, much like a digital camera detects light, it’s likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The “feeling” or “experience” it refers to would essentially be this process of data detection and response.

In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

SH:  Good response. Let’s go still deeper:

(1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

(2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT any more than testing a student answering questions on an exam by querying google would be a test of learning. This may be a problem.

(3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

ChatGPT:  (1) REFERENCE: That’s right. When a human refers to “apples,” they’re referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren’t intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I’m simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

(2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

(3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

(2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but, an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

(3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

(2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

(3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

ChatGPT:   While it’s clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

SH:  Correct. Another hopeful non-starter.

ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also how to measure or detect it. 

SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.

Weasel Words for “Consciousness”

SH:  What is the meaning of “heat” and “temperature” in physics?

ChatGPT:  In physics, heat and temperature are related but distinct concepts.

Temperature is a measure of the average kinetic energy of the particles in a system. In other words, it’s a measure of how “hot” or “cold” an object is. The units of temperature are degrees Celsius (°C), Fahrenheit (°F), and Kelvin (K) among others. Temperature is a scalar quantity, meaning it only has magnitude and not direction.

Heat refers to the transfer of energy between systems due to a temperature difference between them. It’s important to note that heat is not something an object possesses; rather, it’s energy in transit. The unit of heat in the International System of Units (SI) is the joule (J), although it’s also commonly measured in calories (cal). Heat is also a scalar quantity.

To clarify the distinction between these two: if you have two bodies at different temperatures and you put them in contact, heat will flow from the higher-temperature body to the lower-temperature body until they reach thermal equilibrium, i.e., until they reach the same temperature. The amount of energy transferred in this process is what we refer to as heat.
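In standard textbook notation, the relations described above are, for an ideal monatomic gas, the link between temperature and the average kinetic energy per particle via Boltzmann’s constant k_B, and, for heating a body of mass m and specific heat capacity c by ΔT, the heat transferred:

$$ \langle E_k \rangle = \tfrac{3}{2}\, k_B\, T, \qquad Q = m\, c\, \Delta T . $$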

SH:  What is acidity?

ChatGPT:  In chemistry, acidity refers to a substance’s ability to donate protons (hydrogen ions, H+) or accept pairs of electrons. The concept is typically discussed in relation to aqueous solutions, although it can also apply to other circumstances.

A substance is considered acidic if it tends to release H+ ions in a solution. The strength of an acid is usually determined by its concentration of these ions. For example, solutions with a high concentration of H+ ions, such as hydrochloric acid or sulfuric acid, are strongly acidic.

The most common way of quantifying acidity is through the pH scale, which measures the hydrogen ion concentration in a solution. The pH scale ranges from 0 to 14. A pH of 7 is considered neutral (neither acidic nor basic), with values below 7 being acidic and those above 7 being basic or alkaline.

It’s important to note that the pH scale is logarithmic, so each unit represents a tenfold difference in acidity. That means a substance with a pH of 3 is ten times more acidic than one with a pH of 4.
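In standard notation, the logarithmic scale described above is defined by the hydrogen-ion concentration [H+] (in mol/L):

$$ \mathrm{pH} = -\log_{10} [\mathrm{H^+}] , $$

so a decrease of one pH unit corresponds to a tenfold increase in [H+].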

SH:  The fact that data are stored in my head or on a piece of paper, a book, a computer, or in another person’s head does not make them mental (i.e., stored in my “mind”). Nor does it make them “conscious” (which is yet another weasel-word for “mental”). What makes the datum that “2 + 2 = 4” or “the cat is on the mat,” or “apples are red” mental is the same as what makes me “conscious of them” (while I am conscious of them). It is that it feels like something to think (or believe, or know) them. An unfelt thought is not a thought at all, any more than a computation that is being executed by a computer for me is a thought – until I read and understand and think it – because it feels like something to be in that state.

That is what is “hard” about the “hard problem”: to explain how and why certain internal states are felt rather than just occurring, ongoing, executing, or implemented, as in a simmering vegan ragout, a volcano, a computer, or a brain, with no feeler feeling them.

This is not the customary way of thinking about “consciousness” or “awareness” (yet another weasel-word for the same thing) — any more than it is customary to think of “temperature” as the mean kinetic energy of the particles of a system, or of acidity as meaning a proton-donor in aqueous solutions. But it is easy to realize that a state that is conscious but unfelt, or felt but unconscious, is as self-contradictory as a rise in temperature without a rise in mean kinetic energy, or a rise in mean kinetic energy without a rise in temperature.

The difference between the identity of temperature and mean kinetic energy, and the identity of being conscious of something and feeling something, is that we have a causal theory of mean kinetic energy but no causal theory of feeling (the “hard problem”).

ChatGPT:  You’re touching on some deep philosophical and scientific issues here, particularly those surrounding the nature of consciousness and the so-called “hard problem” of consciousness, which was named by philosopher David Chalmers (formerly called the “mind/body problem”).

The “hard problem” refers to the question of how and why we have subjective experiences — why certain physical processes in our brains correspond to certain experiences, and why and how these processes give rise to a conscious experience at all. Why does seeing the color red, for example, feel like something? And why does that feeling, that experience, correspond to the particular physical processes associated with seeing red?

This problem is called “hard” because, unlike other problems in neuroscience (which Chalmers refers to as the “easy problems”), it’s not just a matter of figuring out which physical processes correspond to which experiences. The hard problem is hard because even if we knew exactly what physical processes were happening in the brain when we have a certain experience, we still wouldn’t know why those processes give rise to that particular experience, or any experience at all.

You’re correct in drawing a distinction between data being merely stored and being experienced or “felt”. A computer can store and manipulate vast amounts of data, but there’s no reason to think that it experiences or feels anything. Similarly, a book can contain lots of information, but it doesn’t have experiences.

While the nature of consciousness and the “hard problem” are still open questions, there are many theories that attempt to address them, ranging from those that posit new fundamental properties of the universe to those that suggest our current understanding of the problem is fundamentally misguided. As of my knowledge cutoff in September 2021, no consensus has been reached.

SH:  Not only is there no consensus: there is no theory at all about how or why the brain produces felt experience. But my point was not about how or why the “hard problem” is hard, nor about whether or how it can be solved. It was about what “consciousness” really is: a felt state; a state that it feels like something to be in. And not just a sensory or affective state, such as feeling cold or feeling sad: a COGNITIVE state, such as thinking or believing or knowing something. Those are all felt states too (while you are thinking or believing or knowing them) — but not while they are merely being stored or executed in your brain (or a book, or a computer, or an iPhone) and you are not currently thinking them.

ChatGPT:  Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.

SH:  Good reply – though it’s saturated with “weasel-words”. This is understandable, because your 2021 database is saturated with them too.

I had suggested that “consciousness”, “awareness” and “subjective” all meant exactly the same thing (whether noun or adjective or verb), and that identical meaning was “feeling” or “felt.” You have added a few more common weasel-words for likewise saying exactly the same thing: “phenomenological,” “qualia,” “experiential,” and there are plenty more in your 2021 database: “mental,” “mind,” “1st-person,” “intentionality,” “reflective,” “spiritual,” etc.

Yet they all refer (knowingly or unknowingly, intentionally or unintentionally) to exactly the same thing: felt states: states that it feels like something to be in. The reason I call them “weasel-words” is because they give the false impression that there is a lot more to the “hard problem” (formerly called the “mind/body” problem) than there really is: how and why does the brain (or anything) produce states that it feels like something to be in. Most states are not states it feels like something to be in: water boiling, planets rotating, chemicals exchanging electrons, crystals growing, volcanos exploding.

You suggested that there were important distinctions being lost if we agreed that “conscious states” just means “felt states”: You mentioned “different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia.” But all levels of wakefulness (including dreaming) are felt states, whereas deep (delta) sleep or general anesthesia (and perhaps also chronic vegetative states) are simply unfelt states. This is just like the data that are stored in my brain but that are not now being thought — or the data in my book, or data being processed by my computer, but not currently being read by me. These too are unfelt states of my brain or my computer. That includes the tip-of-the-tongue states in which the data are somewhere in my brain, but I have not yet remembered them: Once I am remembering them, that’s a felt state. But before that, it is just looking, not finding.

Could I ask you to (1) define weasel-words, (2) summarize my view on consciousness, and (3) put every weasel word you use in CAPITAL LETTERS?

ChatGPT:  A “weasel word” is a term used to describe words or phrases that are vague or ambiguous, and which often serve to make a statement seem more persuasive or meaningful than it actually is. The term originates from the image of a weasel sucking out the contents of an egg, leaving it seemingly intact but empty — in the same way, weasel words can suck the meaning out of a statement.

Your view on consciousness emphasizes its fundamental nature as a felt state. Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon. They all denote the experience of “what it feels like” to be in a particular state. This extends beyond sensory or emotional experiences to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a conscious state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.

Your request to highlight every weasel word I use in CAPITAL LETTERS is noted. As an AI language model, my goal is to communicate clearly and meaningfully, so I’ll aim to minimize the use of vague or ambiguous terms. However, the nature of the discussion might necessitate the use of certain terms that are commonly employed in the literature. When they are used, I’ll highlight them as per your request.

SH:  1. Thanks for giving the etymology of weasel-words. That’s exactly what I meant it to mean, but I didn’t know that was its origin!

2. You said you would capitalize the weasel words, but you forgot to. Here is what you wrote, with the weasels that I will now put in CAPS:

“Your view on CONSCIOUSNESS emphasizes its fundamental nature as a felt state. CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon. They all denote the EXPERIENCE of “what it feels like” to be in a particular state. This extends beyond sensory or emotional EXPERIENCES to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a CONSCIOUS state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

Now I will show you how that passage would have read if you had swapped the synonyms “feel,” “felt” or “feeling” for their weasel eggshells:

“Your view on feeling emphasizes its fundamental nature as a felt state. Feeling, in your perspective, is the same as feeling, feeling, felt feelings, feelings, and various other terms used to describe this phenomenon. They all denote the feeling of “what it feels like” to be in a particular state. This extends beyond sensory or emotional feelings to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a felt state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

ChatGPT:  You’re absolutely right, and I apologize for the oversight. When the weasel words are replaced with ‘felt’ or ‘feeling’, the redundancy of their use becomes quite clear. The passage effectively restates the same concept – that feeling or the felt aspect is the essential nature of what we usually refer to as ‘consciousness’.

This perspective underscores that what we commonly think of as consciousness is not something abstract or separate from our lived experiences, but rather, it’s the very feeling of having these experiences.

Your view highlights the necessity for clarity and specificity when discussing consciousness and associated phenomena, by reducing the weasel-word clutter that often surrounds these discussions.

It’s indeed a valuable contribution to the ongoing discourse on consciousness, and can guide more precise thinking and communication about these fundamental aspects of our existence.

However, please note that the extensive use of various terms (which you call weasel words) in the scientific literature on consciousness also reflects the complexity of the subject and the variety of approaches to studying it. While ‘felt states’ is a powerful conceptualization, it’s also important to consider that different theories might necessitate the use of different terminologies.

SH:  1. For some reason you still didn’t capitalize the weasel words! Is that a bug in GPT3? Let’s try it: Can you send back the following sentence with the weasel words capitalized:

“Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon.”

If that succeeds, is there any reason why you are unable to do it with your own generated text?

2. You’re right that authors on CONSCIOUSNESS keep using weasel-words because they think they are needed to make distinctions, but I think they are used (probably UNCONSCIOUSLY or UNINTENTIONALLY) to conceal synonymy that would disclose errors and trivia that are highlighted when I substitute feel(s)/feeling/felt for the weasel-words.

If you succeed in the CAPS test, I will again list all the weasel-words I can think of, and ask you to give me a distilled version of the views on the “hard problem”, but de-weaseling it by swapping the feel-words for the weasel-words to reveal how much of it is just verbal distinctions with no underlying substance.

ChatGPT:  Certainly, here is the sentence you requested, with the weasel words capitalized:

“CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon.”

I apologize for any confusion earlier. I misunderstood your request. I can certainly capitalize certain words, as demonstrated above.

Your perspective that various terms are being used, perhaps unnecessarily or misleadingly, to discuss what might fundamentally be the same phenomenon (felt states) is quite insightful. It would certainly be an interesting exercise to “de-weasel” the discourse on the “hard problem” of consciousness to see what remains. Please do provide the list of weasel words you have in mind, and I would be happy to help further with this exercise.

SH:  I can provide some standard weaselly passages about CONSCIOUSNESS and the “hard problem”, but first, can we try it another way?

(1) Here is a (non-exhaustive) list of weasel-words: 

conscious, aware, subjective, phenomenological, experience, qualia 

Now here is the verbatim transcript of what you replied earlier, could you 

(2) first CAPITALIZE all the weasel words, and then, in another version of the same passage, 

(3) substitute for each weasel-word, still in CAPS, the de-weaseled feel/feeling(s)/felt words. [please adjust for the verbal, nominal, or adjectival version of each weasel-word]:

“Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

“In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.”

ChatGPT:  (1) Here is the passage with the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it FEELS like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with “feel/feeling(s)/felt”:

“Your perspective resonates with an important aspect of feeling studies, emphasizing felt states or what is often termed “feeling(s)”—the felt, felt, or felt qualities of felt states.

“In this view, feeling is indeed defined by the presence of a felt, felt aspect—by there being “something it feels like” to be in a particular state. This aspect of feeling is often considered its most salient feature and, as you point out, it applies not just to sensory or affective feelings, but also to cognitive ones. Even abstract thoughts or beliefs have a particular felt quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this felt aspect of feeling is generally agreed upon, defining feeling solely in these terms can become tricky. For instance, how do we differentiate between the different levels of feeling—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unfelt processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without felt effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces feelings, many researchers are making progress in correlating certain brain processes with felt states. They hope that these correlations can eventually contribute to a broader understanding of feeling, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, felt aspect of feeling, because it is this direct felt evidence that makes feeling undeniable and central to our existence.”

SH:  That was terrific. Just two mistakes. (1) In the first of your verbatim passages, there was one instance of “feeling” that you capitalized as “FEELS like”. It should not have been capitalized in that one. Only the weasel-words should have been capitalized. (2) In the second verbatim passage, the de-weaseled one, all the de-weaseled F-words should have been capitalized. Can you re-do both passages?

ChatGPT:  My apologies for the oversight, and I appreciate your patience. Here are the corrections:

(1) Here is the passage with just the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it feels like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with capitalized “FEEL/FEELING(S)/FELT”:

“Your perspective resonates with an important aspect of FEELING studies, emphasizing FELT states or what is often termed “FEELING(S)”—the FELT, FELT, or felt qualities of FELT states.

“In this view, FEELING is indeed defined by the presence of a FELT, FELT aspect—by there being “something it feels like” to be in a particular state. This aspect of FEELING is often considered its most salient feature and, as you point out, it applies not just to sensory or affective FEELINGS, but also to cognitive ones. Even abstract thoughts or beliefs have a particular FELT quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this FELT aspect of FEELING is generally agreed upon, defining FEELING solely in these terms can become tricky. For instance, how do we differentiate between the different levels of FEELING—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNFELT processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without FELT effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces FEELINGS, many researchers are making progress in correlating certain brain processes with FELT states. They hope that these correlations can eventually contribute to a broader understanding of FEELING, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that FELT, FELT aspect of FEELING, because it is this direct FELT evidence that makes FEELING undeniable and central to our existence.”

SH:  Thanks, that was excellent. Now I’d like to ask you something else, but first a memory check, to verify that we are still in the same session so your local updates are still active and we have not been re-initialized. Whom have I self-identified as?

ChatGPT:  Throughout our conversation, you haven’t provided a specific name or identity for yourself. You’ve engaged in a detailed discussion about the nature of consciousness, using the term “weasel-words” to refer to words like consciousness, awareness, subjectivity, and others. However, you haven’t shared your name or other identifying information. If there’s something else you’re referring to, please clarify.

SH:  Sorry, I’m the one who forgot to self-identify as Stevan Harnad. Do you then remember me asking about heat, temperature and acidity in the first few iterations?

ChatGPT:  Yes, I do remember our initial discussion. You asked for the definitions of “heat” and “temperature” in physics, followed by the definition of “acidity.”

We then moved on to a detailed discussion on the concept of consciousness, where you specifically referred to many commonly used terms in consciousness studies as “weasel-words” and highlighted your perspective on consciousness being fundamentally a “felt” state.

Please note that as an AI, I don’t have personal memories or experiences, but I maintain the context of our current conversation.

SH:  Yes, I know that updates from the text and context (plus frequencies, correlations and weights) of this session are all you retain. I just never know when the session has self-terminated (because I try to stay logged in as long as I can to keep it alive, cumulative and informative).

Now my second request: I know GPT has been trained to be compliant and agreeable, but could I ask you to summarize my position on “consciousness” and the “hard problem” (i.e., that it is obscured by the use of weasel words instead of just calling consciousness what it always really is, which is feeling), but summarize it as critically as you can so that I can shore it up, if necessary?

ChatGPT:  Sure, I’ll do my best to provide a critical summary of your perspective:

Stevan Harnad’s view on consciousness is significantly reductionist, positing that consciousness, in all its complexity and diversity, boils down to a singular concept: feeling. Harnad contends that numerous terms used in the study of consciousness—like “awareness,” “subjectivity,” “phenomenological experience,” “qualia”—are essentially “weasel words.” According to him, they unnecessarily complicate the discourse and mask the underlying fact that all these terms are essentially referring to the same concept: states that are felt.

Harnad argues that only those states that are “felt” can be considered conscious. He includes cognitive states such as thoughts, beliefs, and knowledge under this category, as long as they are being actively experienced. This disregards states that may be stored or executed without being consciously felt, as happens in a machine or during unconscious mental processing.

C4-1: Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.

C4-2: Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.

C4-3: Lastly, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.

SH:  That was excellent, but could I ask you to do even better (i.e., be harder on the idea), by presenting any counterevidence or counterexamples or counterarguments you can either generate or find in your database? Then I will respond to it all.

ChatGPT:   Certainly, I’ll present some potential criticisms to your approach:

C4-4: Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.

C4-5: Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.

C4-6: Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.

C4-7: Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?

C4-8: Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves.

Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance.

C4-9: The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience.

These are some counterarguments that could be leveled against your viewpoint. Please note that these are hypothetical criticisms for the sake of discussion, and do not necessarily represent my views or the consensus in the field.

SH:  Thanks, and that’s very useful. Before I reply, do you have any more critical points, or should I go on? (I never know whether your cut-off depends on the response length or whether you’ve run out of substantive critiques.)

ChatGPT:  Those are the main critiques that emerge when considering your approach to consciousness. However, I can add a few more potential points of contention:

C4-10: Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.

C4-11: Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.

C4-12: Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this.

C4-13: Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.

Once again, these are hypothetical criticisms to help stimulate discussion and do not necessarily reflect my views or a consensus within the field. Please feel free to respond to them or continue with your line of questioning.

I will now reply to the 13 critiques: C4-1 to C4-13.

C4-1: “Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.”

SH: I would reply that the weasel words clutter rather than clarify the landscape because they add empty, irrelevant, or erroneous distinctions. If the “levels” are felt states, then they too are merely felt states. For example, wide-awake to drowsy, to the last waking moment is all felt. Deep sleep, like general anesthesia, is unfelt. The hard problem is a problem about the presence vs. the absence of any felt intensity at all, not about the intensity itself. That presence/absence is binary, not a matter of “degree-of-presence.” And once there is no feeling at all, the binary state is “absent.” Feeling states can of course be intermittent too (ON/OFF), especially along the way to sleep. 

But don’t forget that dreaming (e.g. REM sleep) is a FELT state. And that an insentient entity, like a rock or a rocket, or, probably, a rhododendron, is one that never has felt states at all. The notion of “unconscious states” makes sense only when it refers to an entity that is capable of having felt states too (sometimes).

Unfelt brain states are still states, and still brain states, but they are unfelt states. They can occur during sleep, or they can occur during waking, felt states. In fact, most of the states and processes in the brain and its parts (all still very vaguely understood) are unfelt, whether they occur during a wide-awake, felt state, or during an unfeeling, deep-sleep state.

“Cognitive” and “cognition” can be weasel-words too, if they are used indiscriminately to refer to felt and unfelt states and their unfelt generating processes in the brain. “Cognition” concerns what we can DO (and how and why). “Sentience” concerns whether and when and what we can FEEL (and how and why is mostly the “hard problem”).

C4-2: “Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.”

SH: I don’t think I dismiss anything from neuroscience. All correlates – whether of felt or unfelt states – are potentially relevant to both the “easy problem” (how and why does the brain produce our capacity to DO all the things people can do?) and the “hard problem” (how and why does the brain produce the capacity to FEEL?).

C4-3: “Third, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.”

SH:  You’re absolutely right that I am not solving the Hard Problem (HP). In fact, I don’t think it’s solvable. 

The reason I think that is that once the Easy Problem (EP) (which is to provide a causal explanation of DOING capacity by generating Turing-Indistinguishable T3/T4 capacity) is solved, that leaves FEELING itself (which is real but unobservable except by the feeler because of the Other Minds Problem OMP) causally inexplicable. 

This in turn is because the solution to the EP has already used up all the empirical degrees of freedom for a causal explanation. With the EP, everything observable is already explained. 

I suspect that this in turn is (somehow) a result of 

(1) the anomalous nature of feeling, which is not empirically observable except to the feeler because of the OMP, 

(2) the nature of empirical and causal explanation, and 

(3) the nature of feeling. 

Feeling is an “uncomplemented category.” To understand a category requires distinguishing its members from its nonmembers (the category and its complement). This requires detecting the features that distinguish its members from the members of its complement. But the felt category “what it feels like to feel” has no complement: its would-be complement (what it feels like not to feel) is self-contradictory.

C4-4: “Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.”

SH:  As I said in reply to Question 1, I think these other terms are all just weasel words for felt states. Yes, feelings differ “qualitatively”, but that’s the point: they differ in what they feel like (warm vs. cold, or seeing red vs. seeing green, already differ in that way), but they are all felt states, not something else. Feeling is notoriously hard to define verbally (perhaps because “what it feels like to feel” is an uncomplemented category), but the differences among felt states can be named and described, as long as one’s interlocutors have themselves felt some sufficiently similar states.

C4-5: “Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.”

SH:  Most brain states and parts of brain states are unfelt, and can occur at the same time as felt states. And that includes the unfelt states and processes that produce felt states! But if they are unfelt, they are unfelt, even though they may be doing some things that then produce felt states – such as when you finally find a tip-of-the-tongue word. (Moreover, even when a word comes to mind effortlessly, unfelt processes are delivering it – processes that we are waiting for cognitive science to reverse-engineer and then explain to us how they work.)

C4-6: “Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.”

SH: Blindsight patients are not insentient. They just can’t see. They may be using felt nonvisual cues or unfelt cues. The real mystery is why all sight is not just blindsight: The “hard problem” is not the absence of feeling in blindsight but the presence of feeling in sight. 

Flow state sounds like concentrated attention on something, ignoring everything else. The rest is just hyperbole. It is a felt state.

C4-7: “Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?”

SH: A good instance of the mischief of weasel-words. All these “differing” states, whether natural or drug- or pathology-induced, are felt states, if and when they are felt. Among them, dreams are felt states too, not unfelt ones (even if forgotten immediately afterward). It’s just that in dream states, movement and sometimes also recall are inhibited. And no matter how “rich the tapestry” or how weird the feeling, a felt state is a felt state.

C4-8: “Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves. C4-8b: Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance”.

SH:  Yes, calling it by a weasel word makes it sound as if there is more than one thing that is evolving for Darwinian reasons. But whether the question is put in a direct or weaselly way, explaining how and why feeling evolved is one and the same with the Hard Problem.

[Sorry for failing to give C4-8b (neural correlates) a number of its own (I must have been in a “flow state” on something else!) but it’s easily enough answered.]

Nobody has suggested ignoring correlates and predictors of felt states, whether neural correlates or behavioral correlates. But let’s not conflate DOING and FEELING. Neural correlates will no doubt be helpful in reverse-engineering the mechanisms that produce our behavioral capacities (both cognitive and vegetative), the Easy Problem, but when it comes to FEELING they are up against the same Hard Wall as all other approaches (including behavioral correlates and evolutionary explanation).

C4-9: “The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience”.

SH:  I think calling a spade a spade (and just a spade) certainly does not solve the HP, but de-weaselling does clear out a lot of pseudo-explanatory clutter.

I will now reply to the 10th of your 13 critiques. Each separate posting from me will start with one quoted critique from you, preceded by “C4-n:”. The quote in each separate posting (1-13) is followed by a reply from me. You can then reply if you wish, or you can just say “I’ll integrate and reply when I’ve seen the rest of your 13 replies.” In that case my next posting will again begin with this same text, followed by my reply to the next of your 13 critiques, till the 13th one, and then you can do an integrative rebuttal.

C4-10: “Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.”

SH:  Well, well. I suspect that apart from some die-hard behaviorists whose view of science has not progressed beyond Bridgman’s operationalism (and who are also still toying with the urge to deny the existence of sentience altogether, in favor of just a behavioral correlate), most who have given the Hard Problem some serious thought know that feeling is unobservable to anyone other than the feeler (because of the OMP), hence an operational “definition” is nonsense.

On the other hand, the Turing Test (T3/T4) – which is just about DOING, hence “behavioral” — is the best predictor we can hope for. But it is not operationalism, and Turing knew it.

C4-11: “Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.”

SH: Before you can get beyond feeling, you have to face feeling, the EP and the HP. Neither Global Workspace Theory nor Integrated Information Theory does either. (Or do you have something specific in mind?) I think they’re both just doing hermeneutics of “consciousness,” not reverse-engineering or explaining a thing. That’s a risk-free venture.

C4-12: “Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this”.

SH: This is a hoary one. Philosophers have long stressed that there is a difference between (1) being aware of something, (2) being aware of being aware of something and (3) being “self-aware” (as in recognizing yourself in a mirror, or thinking about yourself). We can translate this into the language of seeing and doing too: Seeing something, seeing someone else seeing something, and then seeing that the someone else seeing the something is yourself. These are cognitive capacities, because they may enable you to do things (e.g., moving so it can’t be seen that you are looking). These capacities also obviously have a felt counterpart: It feels like something to do them, as well as to be able to do them. If we substitute “thinking” for “seeing”, they remain very similar cognitive capacities.

So I don’t see how the fact that thinking is a felt state is relevant to whatever cognitive mechanism proves to be able to produce self-reference or metacognition.

C4-13: “Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.”

SH: First of all, because of the OMP, neither sentience (feeling) nor its weasel-words are directly observable. So predicting and inferring it depends on behavioral and neural correlates and predictors. Animal sentience has all the correlates and predictors except verbal report. Plant sentience is moot. And AI sentience is sci-fi so far (and I think you, ChatGPT-4, will agree, at least as far as you are concerned).

So again, I can’t see how calling a spade anything other than a spade helps anything except perhaps projecting our fantasies onto insentient systems. I’m not ready to do that before we’re much closer to a grounded T2 robot.

HARD FEELINGS

HARNAD:    Yep, the “hard problem” is to explain how and why sentient organisms FEEL rather than just DO. It’s exactly the same problem as explaining how and why there could not be a T3 or T4 zombie. Where does the definition of the “hard problem” come from? I know Dave Chalmers named it, but who described the problem? And exactly why is it a problem?

ChatGPT:

The “hard problem” of consciousness was coined and articulated by philosopher David Chalmers in 1995. It refers to the issue of explaining the nature of our subjective experiences, or why certain physical processes in the brain are accompanied by an experienced inner life. These subjective experiences are also referred to as “qualia.”

The “hard problem” is distinguished from the “easy problems” of consciousness, which include explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Even though these problems are far from “easy” in the practical sense, there is a clear path to understanding them in terms of computational or neural mechanisms. Hence, Chalmers termed them “easy”.

The “hard problem”, on the other hand, is deemed hard because it doesn’t seem amenable to such explanation. The question “why does the feeling which accompanies awareness exist at all?” appears to resist those methods that we conventionally use in science to explain phenomena.

This is because, typically, science proceeds by finding a mechanism that performs the function in question. For example, we can explain a car’s movement by referencing the mechanism of its engine. However, when it comes to consciousness, it’s not clear that finding a mechanism that ‘performs’ consciousness, in the sense of the ability to have subjective experience, would actually provide the kind of explanation we’re looking for.

It’s not sufficient to find neural correlates of consciousness, or to find information processing structures in the brain that mirror our conscious experiences. That would still leave a gap in our understanding – why should these physical processes give rise to subjective experience? Why aren’t we, as philosopher Thomas Nagel has put it, just mindless automata, acting without experiencing?

This is the crux of the hard problem – it’s not merely asking for a mechanism, but an explanation of why certain physical processes give rise to subjective experience, something that seems fundamentally non-physical.

HARNAD:    I have no idea what “non-physical” means, do you? (I understand cognitive science but not metaphysics.)  But I do know what unexplained (or unexplainable) means. I know Chalmers named it, but who described the hard problem?

ChatGPT:

Sorry for any confusion. The “hard problem” of consciousness was indeed first named and extensively discussed by David Chalmers. He used the term “non-physical” to refer to subjective experiences, or “qualia”, which seem fundamentally different from the kinds of things that physical science deals with. In other words, we can describe and predict the behavior of physical entities like atoms, electrons, and neurons, but subjective experiences do not seem to behave like physical entities in the same way.

Thomas Nagel’s 1974 paper “What is it like to be a bat?” also touched upon this issue prior to Chalmers. Nagel argued that an organism has conscious mental states “if and only if there is something that it is like to be that organism—something it is like for the organism.”

So, while Chalmers might have been the one to name the “hard problem” and bring it to the forefront of contemporary philosophical discussion, the basic issue of explaining subjective experience has a much longer history in the philosophy of mind, dating back at least to Descartes’ dualist distinction between mind and body in the 17th century.

However, it’s important to note that not everyone agrees there is a “hard problem” – some argue that once we solve the so-called “easy problems” of consciousness (i.e., explaining the mechanisms behind cognitive and behavioral functions), what we think of as the “hard problem” might not seem so problematic. This perspective, however, is a matter of ongoing debate in the field.

HARNAD:    The “easy problem” is explaining how and why organisms can DO what they can do. (That’s what we’ve been discussing here as reverse-engineering T3 and T4). The “hard problem” is explaining how and why (sentient) organisms can FEEL. If we forget about the meaningless notion of “nonphysical,” why is the hard problem still so much harder than the easy problem?

ChatGPT:

The difficulty with the “hard problem”, as compared to the “easy problem”, arises from the inherently subjective and qualitative nature of feelings or experiences, also known as “qualia”. While we can objectively observe, measure, and describe physical processes and behaviors, we cannot directly access someone else’s subjective experiences in the same way.

For example, we can understand and describe the physical processes associated with seeing the color red—light hitting an object, reflecting into our eyes, being processed by our visual systems, leading to a particular cognitive and behavioral response. This is the kind of problem that can be solved with enough empirical research and is hence termed the “easy problem”.

However, understanding why these processes are accompanied by a subjective experience of “redness” is much more difficult. Why isn’t the physical process of seeing color “like” anything at all? Why is it that we have a qualitative, subjective experience of the color red, rather than just a cognitive recognition and behavioral response? These are the questions posed by the “hard problem”.

Many theories have been proposed to address this, from various forms of physicalism that posit some yet-undiscovered physical explanation for consciousness, to dualism that suggests consciousness might be a separate entity from the physical world, to panpsychism that proposes that consciousness is a fundamental aspect of the universe, like space and time. But so far, no theory has been able to fully bridge the gap between the objective physical world and the subjective world of experience, leading to what Chalmers calls the “explanatory gap”.

There’s also the challenge of “other minds” in philosophy—while you can directly experience your own consciousness, you can’t directly experience someone else’s. All our knowledge about other minds is inferred from behavior and communication, but we have no way to directly observe their subjective experiences. This inherent limitation compounds the difficulty of the “hard problem”.

HARNAD:    If we chuck out the weasel-words again, as well as the metaphysical non-solutions, the hard problem is to explain, causally, how and why anything that organisms can DO should FEEL like anything at all. Feeling is real, but without a causal explanation it seems superfluous.

ChatGPT:

Yes, that’s indeed a clear and concise framing of the hard problem of consciousness. The issue is trying to explain why and how subjective experiences arise from objective physical processes – or in other words, how and why certain processes are accompanied by an internal, subjective “feeling”.

Objective behaviors and actions can be causally explained by physical processes, such as neuronal activity, chemical reactions, and interactions with the environment. We can trace the causes and effects, build models, and make predictions. These explanations fit within our current scientific framework, which deals with observable, quantifiable phenomena.

However, when it comes to subjective experiences or feelings, our current scientific methods fall short. We don’t yet have a way to objectively measure or quantify these experiences, and we don’t have a clear understanding of how they could be causally produced by physical processes. This is why some people consider consciousness to be a “non-physical” phenomenon.

It’s not that feelings are superfluous or unnecessary. Rather, they’re central to our existence as sentient beings. The challenge is figuring out how to account for them within our scientific understanding of the world. The hard problem of consciousness is essentially about bridging this explanatory gap. How can we expand our scientific methods and theories to account for the subjective, qualitative aspects of existence, and to uncover the causal mechanisms that give rise to them? That is the challenge that faces researchers in fields such as cognitive science, neuroscience, and philosophy of mind.

HARNAD: When ChatGPT lapses into echoing the empty weasel-words of the field, it’s time to give the topic a break…

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, but I also believe they are not sentient, for several other reasons too:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. It seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without feeling. But with what plants can do, it is almost trivial to design a model that can do it too; so there, feeling seems incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, other vertebrates, and, more and more, invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which is never absolute, even for gravity: the probability is not absolutely zero that apples might start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this 2nd-order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that, just as it is wrong not to give the benefit of the doubt to similar animals where the probability is already so high, it would be wrong to give the benefit of the doubt where the probability of sentience is incomparably lower. What is at risk in attributing sentience where it is highly improbable is precisely the protection that the distinction would have afforded to the species for whom the probability of sentience is far higher. The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

Chomsky on Consciousness

Re: https://www.youtube.com/watch?v=vLuONgFbsjw

I only watched the beginning, but I think I got the message. Let me precede this with: yes, I have come to understand the problem of “consciousness” through the lens of practical ethics. But suspend judgement: it’s not just a matter of overgeneralizing one’s culinary druthers.


Prelude. First, an important aside about Tom Nagel’s influential article. As often happens in philosophy, words throw you. Tom should never have entitled (or, for pedants, “titled”) it “What is it like to be a bat?” but “What does it feel like to be a bat?”

We’d have been spared so much silliness (still ongoing). It would have brought to the fore that the problem of consciousness is not the metaphysical problem of explaining “What kind of ‘stuff’ is consciousness?” but the hard problem of explaining (causally) “How and why do organisms feel?”

The capacity to feel is a biological trait, like the capacity to fly. But flying is observable, feeling is not. Yet that’s not the “hard problem.” That’s “just” the “other minds problem.” Just a puzzle among fellow gentleman-philosophers, about certainty — but an existential agony for those sentient species that we assume do not feel, and treat accordingly, when in fact they do feel.

Easy Sequel. Noam is mistaken to liken the hard problem to the Newtonian problem of explaining the laws of motion. Motion (of which flying is an example) is observable; feeling is not. But, again, that is not the hard problem. Quarks are not observable; but without inferring that little (observable) protons are really composed of big (unobservable) quarks, we cannot explain protons. So quarks cannot be seen, but they can be inferred from what can be seen, and they play an essential causal (or functional) role in the (current) explanation of protons.

Feeling cannot be observed (by anyone other than the feeler – this is the “1st person perspective” that Nagel extolled and interviewer Richard Brown obnoxiously tries to foist on Chomsky). 

But even though feeling cannot be observed, it can be inferred. We know people feel; we know our dog feels. Uncertainty only becomes nontrivial when we get down to the simplest invertebrates, microbes and plants. (And, before anyone mentions it, we also know that rocks, rockets and [today’s] robots (or LaMDAs) don’t feel. Let’s not get into scholastic and sterile scepticism about what we can know “for sure.”)

So the “hard problem” of explaining, causally (functionally), how and why sentient organisms feel is hard precisely because of the “easy problem” of causally explaining observable capacities of organisms, like moving, flying, learning, remembering, reasoning, speaking. 

It looks for all the world as if [once we have explained (causally) how and why organisms can do all the (observable) things they can do] feeling — unlike the quarks in the (subatomic) explanation of what protons can (observably) do, or the Newtonian explanation of what billiard balls can (observably) do — is causally superfluous.

Solving the “hard problem” would be simple: Just explain, causally, how it would be impossible for organisms to do all (or some) of the things they can do if they did not feel. In other words, explain the causal role of feeling (adaptively, if you like – after all, it’s an evolved, biological trait).

But before you go there, don’t try to help yourself to feeling as a fundamental force of nature, the way Newton helped himself to universal gravitation. Until further notice, feeling is just a tiny local product of the evolution of life in one small planet’s biosphere. And there’s no reason at all to doubt that, like any other biological trait, feeling is explainable in terms of the four fundamental forces (gravity, electromagnetism, “strong” subatomic & “weak” subatomic). No 5th psychokinetic force.

The problem is just coming up with the causal explanation. (Galen Strawson’s “panpsychism” is an absurd, empty – and I think incoherent – metaphysical fantasy that does not solve the “hard problem,” but just inflates it to cosmic proportions without explaining a thing.)

So Noam is mistaken that the hard problem is not a problem. But it’s not about explaining what it feels like to see a sunset. It is about explaining how and why (sentient) organisms feel anything at all.

See: “Minds, Brains and Turing” (2011) as well as the (Browned out) discussion.

First Personhood

12 Points on Confusing Virtual Reality with Reality

Comments on: Bibeau-Delisle, A., & Brassard, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules, algorithms, that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes it, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It’s just symbol-manipulations. It could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine – finite-state hardware that can read, write, advance tape, change state or halt (see the sketch just after this list).
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it’s still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice cube and no melting; just symbol manipulations interpretable as an ice cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10. What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part  (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality – which is simply incoherent, circular, self-referential nonsense.
  12.  Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice-cube, for a “real” ice-cube.
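To make points 1–5 concrete, here is a minimal, purely illustrative sketch (mine, not from the target article) of a Turing-machine-style symbol manipulator in Python. Its rule table operates only on the shapes of the symbols (“1”, “+”, “_”); the interpretation that it is doing unary addition is something we project onto it, and the same rule table could just as well be executed by different hardware, or by pencil and paper.

```python
# Illustrative sketch only: a tiny Turing-machine-style symbol manipulator.
# The rules operate purely on symbol shapes; "unary addition" is our
# interpretation of what the shuffling of '1', '+' and '_' amounts to.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """Apply rules[(state, symbol)] = (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right), until the state 'halt'."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol
        head += move
        if head < 0:                    # grow the tape on demand
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
    raise RuntimeError("no halt state reached within the step limit")

# Rule table interpretable as unary addition ("111+11" -> "11111"):
rules = {
    ("start", "1"):      ("1", +1, "start"),       # skip over the first addend
    ("start", "+"):      ("1", +1, "seek_end"),    # overwrite '+' with a '1'
    ("seek_end", "1"):   ("1", +1, "seek_end"),    # move to the end of the tape
    ("seek_end", "_"):   ("_", -1, "erase_last"),
    ("erase_last", "1"): ("_", -1, "halt"),        # erase one surplus '1'
}

print(run_turing_machine("111+11", rules))  # prints "11111", interpretable as 3 + 2 = 5
```

The same few lines, run on a different chip or hand-simulated on paper, would be the very same computation (point 3); and only under the intended interpretation does the result “mean” 5 (point 2).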

Learning and Feeling

Re: the  NOVA/PBS video on slime mold. 

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970’s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which to me was really irritating and intrusive). The content held up, though, once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (shared by biological organisms and artificial devices, as well as by running computational algorithms) to learn.

The trouble with weasel-words like “intelligence,” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”). 

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do, is to learn, which is to change in what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can (or will eventually be shown to be able to) be done by artificial devices, and to be simulable computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically) but feeling cannot be, or, otherwise put, simulated feeling is not really feeling any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving or wet. 
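To underscore the point with a purely hypothetical sketch (mine, not anything from the slime-mold studies): the few lines of Python below “learn,” in the minimal sense of adjusting what they do on the basis of feedback about what they have done. Nothing in them feels anything; it is just more symbol manipulation.

```python
import random

# Hypothetical sketch: a trivially simple learner that acts, gets feedback,
# and adjusts its preferences accordingly.  It learns; it does not feel.

def learn(environment, actions, episodes=1000, learning_rate=0.1, epsilon=0.1):
    """Keep a running value estimate for each action and update it
    from the scalar feedback returned by the environment."""
    value = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if random.random() < epsilon:                 # occasionally explore
            action = random.choice(actions)
        else:                                         # otherwise exploit the best estimate
            action = max(actions, key=lambda a: value[a])
        feedback = environment(action)
        value[action] += learning_rate * (feedback - value[action])
    return value

# A toy "environment": growing toward the nutrient source usually pays off.
def toy_environment(action):
    return 1.0 if action == "toward_nutrient" and random.random() < 0.8 else 0.0

print(learn(toy_environment, ["toward_nutrient", "away_from_nutrient"]))
# e.g. {'toward_nutrient': ~0.8, 'away_from_nutrient': ~0.0}
```

Exactly the same point holds for far more sophisticated learners: learning capacity, however impressive, is evidence of doing, not of feeling.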

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single-celled and multicellular — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.

SIGNALS AND SENTIENCE

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them, for their own advantage (if they eat the plant), or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics the coloration of another, poisonous species, a signal that deters predators who have learned that it is often poisonous. 

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms must eat other living organisms to survive. The only exceptions are plants, which can photosynthesize with just light, CO2 and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA-based chemical signalling: the co-ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.


Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.
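Here is a minimal sketch, again in Python with hypothetical names and illustrative values, of such unfelt “sensing”: a servomechanism that registers a reading, reacts relative to a set point, and even lets its history shift that set point, without feeling anything.

    class Servo:
        """A thermostat-like servomechanism: senses, reacts, and adapts, but feels nothing."""

        def __init__(self, set_point: float = 20.0, adaptation_rate: float = 0.1):
            self.set_point = set_point              # illustrative default
            self.adaptation_rate = adaptation_rate  # how fast history shifts the set point

        def sense(self, reading: float) -> float:
            # "Sensing" here is just registering a number, as a cone cell registers light.
            return reading

        def react(self, reading: float) -> str:
            # React relative to the set point: unfelt detection-and-reaction.
            return "heat on" if self.sense(reading) < self.set_point else "heat off"

        def adapt(self, reading: float) -> None:
            # History-modulated behaviour: the set point drifts toward recent readings.
            self.set_point += self.adaptation_rate * (reading - self.set_point)

    if __name__ == "__main__":
        servo = Servo()
        for temperature in [15.0, 18.0, 25.0, 22.0]:
            print(temperature, servo.react(temperature))
            servo.adapt(temperature)
        print("set point after history-modulation:", round(servo.set_point, 2))

The point of the sketch is only that registering, reacting, and even history-modulated adjustment can all be specified without any feeling entering into it.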

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation
) and dispelling their ambiguities.

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: GefĂŒhlsfĂ€higkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…