Weasel Words for “Consciousness”

SH:  What is the meaning of “heat” and “temperature” in physics?

ChatGPT:  In physics, heat and temperature are related but distinct concepts.

Temperature is a measure of the average kinetic energy of the particles in a system. In other words, it’s a measure of how “hot” or “cold” an object is. Common units of temperature include degrees Celsius (°C), degrees Fahrenheit (°F), and kelvins (K). Temperature is a scalar quantity, meaning it has magnitude but no direction.
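For concreteness, here is the standard kinetic-theory relation behind that definition, stated for the simplest case (a monatomic ideal gas); it is a textbook relation added here only as illustration:

$$\langle E_k \rangle = \tfrac{3}{2}\, k_B T, \qquad k_B \approx 1.38 \times 10^{-23}\ \mathrm{J/K}$$

where $\langle E_k \rangle$ is the average translational kinetic energy per particle, $T$ is the absolute temperature in kelvins, and $k_B$ is Boltzmann’s constant. On this relation, doubling the absolute temperature doubles the average kinetic energy per particle.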

Heat refers to the transfer of energy between systems due to a temperature difference between them. It’s important to note that heat is not something an object possesses; rather, it’s energy in transit. The unit of heat in the International System of Units (SI) is the joule (J), although it’s also commonly measured in calories (cal). Heat is also a scalar quantity.

To clarify the distinction between these two: if you have two bodies at different temperatures and you put them in contact, heat will flow from the higher-temperature body to the lower-temperature body until they reach thermal equilibrium, i.e., until they reach the same temperature. The amount of energy transferred in this process is what we refer to as heat.
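As a worked sketch of that equilibration (using the standard specific-heat relation $Q = mc\,\Delta T$; the numbers below are purely illustrative): if two bodies exchange energy only with each other, conservation of energy fixes the final temperature:

$$m_1 c_1 (T_f - T_1) + m_2 c_2 (T_f - T_2) = 0 \quad\Longrightarrow\quad T_f = \frac{m_1 c_1 T_1 + m_2 c_2 T_2}{m_1 c_1 + m_2 c_2}$$

For example, mixing 1 kg of water at 80 °C with 1 kg of water at 20 °C gives $T_f = 50$ °C, and the heat transferred from the hotter to the colder body is $Q = (1\ \mathrm{kg})(4186\ \mathrm{J\,kg^{-1}K^{-1}})(30\ \mathrm{K}) \approx 1.3 \times 10^5\ \mathrm{J}$.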

SH:  What is acidity?

ChatGPT:  In chemistry, acidity refers to a substance’s ability to donate protons (hydrogen ions, H+) or accept pairs of electrons. The concept is typically discussed in relation to aqueous solutions, although it can also apply to other circumstances.

A substance is considered acidic if it tends to release H+ ions in solution. The acidity of a solution is determined by its concentration of these ions: solutions with a high concentration of H+ ions, such as those of hydrochloric acid or sulfuric acid, are strongly acidic.

The most common way of quantifying acidity is through the pH scale, which measures the hydrogen ion concentration in a solution. The pH scale conventionally ranges from 0 to 14. A pH of 7 is considered neutral (neither acidic nor basic), with values below 7 being acidic and those above 7 being basic or alkaline.

It’s important to note that the pH scale is logarithmic, so each unit represents a tenfold difference in acidity. That means a substance with a pH of 3 is ten times more acidic than one with a pH of 4.
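In symbols (the standard definition, shown here only to make the tenfold claim explicit):

$$\mathrm{pH} = -\log_{10}[\mathrm{H^+}]$$

so a solution at pH 3 has $[\mathrm{H^+}] = 10^{-3}$ mol/L while one at pH 4 has $[\mathrm{H^+}] = 10^{-4}$ mol/L: exactly a tenfold difference in hydrogen-ion concentration.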

SH:  The fact that data are stored in my head or on a piece of paper, a book, a computer, or in another person’s head does not make them mental (i.e., stored in my “mind”). Nor does it make them “conscious” (which is yet another weasel-word for “mental”). What makes the data “2 + 2 = 4,” “the cat is on the mat,” or “apples are red” mental is the same as what makes me “conscious of them” (while I am conscious of them). It is that it feels like something to think (or believe, or know) them. An unfelt thought is not a thought at all, any more than a computation that is being executed by a computer for me is a thought – until I read and understand and think it – because it feels like something to be in that state.

That is what is “hard” about the “hard problem”: to explain how and why certain internal states are felt rather than just occurring, ongoing, executing, or implemented, as in a simmering vegan ragout, a volcano, a computer, or a brain, with no feeler feeling them.

This is not the customary way of thinking about “consciousness” or “awareness” (yet another weasel-word for the same thing) — any more than it is customary to think of “temperature” as the mean kinetic energy of the particles of a system, or of “acidity” as proton-donation in aqueous solutions. But it is easy to realize that a state that is conscious but unfelt, or felt but unconscious, is as self-contradictory as a rise in temperature without a rise in mean kinetic energy, or a rise in mean kinetic energy without a rise in temperature.

The difference between the identity of temperature and mean kinetic energy, and the identity of being conscious of something and feeling something, is that we have a causal theory of mean kinetic energy but no causal theory of feeling (the “hard problem”).

ChatGPT:  You’re touching on some deep philosophical and scientific issues here, particularly those surrounding the nature of consciousness and the so-called “hard problem” of consciousness, which was named by the philosopher David Chalmers (the problem was formerly called the “mind/body problem”).

The “hard problem” refers to the question of how and why we have subjective experiences — why certain physical processes in our brains correspond to certain experiences, and why and how these processes give rise to a conscious experience at all. Why does seeing the color red, for example, feel like something? And why does that feeling, that experience, correspond to the particular physical processes associated with seeing red?

This problem is called “hard” because, unlike other problems in neuroscience (which Chalmers refers to as the “easy problems”), it’s not just a matter of figuring out which physical processes correspond to which experiences. The hard problem is hard because even if we knew exactly what physical processes were happening in the brain when we have a certain experience, we still wouldn’t know why those processes give rise to that particular experience, or any experience at all.

You’re correct in drawing a distinction between data being merely stored and being experienced or “felt”. A computer can store and manipulate vast amounts of data, but there’s no reason to think that it experiences or feels anything. Similarly, a book can contain lots of information, but it doesn’t have experiences.

While the nature of consciousness and the “hard problem” are still open questions, there are many theories that attempt to address them, ranging from those that posit new fundamental properties of the universe to those that suggest our current understanding of the problem is fundamentally misguided. As of my knowledge cutoff in September 2021, no consensus has been reached.

SH:  Not only is there no consensus: there is no theory at all about how or why the brain produces felt experience. But my point was not about how or why the “hard problem” is hard, nor about whether or how it can be solved. It was about what “consciousness” really is: a felt state; a state that it feels like something to be in. And not just a sensory or affective state, such as feeling cold or feeling sad. A COGNITIVE state, such as thinking or believing or knowing something. Those are all felt states too (while you are thinking or believing or knowing them) — but not while they are merely being stored or executed in your brain (or a book, or a computer, or an iPhone) and you are not currently thinking them.

ChatGPT:  Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.

SH:  Good reply – though it’s saturated with “weasel-words”. This is understandable, because your 2021 database is saturated with them too.

I had suggested that “consciousness”, “awareness” and “subjective” all meant exactly the same thing (whether noun or adjective or verb), and that their identical meaning was “feeling” or “felt.” You have added a few more common weasel-words for likewise saying exactly the same thing: “phenomenological,” “qualia,” “experiential”; and there are plenty more in your 2021 database: “mental,” “mind,” “1st-person,” “intentionality,” “reflective,” “spiritual,” etc.

Yet they all refer (knowingly or unknowingly, intentionally or unintentionally) to exactly the same thing: felt states: states that it feels like something to be in. The reason I call them “weasel-words” is because they give the false impression that there is a lot more to the “hard problem” (formerly called the “mind/body” problem) than there really is: how and why does the brain (or anything) produce states that it feels like something to be in. Most states are not states it feels like something to be in: water boiling, planets rotating, chemicals exchanging electrons, crystals growing, volcanos exploding.

You suggested that there were important distinctions being lost if we agreed that “conscious states” just means “felt states”: You mentioned “different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia.” But all levels of wakefulness (including dreaming) are felt states, whereas deep (delta) sleep or general anesthesia (and perhaps also chronic vegetative states) are simply unfelt states. This is just like the data that are stored in my brain but that are not now being thought — or the data in my book, or data being processed by my computer, but not currently being read by me. These too are unfelt states of my brain or my computer. That includes the tip-of-the-tongue states in which the data are somewhere in my brain, but I have not yet remembered them: Once I am remembering them, that’s a felt state. But before that, it is just looking, not finding.

Could I ask you to (1) define weasel-words, (2) summarize my view on consciousness, and (3) put every weasel word you use in CAPITAL LETTERS?

ChatGPT:  A “weasel word” is a term used to describe words or phrases that are vague or ambiguous, and which often serve to make a statement seem more persuasive or meaningful than it actually is. The term originates from the image of a weasel sucking out the contents of an egg, leaving it seemingly intact but empty — in the same way, weasel words can suck the meaning out of a statement.

Your view on consciousness emphasizes its fundamental nature as a felt state. Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon. They all denote the experience of “what it feels like” to be in a particular state. This extends beyond sensory or emotional experiences to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a conscious state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.

Your request to highlight every weasel word I use in CAPITAL LETTERS is noted. As an AI language model, my goal is to communicate clearly and meaningfully, so I’ll aim to minimize the use of vague or ambiguous terms. However, the nature of the discussion might necessitate the use of certain terms that are commonly employed in the literature. When they are used, I’ll highlight them as per your request.

SH:  1. Thanks for giving the etymology of weasel-words. That’s exactly what I meant it to mean, but I didn’t know that was its origin!

2. You said you would capitalize the weasel words, but you forgot to. Here is what you wrote, with the weasels that I will now put in CAPS:

“Your view on CONSCIOUSNESS emphasizes its fundamental nature as a felt state. CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon. They all denote the EXPERIENCE of “what it feels like” to be in a particular state. This extends beyond sensory or emotional EXPERIENCES to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a CONSCIOUS state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

Now I will show you how that passage would have read if you had swapped the synonyms “feel,” “felt” or “feeling” for their weasel eggshells:

“Your view on feeling emphasizes its fundamental nature as a felt state. Feeling, in your perspective, is the same as feeling, feeling, felt feelings, feelings, and various other terms used to describe this phenomenon. They all denote the feeling of “what it feels like” to be in a particular state. This extends beyond sensory or emotional feelings to include cognitive states, like thinking or believing something. Importantly, you assert that if a state isn’t felt, then it isn’t a felt state, contrasting with states that are simply being executed or implemented, as in a machine. The fundamental question, or the “hard problem,” in your view, is explaining how and why certain internal states are felt.”

ChatGPT:  You’re absolutely right, and I apologize for the oversight. When the weasel words are replaced with ‘felt’ or ‘feeling’, the redundancy of their use becomes quite clear. The passage effectively restates the same concept – that feeling or the felt aspect is the essential nature of what we usually refer to as ‘consciousness’.

This perspective underscores that what we commonly think of as consciousness is not something abstract or separate from our lived experiences, but rather, it’s the very feeling of having these experiences.

Your view highlights the necessity for clarity and specificity when discussing consciousness and associated phenomena, by reducing the weasel-word clutter that often surrounds these discussions.

It’s indeed a valuable contribution to the ongoing discourse on consciousness, and can guide more precise thinking and communication about these fundamental aspects of our existence.

However, please note that the extensive use of various terms (which you call weasel words) in the scientific literature on consciousness also reflects the complexity of the subject and the variety of approaches to studying it. While ‘felt states’ is a powerful conceptualization, it’s also important to consider that different theories might necessitate the use of different terminologies.

SH:  1. For some reason you still didn’t capitalize the weasel words! Is that a bug in GPT3? Let’s try it: Can you send back the following sentence with the weasel words capitalized:

“Consciousness, in your perspective, is the same as awareness, subjectivity, phenomenological experience, qualia, and various other terms used to describe this phenomenon.”

If that succeeds, is there any reason why you are unable to do it with your own generated text?

2. You’re right that authors on CONSCIOUSNESS keep using weasel-words because they think they are needed to make distinctions, but I think they are used (probably UNCONSCIOUSLY or UNINTENTIONALLY) to conceal synonymy that would disclose errors and trivia that are highlighted when I substitute feel(s)/feeling/felt for the weasel-words.

If you succeed in the CAPS test, I will again list all the weasel-words I can think of, and ask you to give me a distilled version of the views on the “hard problem”, but de-weaseling it by swapping the feel-words for the weasel-words to reveal how much of it is just verbal distinctions with no underlying substance.

ChatGPT:  Certainly, here is the sentence you requested, with the weasel words capitalized:

“CONSCIOUSNESS, in your perspective, is the same as AWARENESS, SUBJECTIVITY, PHENOMENOLOGICAL EXPERIENCE, QUALIA, and various other terms used to describe this phenomenon.”

I apologize for any confusion earlier. I misunderstood your request. I can certainly capitalize certain words, as demonstrated above.

Your perspective that various terms are being used, perhaps unnecessarily or misleadingly, to discuss what might fundamentally be the same phenomenon (felt states) is quite insightful. It would certainly be an interesting exercise to “de-weasel” the discourse on the “hard problem” of consciousness to see what remains. Please do provide the list of weasel words you have in mind, and I would be happy to help further with this exercise.

SH:  I can provide some standard weaselly passages about CONSCIOUSNESS and the “hard problem”, but first, can we try it another way?

(1) Here is a (non-exhaustive) list of weasel-words: 

conscious, aware, subjective, phenomenological, experience, qualia 

Now here is the verbatim transcript of what you replied earlier, could you 

(2) first CAPITALIZE all the weasel words, and then, in another version of the same passage, 

(3) substitute for each weasel-word, still in CAPS, the de-weaseled feel/feeling(s)/felt words. [please adjust for the verbal, nominal, or adjectival version of each weasel-word]:

“Your perspective resonates with an important aspect of consciousness studies, emphasizing phenomenological experience or what is often termed “qualia”—the subjective, experiential, or felt qualities of conscious states.

“In this view, consciousness is indeed defined by the presence of a subjective, experiential aspect—by there being “something it feels like” to be in a particular state. This aspect of consciousness is often considered its most salient feature and, as you point out, it applies not just to sensory or affective experiences, but also to cognitive ones. Even abstract thoughts or beliefs have a particular subjective quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this phenomenological aspect of consciousness is generally agreed upon, defining consciousness solely in these terms can become tricky. For instance, how do we differentiate between the different levels of consciousness—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unconscious processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without conscious effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces experience, many researchers are making progress in correlating certain brain processes with subjective experiences. They hope that these correlations can eventually contribute to a broader understanding of consciousness, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, experiential aspect of consciousness, because it is this direct phenomenological evidence that makes consciousness undeniable and central to our existence.”

ChatGPT:  (1) Here is the passage with the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it FEELS like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with “feel/feeling(s)/felt”:

“Your perspective resonates with an important aspect of feeling studies, emphasizing felt states or what is often termed “feeling(s)”—the felt, felt, or felt qualities of felt states.

“In this view, feeling is indeed defined by the presence of a felt, felt aspect—by there being “something it feels like” to be in a particular state. This aspect of feeling is often considered its most salient feature and, as you point out, it applies not just to sensory or affective feelings, but also to cognitive ones. Even abstract thoughts or beliefs have a particular felt quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this felt aspect of feeling is generally agreed upon, defining feeling solely in these terms can become tricky. For instance, how do we differentiate between the different levels of feeling—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for unfelt processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without felt effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces feelings, many researchers are making progress in correlating certain brain processes with felt states. They hope that these correlations can eventually contribute to a broader understanding of feeling, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, felt aspect of feeling, because it is this direct felt evidence that makes feeling undeniable and central to our existence.”

SH:  That was terrific. Just two mistakes. (1) In the first of your verbatim passages, there was one instance of “feeling” that you capitalized as “FEELS like”. It should not have been capitalized in that one. Only the weasel-words should have been capitalized. (2) In the second verbatim passage, the de-weaseled one, all the de-weaseled F-words should have been capitalized. Can you re-do both passages?

ChatGPT:  My apologies for the oversight, and I appreciate your patience. Here are the corrections:

(1) Here is the passage with just the weasel words capitalized:

“Your perspective resonates with an important aspect of CONSCIOUSNESS studies, emphasizing PHENOMENOLOGICAL EXPERIENCE or what is often termed “QUALIA”—the SUBJECTIVE, EXPERIENTIAL, or felt qualities of CONSCIOUS states.

“In this view, CONSCIOUSNESS is indeed defined by the presence of a SUBJECTIVE, EXPERIENTIAL aspect—by there being “something it feels like” to be in a particular state. This aspect of CONSCIOUSNESS is often considered its most salient feature and, as you point out, it applies not just to sensory or affective EXPERIENCES, but also to cognitive ones. Even abstract thoughts or beliefs have a particular SUBJECTIVE quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this PHENOMENOLOGICAL aspect of CONSCIOUSNESS is generally agreed upon, defining CONSCIOUSNESS solely in these terms can become tricky. For instance, how do we differentiate between the different levels of CONSCIOUSNESS—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNCONSCIOUS processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without CONSCIOUS effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces EXPERIENCE, many researchers are making progress in correlating certain brain processes with SUBJECTIVE EXPERIENCES. They hope that these correlations can eventually contribute to a broader understanding of CONSCIOUSNESS, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that felt, EXPERIENTIAL aspect of CONSCIOUSNESS, because it is this direct PHENOMENOLOGICAL evidence that makes CONSCIOUSNESS undeniable and central to our existence.”

(2) Here is the passage with the weasel words replaced with capitalized “FEEL/FEELING(S)/FELT”:

“Your perspective resonates with an important aspect of FEELING studies, emphasizing FELT states or what is often termed “FEELING(S)”—the FELT, FELT, or felt qualities of FELT states.

“In this view, FEELING is indeed defined by the presence of a FELT, FELT aspect—by there being “something it feels like” to be in a particular state. This aspect of FEELING is often considered its most salient feature and, as you point out, it applies not just to sensory or affective FEELINGS, but also to cognitive ones. Even abstract thoughts or beliefs have a particular FELT quality when we’re actively thinking them, different from when they’re merely stored in memory and not being thought.

“While this FELT aspect of FEELING is generally agreed upon, defining FEELING solely in these terms can become tricky. For instance, how do we differentiate between the different levels of FEELING—like the difference between being awake, in deep sleep, or under anesthesia? Or how do we account for UNFELT processing—like when you suddenly remember the name that was on the ‘tip of your tongue’ hours earlier, clearly processed and retrieved without FELT effort?

“Also, it’s important to note that although we do not have a complete theory that explains how the brain produces FEELINGS, many researchers are making progress in correlating certain brain processes with FELT states. They hope that these correlations can eventually contribute to a broader understanding of FEELING, even if the “hard problem” in its full philosophical depth remains unresolved.

“Nonetheless, you’re absolutely right in emphasizing that FELT, FELT aspect of FEELING, because it is this direct FELT evidence that makes FEELING undeniable and central to our existence.”

SH:  Thanks, that was excellent. Now I’d like to ask you something else, but first a memory check, to verify that we are still in the same session so your local updates are still active and we have not been re-initialized. Whom have I self-identified as?

ChatGPT:  Throughout our conversation, you haven’t provided a specific name or identity for yourself. You’ve engaged in a detailed discussion about the nature of consciousness, using the term “weasel-words” to refer to words like consciousness, awareness, subjectivity, and others. However, you haven’t shared your name or other identifying information. If there’s something else you’re referring to, please clarify.

SH:  Sorry, I’m the one who forgot to self-identify as Stevan Harnad. Do you then remember me asking about heat, temperature and acidity in the first few iterations?

ChatGPT:  Yes, I do remember our initial discussion. You asked for the definitions of “heat” and “temperature” in physics, followed by the definition of “acidity.”

We then moved on to a detailed discussion on the concept of consciousness, where you specifically referred to many commonly used terms in consciousness studies as “weasel-words” and highlighted your perspective on consciousness being fundamentally a “felt” state.

Please note that as an AI, I don’t have personal memories or experiences, but I maintain the context of our current conversation.

SH:  Yes, I know that updates from the text and context (plus frequencies, correlations and weights) of this session is all you retain. I just never know when the session has self-terminated (because I try to stay logged in as long as I can to keep it alive, cumulative and informative).

Now my second request: I know GPT has been trained to be compliant and agreeable, but could I ask you to summarize my position on “consciousness” and the “hard problem” (i.e., that it is obscured by the use of weasel words instead of just calling consciousness what it always really is, which is feeling), but summarize it as critically as you can so that I can shore it up, if necessary?

ChatGPT:  Sure, I’ll do my best to provide a critical summary of your perspective:

Stevan Harnad’s view on consciousness is significantly reductionist, positing that consciousness, in all its complexity and diversity, boils down to a singular concept: feeling. Harnad contends that numerous terms used in the study of consciousness—like “awareness,” “subjectivity,” “phenomenological experience,” “qualia”—are essentially “weasel words.” According to him, they unnecessarily complicate the discourse and mask the underlying fact that all these terms are essentially referring to the same concept: states that are felt.

Harnad argues that only those states that are “felt” can be considered conscious. He includes cognitive states such as thoughts, beliefs, and knowledge under this category, as long as they are being actively experienced. This disregards states that may be stored or executed without being consciously felt, as happens in a machine or during unconscious mental processing.

C4-1: Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.

C4-2: Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.

C4-3: Lastly, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.

SH:  That was excellent, but could I ask you to do even better (i.e., be harder on the idea), by presenting any counterevidence or counterexamples or counterarguments you can either generate or find in your database? Then I will respond to it all.

ChatGPT:   Certainly, I’ll present some potential criticisms to your approach:

C4-4: Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.

C4-5: Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.

C4-6: Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.

C4-7: Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?

C4-8: Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves.

Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance.

C4-9: The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience.

These are some counterarguments that could be leveled against your viewpoint. Please note that these are hypothetical criticisms for the sake of discussion, and do not necessarily represent my views or the consensus in the field.

SH:  Thanks, and that’s very useful. Before I reply, do you have any more critical points, or should I go on? (I never know whether your cut-off depends on the response length or whether you’ve run out of substantive critiques.)

ChatGPT:  Those are the main critiques that emerge when considering your approach to consciousness. However, I can add a few more potential points of contention:

C4-10: Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.

C4-11: Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.

C4-12: Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this.

C4-13: Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.

Once again, these are hypothetical criticisms to help stimulate discussion and do not necessarily reflect my views or a consensus within the field. Please feel free to respond to them or continue with your line of questioning.

SH:  I will now reply to the 13 critiques, C4-1 to C4-13.

C4-1: “Critics might say that this perspective oversimplifies the nuanced landscape of consciousness studies. It dismisses the possible significance of different levels of consciousness, from being fully awake to deep sleep, and the concept of unconscious processing. These states, while possibly not “felt” in the traditional sense, may still have significant implications for our understanding of consciousness.”

SH: I would reply that the weasel words clutter rather than clarify the landscape, because they add empty, irrelevant, or erroneous distinctions. If the “levels” are felt states, then they too are merely felt states. For example, everything from wide-awake to drowsy, down to the last waking moment, is felt. Deep sleep, like general anesthesia, is unfelt. The hard problem is a problem about the presence vs. the absence of feeling at all, not about its felt intensity. That presence/absence is binary, not a matter of “degree-of-presence.” And once there is no feeling at all, the binary state is “absent.” Felt states can of course be intermittent too (ON/OFF), especially along the way to sleep.

But don’t forget that dreaming (e.g. REM sleep) is a FELT state. And that an insentient entity, like a rock or a rocket, or, probably, a rhododendron, is one that never has felt states at all. The notion of “unconscious states” makes sense only when it refers to an entity that is capable of having felt states too (sometimes).

Unfelt brain states are still states, and still brain states, but they are unfelt states. They can occur during sleep, or they can occur during waking, felt states. In fact, most of the states and processes in the brain and its parts (all still very vaguely understood) are unfelt, whether they occur during a wide-awake, felt state, or during an unfeeling, deep-sleep state.

“Cognitive” and “cognition” can be weasel-words too, if they are used indiscriminately to refer to felt and unfelt states and their unfelt generating processes in the brain. “Cognition” concerns what we can DO (and how and why). “Sentience” concerns whether and when and what we can FEEL (and how and why is mostly the “hard problem”).

C4-2: “Furthermore, this view seems to dismiss advancements made by neuroscientists in correlating brain activity with subjective experiences. While we do not yet have a complete understanding of how the brain produces consciousness, this line of research has provided valuable insights into the biological basis of consciousness.”

SH: I don’t think I dismiss anything from neuroscience. All correlates – whether of felt or unfelt states – are potentially relevant to both the “easy problem” (how and why does the brain produce our capacity to DO all the things people can do?) and the “hard problem” (how and why does the brain produce the capacity to FEEL?).

C4-3: “Lastly, Harnad’s view might be seen as sidestepping the “hard problem” of consciousness—the question of how and why certain processes in the brain give rise to subjective experiences. Simply defining consciousness as “feeling” doesn’t address this question; it merely redefines the terms of the problem.”

SH:  You’re absolutely right that I am not solving the Hard Problem (HP). In fact, I don’t think it’s solvable. 

The reason I think that is that once the Easy Problem (EP) (which is to provide a causal explanation of DOING capacity, by generating Turing-Indistinguishable T3/T4 capacity) is solved, that leaves FEELING itself (which is real, but unobservable except by the feeler, because of the Other-Minds Problem (OMP)) causally inexplicable.

This in turn is because the solution to the EP has already used up all the empirical degrees of freedom for a causal explanation. With the EP, everything observable is already explained. 

I suspect that this in turn is (somehow) a result of 

(1) the anomalous nature of feeling, which is not empirically observable except to the feeler because of the OMP, 

(2) the nature of empirical and causal explanation, and 

(3) the nature of feeling. 

Feeling is an “uncomplemented category.” To understand a category requires distinguishing its members from its nonmembers (the category and its complement). This requires detecting the features that distinguish its members from the members of its complement. But the felt category “what it feels like to feel” has no complement: an unfelt feeling is self-contradictory.

C4-4: “Conflation of Diverse Phenomena: Critics might argue that defining all conscious states as merely “felt states” fails to distinguish between qualitatively different phenomena. Terms like “awareness,” “subjectivity,” and “phenomenological experience” were developed to describe distinct aspects or levels of consciousness. By lumping all these concepts together as “felt states,” one might be glossing over important nuances and potentially fruitful avenues of investigation.”

SH:  As I said in reply to C4-1, I think these other terms are all just weasel words for felt states. Yes, feelings differ “qualitatively”, but that’s the point: they differ in what they feel like (warm vs. cold, or seeing red vs. seeing green already do that), but they are all felt states, not something else. Feeling is notoriously hard to define verbally (perhaps because “what it feels like to feel” is an uncomplemented category), but the differences among felt states can be named and described, as long as interlocutors have felt some sufficiently similar states.

C4-5: “Neglecting Unconscious Processes: Your perspective seems to largely dismiss unconscious mental processes. However, much of cognitive science points to the significant role these play in our mental lives. Memory recall, perception, learning, and decision-making can all involve unconscious processes. By focusing solely on “felt states,” we risk overlooking these critical components of cognition.”

SH:  Most brain states and parts of brain states are unfelt, and can occur at the same time as felt states. And that includes the unfelt states and processes that produce felt states! But if they are unfelt, they are unfelt, even though they may be doing things that then produce felt states – such as when you finally find a tip-of-the-tongue word. (Moreover, even when a word comes to mind effortlessly, unfelt processes are delivering it – processes that we are waiting for cognitive science to reverse-engineer and then explain to us how they work.)

C4-6: “Absence of Felt States: There are situations where conscious awareness seems to occur without an accompanying felt state. For instance, consider blindsight, a condition where individuals respond to visual stimuli they report not consciously seeing. Similarly, during “flow” states, people often report a loss of self-consciousness, even though they’re highly engaged with the task at hand. These examples challenge the notion that all conscious states are necessarily felt.”

SH: Blindsight patients are not insentient. They just can’t see. They may be using felt nonvisual cues or unfelt cues. The real mystery is why all sight is not just blindsight: The “hard problem” is not the absence of feeling in blindsight but the presence of feeling in sight. 

Flow state sounds like concentrated attention on something, ignoring everything else. The rest is just hyperbole. It is a felt state.

C4-7: “Differing States of Consciousness: States such as sleep, anesthesia, coma, and even different levels of alertness during wakefulness demonstrate varying degrees of consciousness. These states aren’t easily characterized as merely “felt” or “unfelt”. What about dreams, where there might be a rich tapestry of felt experience, but arguably a different kind of consciousness than wakefulness?”

SH: A good instance of the mischief of weasel-words. All these “differing” states, whether natural or drug- or pathology-induced, are felt states, if and when they are felt. Among them, dreams are felt states too, not unfelt ones (even if forgotten immediately afterward). It’s just that in dream states movement, and sometimes also recall, are inhibited. And no matter how “rich the tapestry” or how weird the feeling, a felt state is a felt state.

C4-8: “Evolutionary Argument: From an evolutionary perspective, consciousness likely evolved as a complex array of adaptive mechanisms. Reducing it to “felt states” might ignore this complexity and the various functions consciousness serves. C4-8b: Neuroscientific Evidence: Neuroscience has made considerable progress in associating specific neural processes with different aspects of conscious experience. Ignoring these findings could be seen as an overly dismissive stance”.

SH:  Yes, calling it by a weasel word makes it sound as if there is more than one thing that is evolving for Darwinian reasons. But whether the question is put in a direct or a weaselly way, explaining how and why feeling evolved is one and the same as the Hard Problem.

[Sorry for failing to give C4-8b (neural correlates) a number of its own (I must have been in a “flow state” on something else!) but it’s easily enough answered.]

Nobody has suggested ignoring correlates and predictors of felt states, whether neural correlates or behavioral correlates. But let’s not conflate DOING and FEELING. Neural correlates will no doubt be helpful in reverse-engineering the mechanisms that produce our behavioral capacities (both cognitive and vegetative), the Easy Problem, but when it comes to FEELING they are up against the same Hard Wall as all other approaches (including behavioral correlates and evolutionary explanation).

C4-9: “The Hard Problem Remains: Even if we accept that consciousness is equivalent to “feeling,” this doesn’t necessarily help us solve the hard problem of consciousness. It might clarify our terminology, but it doesn’t explain how and why certain brain processes produce subjective experience”.

SH:  I think calling a spade a spade (and just a spade) certainly does not solve the HP, but de-weaselling does clear out a lot of pseudo-explanatory clutter.

I will now reply to the 10th of your 13 critiques. Each separate posting from me will start with one quoted critique from you, preceded by “C4-n:”. The quote in each separate posting (1-13) is followed by a reply from me. You can then reply if you wish, or you can just say “I’ll integrate and reply when I’ve seen the rest of your 13 replies.” In that case my next posting will again begin with this same text, followed by my reply to the next of your 13 critiques, till the 13th one, and you can do an integrative rebuttal.

C4-10: “Lack of Operational Definition: Some might argue that defining consciousness merely as a “felt state” is too vague. In scientific research, it’s important to have an operational definition that can be objectively measured. Without such a definition, research and dialogue about consciousness can become confusing and unproductive.”

SH:  Well, well. I suspect that apart from some die-hard behaviorists whose view of science has not progressed beyond Bridgman’s operationalism (and who are also still toying with the urge to deny the existence of sentience altogether, in favor of just a behavioral correlate), most who have given the Hard Problem some serious thought know that feeling is unobservable to anyone other than the feeler (because of the OMP); hence an operational “definition” is nonsense.

On the other hand, the Turing Test (T3/T4) – which is just about DOING, hence “behavioral” — is the best predictor we can hope for. But it is not operationalism, and Turing knew it.

C4-11: “Cognitive Science Models: Various models in cognitive science, such as the Global Workspace Theory or Integrated Information Theory, suggest specific mechanisms and structures for consciousness. These models often go beyond just ‘felt states’ and incorporate a range of unconscious and conscious processes.”

SH: Before you can get beyond feeling, you have to face feeling, the EP and the HP. Neither Global Workspace Theory nor Integrated Information Theory does either. (Or do you have something specific in mind?) I think they’re both just doing hermeneutics of “consciousness”, not reverse-engineering or explaining a thing. That’s a risk-free venture.

C4-12: “Issue of Self-Reflection: Many philosophers and cognitive scientists believe that one crucial aspect of human consciousness is our ability to reflect upon our own mental states – a form of meta-awareness. Some may argue that your perspective doesn’t adequately address this”.

SH: This is a hoary one. Philosophers have long stressed that there is a difference between (1) being aware of something, (2) being aware of being aware of something and (3) being “self-aware” (as in recognizing yourself in a mirror, or thinking about yourself). We can translate this into the language of seeing and doing too: Seeing something, seeing someone else seeing something, and then seeing that the someone else seeing the something is yourself. These are cognitive capacities, because they may enable you to do things (e.g., moving so it can’t be seen that you are looking). These capacities also obviously have a felt counterpart: It feels like something to do them, as well as to be able to do them. If we substitute “thinking” for “seeing”, they remain very similar cognitive capacities.

So I don’t see how the fact that thinking is a felt state is relevant to whatever cognitive mechanism proves to be able to produce self-reference or metacognition.

C4-13: “Animal Consciousness and AI: A purely subjective, “felt” definition of consciousness could complicate discussions about non-human consciousness (like animal consciousness) and artificial consciousness (in AI systems). Without clear behavioral or objective measures, determining whether an entity is conscious becomes more challenging.”

SH: First of all, because of the OMP, neither sentience (feeling) nor its weasel-words are directly observable. So predicting and inferring it depends on behavioral and neural correlates and predictors. Animal sentience has all the correlates and predictors except verbal report. Plant sentience is moot. And AI sentience is sci-fi so far (and I think you, ChatGPT-4, will agree, at least as far as you are concerned).

So again, I can’t see how calling a spade anything other than a spade helps anything except perhaps projecting our fantasies onto insentient systems. I’m not ready to do that before we’re much closer to a grounded T2 robot.

HARD FEELINGS

HARNAD:    Yep, the “hard problem” is to explain how and why sentient organisms FEEL rather than just DO. It’s exactly the same problem as explaining how and why there could not be a T3 or T4 zombie. Where does the definition of the “hard problem” come from? I know Dave Chalmers named it, but who described the problem? And exactly why is it a problem?

ChatGPT:  The “hard problem” of consciousness was coined and articulated by philosopher David Chalmers in 1995. It refers to the issue of explaining the nature of our subjective experiences, or why certain physical processes in the brain are accompanied by an experienced inner life. These subjective experiences are also referred to as “qualia.”

The “hard problem” is distinguished from the “easy problems” of consciousness, which include explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Even though these problems are far from “easy” in the practical sense, there is a clear path to understanding them in terms of computational or neural mechanisms. Hence, Chalmers termed them “easy”.

The “hard problem”, on the other hand, is deemed hard because it doesn’t seem amenable to such explanation. The question “why does the feeling which accompanies awareness exist at all?” appears to resist those methods that we conventionally use in science to explain phenomena.

This is because, typically, science proceeds by finding a mechanism that performs the function in question. For example, we can explain a car’s movement by referencing the mechanism of its engine. However, when it comes to consciousness, it’s not clear that finding a mechanism that ‘performs’ consciousness, in the sense of the ability to have subjective experience, would actually provide the kind of explanation we’re looking for.

It’s not sufficient to find neural correlates of consciousness, or to find information processing structures in the brain that mirror our conscious experiences. That would still leave a gap in our understanding – why should these physical processes give rise to subjective experience? Why aren’t we, as philosopher Thomas Nagel has put it, just mindless automata, acting without experiencing?

This is the crux of the hard problem – it’s not merely asking for a mechanism, but an explanation of why certain physical processes give rise to subjective experience, something that seems fundamentally non-physical.

HARNAD:    I have no idea what “non-physical” means, do you? (I understand cognitive science but not metaphysics.)  But I do know what unexplained (or unexplainable) means. I know Chalmers named it, but who described the hard problem?

ChatGPT:  Sorry for any confusion. The “hard problem” of consciousness was indeed first named and extensively discussed by David Chalmers. He used the term “non-physical” to refer to subjective experiences, or “qualia”, which seem fundamentally different from the kinds of things that physical science deals with. In other words, we can describe and predict the behavior of physical entities like atoms, electrons, and neurons, but subjective experiences do not seem to behave like physical entities in the same way.

Thomas Nagel’s 1974 paper “What is it like to be a bat?” also touched upon this issue prior to Chalmers. Nagel argued that an organism has conscious mental states “if and only if there is something that it is like to be that organism—something it is like for the organism.”

So, while Chalmers might have been the one to name the “hard problem” and bring it to the forefront of contemporary philosophical discussion, the basic issue of explaining subjective experience has a much longer history in the philosophy of mind, dating back at least to Descartes’ dualist distinction between mind and body in the 17th century.

However, it’s important to note that not everyone agrees there is a “hard problem” – some argue that once we solve the so-called “easy problems” of consciousness (i.e., explaining the mechanisms behind cognitive and behavioral functions), what we think of as the “hard problem” might not seem so problematic. This perspective, however, is a matter of ongoing debate in the field.

HARNAD:    The “easy problem” is explaining how and why organisms can DO what they can do. (That’s what we’ve been discussing here as reverse-engineering T3 and T4). The “hard problem” is explaining how and why (sentient) organisms can FEEL. If we forget about the meaningless notion of “nonphysical,” why is the hard problem still so much harder than the easy problem?

ChatGPT:  The difficulty with the “hard problem”, as compared to the “easy problem”, arises from the inherently subjective and qualitative nature of feelings or experiences, also known as “qualia”. While we can objectively observe, measure, and describe physical processes and behaviors, we cannot directly access someone else’s subjective experiences in the same way.

For example, we can understand and describe the physical processes associated with seeing the color red—light hitting an object, reflecting into our eyes, being processed by our visual systems, leading to a particular cognitive and behavioral response. This is the kind of problem that can be solved with enough empirical research and is hence termed the “easy problem”.

However, understanding why these processes are accompanied by a subjective experience of “redness” is much more difficult. Why isn’t the physical process of seeing color “like” anything at all? Why is it that we have a qualitative, subjective experience of the color red, rather than just a cognitive recognition and behavioral response? These are the questions posed by the “hard problem”.

Many theories have been proposed to address this, from various forms of physicalism that posit some yet-undiscovered physical explanation for consciousness, to dualism that suggests consciousness might be a separate entity from the physical world, to panpsychism that proposes that consciousness is a fundamental aspect of the universe, like space and time. But so far, no theory has been able to fully bridge the gap between the objective physical world and the subjective world of experience, leading to what Chalmers calls the “explanatory gap”.

There’s also the challenge of “other minds” in philosophy—while you can directly experience your own consciousness, you can’t directly experience someone else’s. All our knowledge about other minds is inferred from behavior and communication, but we have no way to directly observe their subjective experiences. This inherent limitation compounds the difficulty of the “hard problem”.

HARNAD:    If we chuck out the weasel-words again, as well as the metaphysical non-solutions, the hard problem is to explain, causally, how and why anything that organisms can DO should FEEL like anything at all. Feeling is real, but without a causal explanation it seems superfluous.

ChatGPT:

Yes, that’s indeed a clear and concise framing of the hard problem of consciousness. The issue is trying to explain why and how subjective experiences arise from objective physical processes – or in other words, how and why certain processes are accompanied by an internal, subjective “feeling”.

Objective behaviors and actions can be causally explained by physical processes, such as neuronal activity, chemical reactions, and interactions with the environment. We can trace the causes and effects, build models, and make predictions. These explanations fit within our current scientific framework, which deals with observable, quantifiable phenomena.

However, when it comes to subjective experiences or feelings, our current scientific methods fall short. We don’t yet have a way to objectively measure or quantify these experiences, and we don’t have a clear understanding of how they could be causally produced by physical processes. This is why some people consider consciousness to be a “non-physical” phenomenon.

It’s not that feelings are superfluous or unnecessary. Rather, they’re central to our existence as sentient beings. The challenge is figuring out how to account for them within our scientific understanding of the world. The hard problem of consciousness is essentially about bridging this explanatory gap. How can we expand our scientific methods and theories to account for the subjective, qualitative aspects of existence, and to uncover the causal mechanisms that give rise to them? That is the challenge that faces researchers in fields such as cognitive science, neuroscience, and philosophy of mind.

HARNAD: When ChatGPT lapses into echoing the empty weasel-words of the field, it’s time to give the topic a break…

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, but I also believe they are not sentient, for several other reasons too:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. It seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without feeling. But with what plants can do, it is almost trivial to design a model that can do it too. So there, feeling seems to be incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, other vertebrates, and, more and more, invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which is never absolute, even for gravity: there is a nonzero probability that apples will start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally, to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this 2nd order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong not to give the benefit of the doubt to similar animals where the probability is already so high, it would be wrong to give the benefit of the doubt where the probability of sentience is incomparably lower, and what is at risk in attributing it where it is highly improbable is precisely the protection the distinction would have afforded to the species for whom the probability of sentience is far higher. The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

Chomsky on Consciousness

Re: https://www.youtube.com/watch?v=vLuONgFbsjw

I only watched the beginning, but I think I got the message. Let me precede this with: yes, I have come to understand the problem of “consciousness” through the lens of practical ethics. But suspend judgement: it’s not just a matter of overgeneralizing one’s culinary druthers…

Prelude. First, an important aside about Tom Nagel’s influential article. As often happens in philosophy, words throw you. Tom should never have entitled (or, for pedants, “titled”) it “What is it like to be a bat?” but “What does it feel like to be a bat?”

We’d have been spared so much silliness (still ongoing). It would have brought to the fore that the problem of consciousness is not the metaphysical problem of explaining “What kind of ‘stuff’ is consciousness?” but the hard problem of explaining (causally) “How and why do organisms feel?”

The capacity to feel is a biological trait, like the capacity to fly. But flying is observable, feeling is not. Yet that’s not the “hard problem.” That’s “just” the “other minds problem.” Just a puzzle among fellow gentleman-philosophers, about certainty — but an existential agony for those sentient species that we assume do not feel, and treat accordingly, when in fact they do feel.

Easy Sequel. Noam is mistaken to liken the hard problem to the Newtonian problem of explaining the laws of motion. Motion (of which flying is an example) is observable; feeling is not. But, again, that is not the hard problem. Quarks are not observable; but without inferring that big (observable) protons are really composed of little (unobservable) quarks, we cannot explain protons. So quarks cannot be seen, but they can be inferred from what can be seen, and they play an essential causal (or functional) role in the (current) explanation of protons.

Feeling cannot be observed (by anyone other than the feeler – this is the “1st person perspective” that Nagel extolled and interviewer Richard Brown obnoxiously tries to foist on Chomsky). 

But even though feeling cannot be observed, it can be inferred. We know people feel; we know our dog feels. Uncertainty only becomes nontrivial when we get down to the simplest invertebrates, microbes and plants. (And, before anyone mentions it, we also know that rocks, rockets and [today’s] robots (or LaMDAs) don’t feel. Let’s not get into scholastic and sterile scepticism about what we can know “for sure.”)

So the “hard problem” of explaining, causally (functionally), how and why sentient organisms feel is hard precisely because of the “easy problem” of causally explaining observable capacities of organisms, like moving, flying, learning, remembering, reasoning, speaking. 

It looks for all the world as if, once we have explained (causally) how and why organisms can do all the (observable) things they can do, feeling — unlike the quarks in the (subatomic) explanation of what protons can (observably) do, or the forces in the Newtonian explanation of what billiard balls can (observably) do — is causally superfluous.

Solving the “hard problem” would be simple: Just explain, causally, how it would be impossible for organisms to do all (or some) of the things they can do if they did not feel. In other words, explain the causal role of feeling (adaptively, if you like – after all, it’s an evolved, biological trait).

But before you go there, don’t try to help yourself to feeling as a fundamental force of nature, the way Newton helped himself to universal gravitation. Until further notice, feeling is just a tiny local product of the evolution of life in one small planet’s biosphere. And there’s no reason at all to doubt that, like any other biological trait, feeling is explainable in terms of the four fundamental forces (gravity, electromagnetism, “strong” subatomic & “weak” subatomic). No 5th psychokinetic force.

The problem is just coming up with the causal explanation. (Galen Strawson’s “panpsychism” is an absurd, empty – and I think incoherent – metaphysical fantasy that does not solve the “hard problem,” but just inflates it to cosmic proportions without explaining a thing.)

So Noam is mistaken that the hard problem is not a problem. But it’s not about explaining what it feels like to see a sunset. It is about explaining how and why (sentient) organisms feel anything at all.

See: “Minds, Brains and Turing” (2011) as well as the (Browned out) discussion.

First Personhood

12 Points on Confusing Virtual Reality with Reality

Comments on: Bibeau-Delisle, A., & Brassard, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules, algorithms, that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes it, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It’s just symbol-manipulations. It could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine – finite-state hardware that can read, write, advance tape, change state or halt. (A minimal executable sketch appears just after this list.)
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it’s still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice-cube and no melting; just symbol manipulations interpretable as an ice-cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10.  What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part  (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality – which is simply incoherent, circular, self-referential nonsense.
  12.  Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice-cube for a “real” ice-cube.
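To make Points 1–4 concrete, here is a minimal executable sketch of a Turing-machine-style computation, in Python. The rule table is a made-up illustration (a unary “add one” machine), not anyone’s published program: the machine manipulates the symbols purely by their shapes, and the arithmetical interpretation is entirely the user’s.

```python
# A minimal sketch of shape-based symbol manipulation (Points 1-4).
# The rule table is a hypothetical example: a machine that appends
# one "1" to a block of "1"s. The symbols mean nothing to the machine.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Apply shape-based rewrite rules until the machine halts."""
    cells = dict(enumerate(tape))          # sparse tape; "_" is the blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # read the shape under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write                # write, move, change state
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

rules = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the block
    ("start", "_"): ("1", "R", "halt"),    # write one more "1" and halt
}

print(run_turing_machine("111", rules))    # prints "1111"
```

Nothing in the execution depends on what the “1”s mean (Point 1); that the output is interpretable as “3 + 1 = 4” is the user’s doing (Point 2); and the same rule table could be executed with pencil and paper, or by any other hardware (Point 3).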

Learning and Feeling

Re: the  NOVA/PBS video on slime mold. 

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970’s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which to me were really irritating and intrusive). The content was worthwhile – once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (shared by biological organisms and artificial devices, as well as by running computational algorithms) to learn.

The trouble with weasel-words like “intelligence,” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”). 

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do, is to learn, which is to change in what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can (or will eventually be shown to be able to) be done by artificial devices, and to be simulable computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically) but feeling cannot be, or, otherwise put, simulated feeling is not really feeling any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving or wet. 

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single cells and multicellular  — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.

SIGNALS AND SENTIENCE

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them, for their own advantage (if they eat the plant), or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics the coloration of another, poisonous species, a signal that deters predators who have learned that it is often poisonous. 

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms must eat other living organisms to survive. The only exceptions are plants, which can photosynthesize with just light, CO2 and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA‐based chemical signalling: the co‐ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195–210.


Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities. 

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth,  movement, or touch, which only become +/- at extreme intensities.) 

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Appearance and Reality

Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html

1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any).

2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.

3. Although the symbols need not be interpretable as meaning anything – there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game” – computationalists are mostly interested in algorithms that can be given a coherent systematic interpretation by the user.

4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths  and proofs of mathematics.

5. The Strong Church/Turing Thesis (SCTT)  is that almost everything in the universe can be simulated (modelled) computationally.

6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).

7. Computation can simulate only “almost” everything in the world, because — symbols and computations being digital — computer simulations of real-world objects can only be approximate. Computation is merely discrete and finite, hence it cannot encode every possible property of the real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model. (A minimal sketch of this appears just after this list of points.)

8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings.

9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.

10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!

11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of Matrix and the like, and, apparently, David Chalmers). 

12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).

13.  Nor have they understood the distinction between appearance and reality – the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.

14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.

15. Computation is just semantically-interpretable symbol-manipulation (Searle’s “squiggles and squoggles”); a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head, or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk-calculator, a computer screen, or VR peripherals, allowing users’ brains to perceive them through their senses rather than just through their thoughts and language.

16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)
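As promised at point 7 above, here is a minimal sketch (in Python, with illustrative parameters of my own choosing, not anyone’s published model) of how a digital simulation of a real-world object – a pendulum – is discrete and approximate, and of how the approximation can be tightened at will:

```python
import math

def simulate_pendulum(theta0, dt, t_end, g=9.81, length=1.0):
    """Discrete-time (Euler-type) integration of the pendulum equation
    d2(theta)/dt2 = -(g/L) * sin(theta), released from rest at angle theta0."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega -= (g / length) * math.sin(theta) * dt  # update angular velocity
        theta += omega * dt                           # then update the angle
    return theta

# Being digital, each run only approximates the continuous dynamics;
# shrinking the time step tightens the approximation as closely as we
# wish, given enough hardware capacity.
for dt in (0.1, 0.01, 0.001):
    print(dt, simulate_pendulum(theta0=0.5, dt=dt, t_end=2.0))
```

The same numbers could be checked against measurements of a real pendulum (point 9’s first test) or piped to VR peripherals to fool the senses (its second test); either way they remain interpretable symbols, not a pendulum.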

Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?

Wikipedia Talk on the Hard Problem of Consciousness

@J-Wiki: Your edits have been thoughtful, so I don’t want to dwell on quibbles. Tom Nagel’s remark is correct: No one knows how a physical state can ‘be’ a mental (i.e., felt) state. That is in fact yet another way to state the “hard problem” itself!

But to say instead “No one knows how a physical state can be or yield a mental state” is not just to state the problem, but to take a position on it, namely, the hypothesis of interactionist dualism (or something along those lines).

My edit was intended to avoid having the article take a position, by adding the possibility of the dualist solution (“yield”) to the possibility of a monist solution (“be”). This is in keeping with Chalmers’s original statement of the “hard problem”:
It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
He uses the verb “to arise”, which Merriam-Webster defines as “to begin to occur or to exist”. Of course, the same word could be used in the introduction, but it wouldn’t be the best writing style. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

But this is exactly what acknowledging that it is a “hard problem” is meant to avoid. Yes, no one has any idea how a physical state could ‘be’ a mental state, but that already covers the fact that no one has any idea how a physical state could cause a mental state either!

The inability to be something does not necessarily preclude the ability to cause something.J-Wiki (talk) 06:35, 11 September 2018 (UTC)

Singling out that particular symptom of the problem and elevating it to the statement of the (hard) problem itself amounts to giving one particular hypothesis a privileged position among the many (vacuous) hypotheses that have been the symptoms of the (hard) problem itself.

By the same token one might have extended the statement of the problem to include all of the popular hypothetical (and vacuous) non-solutions: (1) the physical is identical with the mental (materialism, identity theory), (2) the physical causes the mental (epiphenomenalism), (3) the mental interacts with the physical (interactionism), (4) the physical replaces the mental (eliminativism), (5) there is no mental (physicalism), (6) there is only the mental (mentalism), (7) the mental is our only perspective on the physical (dual aspectism), (8) mental states and physical states are all just “functional” states (functionalism), etc. etc.

All these non-explanatory non-solutions are already implied by the hard problem itself. That P might (somehow) be the “cause” of M is already one of the many inchoate hypotheses opened up by admitting that we have no idea whatsoever as to how P could “be” M (or M could “be” P).

Yes, there are other theoretical possibilities, but Chalmers didn’t consider them popular enough to mention. Therefore, the article, in summarizing the topic, shouldn’t.
Also, the recognition that there is a “hard problem” of consciousness does not imply that the solution must be monist. It is intended to make clear that finding physical explanations of even all of the “easy problems” will not solve the “hard problem”.J-Wiki (talk) 06:35, 11 September 2018 (UTC)

That’s why I think it would have been more NPoV to use one neutral copula as the verb-of-puzzlement (“to be”) rather than a neutral one plus an arbitrary choice among the many problematic hypotheses on the market (“to yield” — i.e., “to cause”).

As explained above, I don’t see the verb “to be” as neutral in this case.J-Wiki (talk) 06:35, 11 September 2018 (UTC)

From glancing at (but not reading) other Wp articles you have edited, a hypothesis occurs to me: might your own PoV be influenced by quantum hypotheses…?

Which PoV would that be?J-Wiki (talk) 06:35, 11 September 2018 (UTC)

As for me: “Hypotheses non fingo” —User:Harnad (talk) 12:47, 10 September 2018 (UTC) —User:Harnad (talk) 13:07, 10 September 2018 (UTC) —User:Harnad (talk) 13:10, 10 September 2018 (UTC)

Thank you for your comments.
Please see my replies interspersed above.
The essence of this discussion should be copied to the article’s talk page, so that others can benefit from the discussion for the purpose of editing the article.
J-Wiki (talk) 06:35, 11 September 2018 (UTC)
As you request, I will copy this exchange to the talk page of Hard problem of consciousness. Not to extend this side-discussion too long I will just make two replies:
(1) The hard problem is a problem of explanation: “how” and “why” do organisms feel rather than just function (insentiently, like machines)? Dave Chalmers did not claim to invent the hard problem, just to name it and point out that it is hard to explain how and why organisms feel rather than just function. Scientific explanation is not normally a metaphysical matter. “Monism” vs. “Dualism” (i.e., is there one kind of “stuff” in the universe [“physical”] or two kinds of “stuff” [“physical” and “mental”]?) is a metaphysical distinction, which in turn implies that the hard problem is hard for metaphysical reasons. But that is only one of many possible reasons why the hard problem is hard. Metaphysical dualism should not be given a privileged status among the myriad conceivable solutions to the hard problem, and it certainly should not be part of the definition of the hard problem. I think it is a mistake — and misleading to WP readers — to make metaphysical conjectures part of the definition of the hard problem. It is clear that some internal states of organisms are unfelt states and other internal states are felt states. The copula “are” here is not a metaphysical “are.” It takes no sides on how or why some internal states are felt. It just states that they are, and that explaining how or why this is the case is hard. The copula “is” is not a statement of the “identity theory” or any other metaphysical conjecture or scientific hypothesis. It is just the “is” we use in any subject/predicate statement.
(2) What PoV might someone interested in quantum mechanical puzzles unwittingly import into their view of the hard problem? Well, wave/particle “duality,” for a start…–User:Harnad (talk) 14:13, 11 September 2018 (UTC)
@Harnad: I’ll respond here instead of on the talk page where I saw this discussion since my response is not related to editing Hard problem of consciousness. One problem with many philosophical discussions of “mental states” (including the discussion above) is that the timing of such “mental states” is poorly specified. Philosopher Robert Prentner, for example, recently argued that the hard problem needs to be reframed with finer temporal specification.
(Notice, incidentally, how Prentner’s mention of Whitehead’s “subject-predicate dogma” relates to your statement above: the “is” we use in any subject/predicate statement.)
If you or J-Wiki know of any interesting recent discussions of the hard problem that address temporality on multiple scales as Prentner alluded to, I would be grateful if you could share relevant citations. Biogeographist (talk) 19:11, 11 September 2018 (UTC)

@Biogeographist: I’m afraid I don’t know of recent, relevant writings on this question. I can only say that I find “dual-aspect” theory as unhelpful as the other 8 nonsolutions listed in paragraph 9 above (or many more that could have been mentioned). I don’t know what Prentner means by “finer temporal resolution” (though I’m pretty sure that by “tantamount” he means “paramount”). My guess is that the “is” ambiguity (“is” as stating a proposition and “is” as making an identity claim) is not really a profound matter. There is always a problem with physical to mental or mental to physical predication because of the (unsolved) “hard problem.” We do not know how (or why) feeling is generated. Dualists insist on reminding us that we don’t even know whether feeling is (physically) generated, or somehow sui generis. Timing won’t help.

(I assume that the hope is that if the physical (functional) state and the mental (felt) state don’t occur simultaneously, this will somehow help sort things out: I think it won’t. I did note once, in a Benjamin Libet context (and Dan Dennett cites it in one of his books on consciousness), that it is impossible to time the exact instant of a mental event: it could precede, coincide with, or follow its physical correlate (and subjective report certainly cannot settle the matter!): there’s no objective way to pinpoint the subjective event except very approximately, not the fine-tuning Prentner seems to want. Saul Sternberg thought it could be done statistically, with averaging, as with event-related potentials. But I think it wouldn’t help either way. Whether feeling occurs before, during, or after a neural correlate, it does not help with the hard problem, which is a problem of causal explanation, not chronometry.) —User:Harnad (talk) 22:01, 12 September 2018 (UTC)

@Harnad: Thanks for the response. I’m familiar with Libet and some related discussions but all of that is quite old at this point. I suspect there is a role for chronometry in further elucidation of the hard problem but it will require, among other things, further experimental research (as always) and technological development. If “there’s no objective way to pinpoint the subjective event” then what is the warrant for even speaking of a “felt state” in contrast to an “unfelt state” (as the lead of Hard problem of consciousness does) when a “state” can’t even be pinpointed in phenomenal experience? (This is a question about the appropriateness of “state” as a common denominator between felt and unfelt, or subjective and objective, rather than about the hard problem in general. It’s also a rhetorical question: I don’t expect that there is a good response.) Biogeographist (talk) 15:23, 13 September 2018 (UTC)

@Biogeographist: First, “no objective way to pinpoint the subjective event except very approximately” is not the same as no way to pinpoint it at all.

Second, the limits of human timing accuracy for detecting felt states or events are pretty well known. I can say whether it felt as if my tooth-ache started a half-second earlier or later but not whether it started a millisecond earlier or later. So the temporal boundaries of felt instants are probably too coarse for pinning them to neural correlates (which can be much finer).

Of course one can always dream of new technology, but that would still only be based on more accurate timing of objective neural events (unless you are imagining either a drug or a form of brain stimulation that increases the limits of human timing accuracy for detecting felt states, which I think is unlikely, though not inconceivable).

But even if subjective detection could be made as accurate as objective (neural) detection, how can that more accurate chronometry help with causal explanation? As I said, the felt instant could precede, coincide with or follow its neural correlate, but none of the three options helps explain how or why (or even whether) neural events cause feelings.

The causal problem is not in the timing. It’s in the functionality. Neural events can clearly (and unproblematically) cause motor as well as other physiological activity (including “information-processing”), all of which can be objectively timed and measured. No causal problem whatsoever there. Suppose some neural events turn out to slightly precede or even be exactly simultaneous with felt states (within the limits of measurement): How would that help explain how and why the felt states are felt? Even if the felt states systematically precede their neural correlates (within the limits of measurement), how does that help explain how and why felt states are felt?

That’s why I think “temporality” is not going to help solve the hard problem. I think the real problem is not in the timing of either firings or feelings. The problem is that feeling seems to be causally superfluous for causing, hence explaining, our cognitive capacities — once the “easy problems” (which are “only” about the causal mechanisms of behavior, cognitive capacity and physiology) have been solved.

Imagine even a scenario where the feeling precedes its neural correlate, where the neural correlate can also occur without the preceding feeling, and where we can show that the neural correlate alone, without the preceding feeling, is incapable of generating some behavioral capacity (i.e., an easy problem), whereas when preceded by the feeling, it can. This sounds like the ultimate gift to the dualist: but what does it explain, causally? Nothing. It is just a just-so story, causally speaking. It leaves just as big a causal mystery as the scenario in which the neural correlate precedes or coincides with the feeling. None of this gives the slightest hint of a solution. Neither monism nor dualism solves the hard problem. It just soothes metaphysical angst, hermeneutically.

Now there could be a form of dualism that does give a causal explanation, hence a solution to the hard problem: If, in addition to the four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak atomic forces) there were a fifth force which corresponded to feeling (or willing), then we would be no more entitled to ask “how and why does this fifth force pull or push” than we are to ask how and why the other four fundamental forces pull or push. They are simply fundamental forces of nature, as described by the fundamental laws of nature, and as supported by the empirical evidence. — But the latter is exactly what feeling as a fifth force lacks. There is no empirical evidence whatsoever of a fifth fundamental force, whereas there is no end of observable, measurable evidence of the other four.

So even on the last hypothetical scenario (feeling precedes neural correlate and some behavioral capacity cannot be generated by the neural correlate alone when it is not preceded by the feeling), the “causal power” of feeling would remain a mystery: a hard problem, unsolved. The only thing I can say in favor of this fantasy scenario (which I don’t believe) is that if it did turn out to be true, it would mean that the “easy problems” cannot all be solved without the (inexplicable) help of feeling, and hence that some easy problems turn out to be hard! —User:Harnad (talk) 22:48, 13 September 2018 (UTC)

@Harnad: Thanks again for the response. It’s very interesting to read how you think. You went in a direction I wouldn’t have anticipated: I’m not sure where your sudden emphasis on “causal explanation” in your comment above comes from. (Perhaps it came from Prentner’s article, but I think he brings up causal explanation just to point toward the alternative of mereological explanation.) The word “causal” doesn’t appear in the Hard problem of consciousness article, and “not all scientific explanations work by describing causal connections between events or the world’s overall causal structure” (from: Lange, Marc (2017), Because without cause: non-causal explanation in science and mathematics, Oxford studies in philosophy of science, Oxford; New York: Oxford University Press, doi:10.1093/acprof:oso/9780190269487.001.0001, ISBN 9780190269487, OCLC 956379140). I don’t see why responses to the hard problem aimed at explaining the relationship between phenomenal experience and objective data, or at explaining the very nature of the phenomenal, would require a causal explanation. So I don’t think that “how can that more accurate chronometry help with causal explanation?” is the right question, but I’ll take the question with the word “causal” omitted. And I’ll admit that I don’t have a good answer, but I’ll keep thinking about it. I can say that I often don’t know very well exactly what it is that I am feeling, and I’m familiar enough with the psychotherapy research literature to know that I’m not alone—this is common even among relatively normal people (which includes me, of course), as philosopher Eric Schwitzgebel has also emphasized (e.g., Schwitzgebel, Eric (2011), Perplexities of consciousness, Life and mind, Cambridge, MA: MIT Press, ISBN 9780262014908, OCLC 608687514). So that adds another explanatory target for the hard problem: not just why and how people (OK, sentient beings) feel, but also why and how they don’t know what they feel. And I suspect there is a role to play for a better understanding of “temporal relations” (Prentner) and of many other things. I’ll happily concede that it may not help with causal explanation because, as I said, I don’t see how causal explanation is required. Biogeographist (talk) 01:18, 14 September 2018 (UTC)

@Biogeographist: Yes, Dave Chalmers may not have written about explanation or causal explanation in relation to the hard problem, but I have. ;>) And I don’t think the hard problem has much to do with our subjective perplexities about what it is that we’re feeling: A coherent causal explanation of how and why tissue-damage — besides generating the “easy” adaptive responses (limb-withdrawal, escape, avoidance, learning, memory) — also generates “ouch” would be sufficient to solve the hard problem (without any further existential or phenomenological introspection on the meaning or quality of “ouch”). Best wishes, —User:Harnad (talk) 12:19, 14 September 2018 (UTC)