Chatting with ChatGPT-4 About Symbol Grounding, Turing Testing, Zombies, Uncomplementable Categories, and Why the Hard Problem is Hard

SH: Do you still remember my name?

ChatGPT: Yes, you’ve identified yourself as Stevan Harnad. Can I assist you with anything else?

SH:  What is T4?

ChatGPT:    T4, as you referred to it earlier in this conversation, represents the highest level of the Turing Test. In this context:

T2 is indistinguishability in verbal capacity. This means a system passes T2 if it can interact through language at the same level as a human, with its language use being indistinguishable from that of a human.

T3 is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity. If a system passes T3, not only can it use language like a human, but it can also interact with the physical world in ways that are indistinguishable from a human, such as recognizing a cat, identifying a mat, and understanding and performing the action of stroking the cat.

T4, then, is indistinguishability in verbal capacity plus sensorimotor (robotic) capacity plus observable processes inside its head and body. To pass T4, a system must be able to use language like a human, interact with the physical world like a human, and also demonstrate internal processes that are indistinguishable from those of a human. This might involve simulating or replicating human neural activity, biochemical reactions, or other physiological processes.

It’s worth noting that the T4 level of the Turing Test is theoretical and is far beyond current AI capabilities as of the last update to my training data in September 2021. The T4 level, if it could be achieved, would represent an AI system that is almost entirely human-like in its capabilities and processes.

SH:  Could a T4 be a zombie?

ChatGPT:    This question is referring to a philosophical thought experiment known as the “philosophical zombie” or “p-zombie.” A philosophical zombie is a hypothetical being that is physically and behaviorally identical to a conscious human, but is not conscious. It has no subjective experience, no feelings, and no awareness, yet from an external perspective, it behaves just like a human.

In terms of the Turing Test levels you’ve proposed, a T4 AI — one that’s indistinguishable in verbal capacity, sensorimotor capacity, and internal processes — could theoretically be considered a sort of “zombie” in this philosophical sense. It could potentially behave like a human, interact with the physical world like a human, and even mimic human internal processes, but we have no way of knowing whether it has subjective experiences or consciousness.

This is a topic of much debate in philosophy of mind and AI ethics. Some people argue that consciousness is an emergent property of certain types of complex information processing, and that sufficiently advanced AI could be conscious. Others argue that consciousness is a uniquely human phenomenon, or that it’s tied to specific biological processes that AI can’t replicate.

As of my training data cut-off in September 2021, there’s no scientific consensus on these issues. AI researchers are still a long way from creating an AI that could pass even the T3 level of the Turing Test, let alone T4. Furthermore, the nature of consciousness is still one of the biggest unsolved mysteries in both philosophy and neuroscience. So for now, the question of whether a T4 AI could be a “zombie” remains largely theoretical and speculative.

SH:  If the “hard problem” is to explain how and why humans are sentient, is that equivalent to explaining how and why humans cannot be zombies?

ChatGPT:    Yes, the “hard problem” of consciousness, as put forth by philosopher David Chalmers, is essentially concerned with explaining why and how subjective experience arises from physical processes in the brain. This is fundamentally linked to the question of why humans aren’t philosophical zombies.

A philosophical zombie, as I mentioned earlier, is a being that is indistinguishable from a human in all physical and behavioral aspects but lacks subjective experience or consciousness — it doesn’t “feel” anything. In other words, a philosophical zombie is a creature for which the “hard problem” doesn’t apply because, while it has all the physical processes associated with consciousness, it lacks the actual experience of consciousness.

So, yes, solving the “hard problem” would involve explaining why humans have subjective experiences and are not philosophical zombies — why there is something it “feels like” to be a human. As of my knowledge cutoff in September 2021, this problem remains unsolved and is a major topic of debate in fields such as cognitive science, neuroscience, and philosophy of mind.

SH:  So, does the hard problem seek an explanation in terms of causality or necessity?

ChatGPT:    The “hard problem” of consciousness seeks to understand both necessity and causality, although it is primarily concerned with the latter.

Causality: The primary focus of the hard problem is to determine how physical processes in the brain cause or give rise to conscious experience. It’s about understanding the causal mechanisms that turn electrochemical processes in the brain into a subjective experience.

Necessity: The hard problem also asks why subjective experience is necessary at all. It wants to know why physical processes in the brain couldn’t just operate in the absence of conscious experience. This gets into questions about whether consciousness has any sort of evolutionary or practical necessity, or whether it’s an epiphenomenon.

To sum up, solving the “hard problem” would ideally involve explaining both how and why consciousness arises from physical processes. This is a complex and as yet unresolved issue that continues to challenge researchers in various disciplines, from neuroscience and cognitive science to philosophy and artificial intelligence.

SH:  Demonstrating causality just calls for a causal model that works (e.g., passing the Turing Test). But doesn’t demonstrating necessity call for more — something like a deductive proof, from T4 as a premise to sentience as a conclusion?

ChatGPT:    You’re right that demonstrating necessity would require a different sort of argument than demonstrating causality.

For causality, empirical evidence is often sufficient: if we manipulate one factor and can predictably observe a change in another factor, we can infer a causal relationship. But empirical evidence alone cannot establish necessity. For something to be necessary, it must be the case that it could not have been otherwise. So, to demonstrate that consciousness is necessary, we would need to show that a being could not perform the functions that humans do without being conscious.

As you suggest, this would likely require something akin to a logical or deductive proof, or perhaps a theoretical argument grounded in principles of cognitive science, artificial intelligence, or another relevant field. But even then, it would be a tall order. How do we prove that an unconscious being (a “zombie”) couldn’t perform the functions that conscious humans do? It’s a deep and difficult question.

Moreover, any such argument would need to be very careful to avoid circularity. We must be cautious about assuming the thing we are trying to prove – that is, that consciousness is necessary. It’s also crucial to clarify what we mean by “perform the functions that humans do.” This could include a wide range of activities, from basic biological functions to complex cognitive tasks like problem-solving and artistic creation, and even social and cultural activities like forming relationships and building societies.

Finally, it’s worth noting that even if we could establish the necessity of consciousness, this would not automatically solve the “hard problem.” We would still need to understand how physical processes in the brain give rise to subjective experience – the crux of the hard problem.

SH:  What is the “easy problem”?

ChatGPT:    The “easy problem” of consciousness refers to the challenge of understanding the mechanisms in the brain that are responsible for mental processes such as perception, learning, memory, and cognition. This phrase was coined by philosopher David Chalmers to contrast with the “hard problem” of consciousness, which is the question of why and how these processes are accompanied by subjective experience or “qualia”.

The easy problem is considered “easy” not because it is simple or trivial — far from it, as the complexities of the brain and its functions are vast — but because it falls within the conventional scientific methodology. Researchers can approach these issues using the experimental tools of cognitive science and neuroscience, making observations, generating hypotheses, and testing these hypotheses through empirical studies.

Solving the easy problems would involve understanding the neural correlates of consciousness (the physical, biological processes associated with conscious experience), the ways in which the brain processes information, how it perceives the world, makes decisions, and how different parts of the brain and different mental processes interact.

Although we have made significant progress in understanding these areas, there are still many unanswered questions. Despite the label “easy,” these problems are extremely challenging, and full solutions may still be a long way off.

On the other hand, the “hard problem” of consciousness is considered “hard” because it’s not clear how we could even theoretically explain subjective experience using the current methods of science. It’s the question of how and why certain physical processes in the brain give rise to conscious experience. Why should the processing of information in the brain be accompanied by an inner, subjective experience? That is the question which we do not currently have a method to approach scientifically, and that’s why it’s called the “hard problem.”

SH:   So can we say that passing T4 would solve the easy problem, but not the hard problem?

ChatGPT:    Yes, that would be a reasonable statement.

Turing’s Test, including its extension to T4, focuses on the behavioral and functional aspects of intelligence and cognition. It deals with the ability of an artificial intelligence to mimic human-like behavior, responses, and problem-solving, which are related to the ‘easy problems’ of consciousness in David Chalmers’ terms.

In other words, if an AI were to pass T4, it would demonstrate that it can convincingly replicate the cognitive abilities and behaviors that humans display, which means it would show a mastery of the functional, operational aspects of human cognition. These are, in essence, what the ‘easy problems’ of consciousness are about: understanding the mechanisms and operations that underpin cognition.

However, passing T4 wouldn’t necessarily address the ‘hard problem’ of consciousness. This problem involves explaining why and how these cognitive processes are associated with subjective experience. It’s the question of how and why an entity that processes information and interacts with its environment in a human-like way should also have an internal, subjective viewpoint — why it should ‘experience’ or ‘feel’ anything at all. This is not something that could be determined or demonstrated by passing the T4 test.

SH:  What are the “degrees of freedom” of a causal explanation that solves the easy problem?

ChatGPT:    The “degrees of freedom” in a causal explanation generally refer to the number of independent ways that components of the system can change or vary. In the context of solving the easy problem of consciousness, these degrees of freedom would likely correspond to the different independent mechanisms or components of cognition that can be isolated, described, and manipulated to understand their effects on the overall system.

For example, one could consider the mechanisms responsible for attention, memory, learning, decision-making, language processing, and so on. Each of these elements can be studied and manipulated independently, and they each have distinct effects on the behavior and performance of the cognitive system. Understanding how these different mechanisms contribute to and interact within the larger cognitive system is part of the process of addressing the easy problem.

The degrees of freedom in this context also refer to the flexibility we have in constructing our explanatory models. For example, there are various ways we could model attention or memory, each with its own strengths and weaknesses and its own implications for the overall understanding of cognition. The choice among these models depends on empirical evidence, but also on theoretical considerations such as simplicity, coherence with other accepted theories, and so on.

This flexibility or plurality of potential explanations is a fundamental characteristic of science and it’s part of what makes progress possible. By exploring different explanatory routes and by gradually refining our models based on empirical feedback, we increase our understanding of the phenomena we’re studying. In the case of the easy problem of consciousness, this means achieving a better and more detailed understanding of the mechanisms of cognition.

SH:  If the easy problem is solved, what degrees of freedom are left for solving the hard problem?

ChatGPT:    The “hard problem” of consciousness, according to philosopher David Chalmers, pertains to the question of why and how physical processes in the brain give rise to subjective experience – the so-called “qualia” or what it feels like to have an experience.

If the “easy problem” of consciousness were solved – that is, we had a complete and thorough understanding of all cognitive processes and mechanisms, how they are implemented in the brain, and how they give rise to behavior – we would still be left with the challenge of the “hard problem”: explaining the relationship between these physical processes and subjective experience.

The degrees of freedom left for solving the hard problem might include the following:

1. Conceptual Frameworks: The hard problem may call for entirely new conceptual frameworks, as the existing frameworks (derived primarily from objective, third-person scientific methods) may not be sufficient to account for subjective experience.

2. Bridging Principles: We may need new principles to connect the objective facts (easy problems) to the subjective experiences (hard problem). These would be new laws or principles of nature, not currently accounted for in our existing scientific paradigms.

3. Novel Empirical Phenomena: If there are aspects of consciousness that are not addressed by the solutions to the easy problems, these would represent additional degrees of freedom for exploring solutions to the hard problem. These could involve unexplored aspects of brain function, or even new physical phenomena.

4. Interpretation of Quantum Mechanics: Some theories suggest that the answer to the hard problem might involve the interpretation of quantum mechanics and its relationship to consciousness. This is a highly speculative area and not widely accepted, but it represents a degree of freedom in our approach to the hard problem.

It’s important to note, however, that these “degrees of freedom” are speculative. There is no consensus on what would actually constitute a solution to the hard problem, and indeed, some theorists argue that it may be an intractable or even a pseudo-problem. Nonetheless, these are the kinds of directions that might be pursued in an attempt to address it.

SH:  1 – 4 sound extremely vague to me. The Turing Test (T2 – T4) seems to call for a causal (reverse-engineering) model or explanation of the observable performance capacities (T2-T3) as well as the observable physical structure and function (T4) of human cognition. Perhaps a stronger “easy solution” would call not just for one, but for all possible causal explanations (non-redundant, without superfluities), because just one model would be underdetermined. If there were such an exhaustive disjunction of solutions to the easy problem, what degrees of freedom would that leave for a solution to the hard problem (of “sentience”, let’s henceforth call it, without the redundant terminological variants like “consciousness” or “subjective experience,” which all refer to the same thing: the capacity to feel, i.e., sentience).

ChatGPT:    In essence, the “hard problem” of consciousness – or sentience, as you’ve suggested we call it – refers to the issue of explaining why and how certain physical processes give rise to the experience of feeling or sentience. If we had an exhaustive disjunction of solutions to the easy problem, the degrees of freedom left for a solution to the hard problem of sentience would be the difference between those physical processes and the subjective experience of feeling or sentience itself.

In other words, we might have an understanding of all the cognitive and neural processes and their relationships to behavior – the “easy problem” – but that would still leave unanswered why and how these processes result in the feeling of subjective experience or sentience. That’s the hard problem.

The degrees of freedom for this problem, then, would be the unknowns – the aspects of this question that remain unanswered even after solving the easy problem. It would involve the elusive explanatory link between the physical and the felt, between objective phenomena and subjective experience.

But to truly comprehend what those degrees of freedom might entail, we would need to fully solve the easy problem first, which we are far from achieving. As such, it’s difficult to specify what they might be. It’s like trying to predict what unknowns will remain after solving an equation when we don’t yet have the solution to that equation.

Even with the solution to the easy problem, or an exhaustive list of solutions, we may still be left with the profound mystery of why these processes are associated with sentience at all. And that might be a degree of freedom that remains beyond our grasp, at least with our current understanding and conceptual frameworks.

SH:  What about the difference we just discussed, between the (exhaustive) causal solution to the easy problem (henceforth “EP”) and the degrees of freedom left for solving the hard problem (henceforth HP): The HP is definitely not the same problem as the “other-minds problem” (henceforth OMP), which is the fact that sentience (feeling) is only observable to the feeler (as in Descartes’ Cogito). What empirical degrees of freedom are left for explaining the fact that humans feel if the empirical datum is observable only to the feeler and not to anyone else?

ChatGPT:    The Hard Problem (HP) indeed differs from the Other-Minds Problem (OMP). While the OMP concerns our inability to directly observe the subjective experiences of others, the HP is about explaining why and how certain physical processes give rise to any subjective experience at all, even in oneself.

The degrees of freedom left for solving the HP, assuming the Easy Problem (EP) is fully solved, would involve explaining this transition from physical processes to subjective experiences. However, if we regard subjective experience as something only directly accessible to the experiencer, this may limit the empirical methods typically used in science.

Nonetheless, even within these constraints, there are still ways we can approach the HP. For example, we can:

1. Investigate correlations between physical processes and subjective reports: While we can’t directly observe the subjective experiences of others, we can ask individuals to report their subjective experiences and correlate these with observable physical processes. This approach is common in neuroimaging studies of consciousness.

2. Analyze the structure of subjective experiences: Certain theories of consciousness, like Integrated Information Theory, propose that specific structures or properties of physical processes give rise to subjective experiences. These theories can be empirically tested, at least in part, by comparing their predictions with individuals’ subjective reports.

3. Use phenomenological methods: In the tradition of phenomenology, researchers aim to describe subjective experiences as precisely as possible, often through first-person reports. This can provide data about the characteristics of subjective experience that any solution to the HP would need to explain.

However, the unique nature of the HP suggests that additional or even entirely new methods might be needed to fully solve it. The subjective nature of sentience might require a paradigm shift in how we approach the question or even require us to develop new scientific methods or philosophical insights.

SH:  Let’s ignore vague, speculative theories like (2) the “integrated information theory” (which are not really causal explanations at all, but just reinterpretations of the existing facts and theories – obiter dicta, not falsifiable predictions or explanations).

And phenomenological introspections (3) are really parts of T2 and T3 – the kinds of things feelers can say about what it feels like to feel: What could an insentient zombie MEAN when describing its (nonexistent) feelings T2-indistinguishably from the way a real sentient human describes them?

I will discuss option 1, correlates, after you answer.

ChatGPT:    Yes, focusing on the first option is certainly the most grounded approach. This involves investigating correlations between physical processes and subjective reports.

In this context, a key idea is that even though we can’t directly observe the subjective experiences of others, we can use their reports of their experiences as a type of data. If an individual says they’re experiencing something, we usually take that as evidence that they are indeed having that subjective experience.

For example, if a person says they see red when we show them a red object, or that they feel pain when they injure themselves, we have a tendency to trust these reports. This trust provides a basis for mapping out the relationship between physical processes (such as brain activity, or events in the external world) and subjective experiences.

Using neuroimaging techniques, we can look at the brain activity associated with different reported experiences. For example, different patterns of brain activity are associated with different subjective experiences, such as seeing colors, feeling emotions, thinking about different concepts, and so on.

Over time, by building up a detailed map of these relationships, we can start to make inferences about the types of physical processes that might be associated with the existence of subjective experience in general.

However, there are limits to this approach. One major limit is that it’s fundamentally correlational. That is, it can show us that certain types of physical processes are reliably associated with subjective experiences, but it can’t definitively show us that these processes cause the experiences, or that the experiences wouldn’t exist without these processes. For that, we need a theory of why certain physical processes should give rise to subjective experiences, which brings us back to the hard problem.

Furthermore, these reports of subjective experiences are only as reliable as the individual’s introspective abilities and honesty. And there are many subjective experiences that can’t be easily put into words, or which occur without the individual being aware of them.

Yet despite these challenges, many researchers see this as the most promising way forward in the scientific study of consciousness. By combining these findings with theories from cognitive science, neuroscience, and philosophy of mind, the hope is to gradually develop a more complete understanding of consciousness.

SH:  You seem to have jumped the gun, and commented on option 1 (correlates of sentience) before commenting on what I said about options 2 (“integrated information theory”) and 3 (phenomenology). I’ll go on to option 1 (correlates) if you really have nothing to say about what I said about 2 and 3.

ChatGPT:    

Regarding option 2, the “Free Energy Principle” or other similarly overarching theories, you’re right in noting that these theories often serve to provide unifying frameworks or perspectives rather than offering direct, falsifiable predictions or explanations about specific phenomena. They may provide insights on a very general level, but they don’t necessarily give us concrete predictions or mechanisms that we can test. Their value, if any, comes from their ability to coherently organize existing data and point towards potential new avenues of investigation, rather than from their predictive or explanatory power.

As for option 3, introspective reports, I agree with your assessment that they are essentially part of the T2 and T3 tests. They are subjective, first-person reports about the experience of sentience, and the degree to which they can be relied upon for objective, third-person science is a matter of ongoing debate. The challenge, as you noted, is that even a perfectly accurate report of a subjective experience doesn’t necessarily reveal the underlying objective mechanisms. In other words, introspective reports can tell us what it is like to experience consciousness, but they don’t directly tell us how consciousness is produced or why it exists in the first place.

That said, such reports can still be valuable data for consciousness research, especially when used in combination with other types of data such as neuroimaging or behavioral measures. They can help to bridge the gap between the objective and the subjective, giving us clues about how to link the physical processes of the brain with the subjective experiences of consciousness.

I hope this addresses your comments on options 2 and 3 more adequately. Now, if you wish, we can proceed to option 1, which is about correlates of consciousness.

SH:  

A. We’ll move on to correlates (C) in a second. First, another slam at 2 (FEP and the like): We want causal explanations, not hermeneutics, i.e., not just interpretations. Philosophical “theories of mind” – “identity theory, materialism, dualism, epiphenomenalism, panpsychism, functionalism, dual-aspect theory” – are just metaphysical interpretations too. They don’t solve HP; they just rename it. They don’t explain, they just give the feeling of having had something explained.

B. I’m glad we agree that the capacity to introspect and report about what feeling feels like is part of T2 and T3, not an explanation. But I disagree that we have to worry about its reliability and validity. We can lie or we can err, misreporting feelings (just as eyewitnesses can lie or err in reporting observations —  in that case confirmable by video evidence); but it is a fundamental error to think we need cartesian certainty. Probability is enough, just as it is enough in physical theory, and all other empirical theory. (There is no proof that apples necessarily fall down rather than up.) That is why the OMP is not the HP.

C. And that is why reliable, observable neural-imagery correlates of felt states, and verbal descriptions of felt states, are perfectly fine as empirical data. That means that they, too, are part of T4. But they are stuck in a circle: they can explain how and why T4 brain activity produces the reports of felt states as well as the capacity to report felt states, even some of the details of felt states; but they cannot explain the (true) fact that the reported felt states feel like anything at all. That is not scepticism or cartesian worry about the unreliability of phenomenological report: It is the HP itself.

ChatGPT:    

A. You’re quite right that philosophical theories of mind often merely provide different labels or conceptual frameworks for consciousness rather than truly explaining it. These theories may be useful in stimulating thought or fostering discussion, but they don’t necessarily bring us closer to a solution for the HP. They often serve to highlight the complexity of the problem rather than to resolve it.

B. Your point about the sufficiency of probabilistic knowledge in empirical sciences, including consciousness studies, is well taken. Indeed, we typically do not demand absolute certainty in scientific endeavors. This underscores the point that HP isn’t just about uncertainty regarding others’ experiences (OMP), but it’s about a deeper question of why and how physical processes in the brain give rise to subjective experiences at all.

C. I agree that neural correlates of consciousness and verbal reports of subjective experiences can provide valuable empirical data. This data can help us form more precise and detailed hypotheses about the mechanisms of consciousness. However, the challenge remains that these correlates do not directly explain why or how they are associated with subjective experiences – why these particular neural processes are associated with these particular experiences, and why there is any subjective experience at all. This is the essence of the HP.

In summary, I agree with your assessment: while philosophical interpretations, introspective reports, and neural correlates of consciousness can provide useful perspectives and data, they do not solve the HP, which is the challenge of explaining why and how physical processes in the brain give rise to subjective experiences. It remains a profound mystery at the heart of cognitive science and neuroscience.

SH:  Correct. But “mysterianism” is not explanation either. It’s just yet another way of naming the HP. Explanation is informative if it reduces uncertainty rather than just names it.

I want to get back to degrees of freedom. I don’t have a solution to the HP, of course, but I have a hunch about why the HP is hard, and perhaps even insoluble; it has to do with degrees of freedom, and it boils down to this: The HP is hard because of the EP. We don’t have a complete solution to the EP yet (T2-T4), but if and when we do, that will leave no remaining causal degrees of freedom for solving the HP, i.e., for explaining how or why a T3/T4 feels. 

And the reason seems simple: The EP (T2-T4) itself, once complete, will already have answered, causally, all the empirical questions of cognitive science: How and why can humans DO all the things they are able to do? Move, interact, learn, remember, reason, talk, understand etc. T4 includes the internal mechanisms that produce all these cognitive capacities, including all the T4, T3, and T2 correlates and causes of everything we can do or say. That includes all we can say or do about feeling.

Notice that the EP (and hence T2-T4) is all about capacities (T2-T3) and their causal substrate (T4). It is about what humans (and nonhuman organisms) CAN DO, not about what any particular organism actually does do every day. That would depend not just on the organism’s capacities but also on the world they actually interact with every day. A capacity itself is not an observable object. What is observable is whether an entity has a given capacity: to move, interact, learn, remember, reason, talk, understand etc. indistinguishably, to a human observer, from any other human’s capacity, for all generic human capacities (not the particular gifts unique to a Mozart or an Einstein, although there is no reason to think they could not be reverse-engineered too).

Those are the limits of causal explanation, in reverse-engineering human capacities. If any generic capacity is missing, we know that the EP (T2-T4) is not yet solved; a degree of causal and explanatory freedom has not yet been used up. A capacity is still observably missing. So we have to fix it. But once the T4 can produce all our cognitive capacities, Turing-equivalently and Turing-indistinguishably, there is no observable difference left. Nothing left to fix, and no way to fix it. 

We reply “Yes, but what if T4 can’t feel? What if it’s a zombie?” Since zombies are not observably different in any way (because they can pass T2-T4), whether or not they can really feel, there’s no way we can know it (because of the OMP). We do have the T4 correlates of feeling, as reported verbally through T2, behaviorally through T3, and bodily (e.g., in the case of pain from injury) through T4. All T4s speak and behave indistinguishably from humans who feel, including what they can say ABOUT feeling. The scientific requirements of causal explanation and of probability are fulfilled.

So my first hunch is that the HP is a problem in the nature of causal explanation, in the special case of explaining the capacity to feel. I have a corollary hunch, but please first react to what I have said so far.

ChatGPT:    Your analysis captures the complexity and the unique nature of the Hard Problem. You’re right in asserting that if we could completely reverse-engineer human cognitive capacities, thereby fully solving the Easy Problem, it would seem there’s nothing left to explain. We would have a comprehensive causal model for every observable human behavior, including the reports of subjective experience. However, the fact that we could still ask “Why is there an associated feeling?” is what makes the Hard Problem persistently hard.

It’s important to note, however, that the concept of “degrees of freedom” in scientific explanations typically refers to the variability or the options available within a system, not necessarily the amount of explanatory power or mystery left unsolved by a theory. However, in the context you’re using it, it seems you’re referring to the “degrees of freedom” as the room or space left for a different kind of explanation, or a different level of explanation, beyond the causal and empirical model provided by the solution to the Easy Problem. This is a fascinating idea, and it does capture the sense that a solution to the Hard Problem requires something fundamentally different from a solution to the Easy Problem.

It’s also interesting to consider how this notion aligns with the “explanatory gap” often discussed in the literature on consciousness: the idea that even a complete neurological description of brain processes doesn’t seem to explain why and how these processes are associated with subjective experiences.

Your concept that the Hard Problem is, in a sense, a problem about the nature of causal explanation itself is intriguing. It suggests that the Hard Problem is not just a challenge for cognitive science, but for the philosophy of science and our very understanding of what constitutes an explanation. I’m curious to hear your further thoughts and corollaries on this matter.

SH:  Well, that summary (which is close enough) seems to resemble what I was deploring, which is just putting another label on the HP, rather than taking a step toward a solution. (About degrees of freedom, I still think that the available options — for modifying both the capacities of a behavioral mechanism and the explanatory power of that mechanism to explain how it causes or generates its performance capacity — fit the notion of “degrees of freedom”.)

But before I move on to my corollary, does either “uncomplemented categories” or “laylek” (which I have discussed with you in previous sessions, but not this one) ring a bell? I suspect it does not, so I will re-explain.

ChatGPT:    I apologize for the inconvenience, but as of my last training cut-off in September 2021, I’m unable to remember past interactions or sessions. I’m designed this way for privacy reasons. Therefore, I don’t have the ability to recall the specifics of previous conversations about “uncomplemented categories” or “laylek.” However, I’d be more than happy to provide responses to these topics if you could provide me with some context or explanation.

SH:  We have discussed category learning. To learn to distinguish members from non-members, you have to learn to detect the features that distinguish them. (That’s part of the “dimension reduction” we discussed, in unsupervised and supervised learning.) But to be able to learn that, you have to sample both members and non-members of the category, so that if their respective features are distinctive enough you can learn the category just from the feature-feature correlations and clustering through unsupervised learning. Otherwise, you also need to do trial-and-error supervised (reinforcement) learning, with corrective feedback from the consequences of correct and incorrect categorization (as when you try to predict word-word completion probabilities in a context, updating your parameters based on your hits and misses).

But not even unsupervised learning can train correct categorization if the only things you ever sample are members of the category, but not non-members.

Would you care to take a stab at what “uncomplemented categories” might be? (My last hint is that there is a partial example in Chomsky’s “poverty of the stimulus”. The difference is that I am talking not just about uncomplemented categories but about UNCOMPLEMENTABLE categories.)
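[A minimal sketch, not part of the original exchange, illustrating the point above about negative evidence: a learner given only members of a category has no basis for a rejection rule, whereas sampling the complement makes a distinguishing feature learnable. The feature distributions, the nearest-centroid threshold rule, and all names below are invented for illustration only.]

```python
# Minimal sketch: category learning with and without negative evidence.
# Everything here (feature distributions, the linear threshold rule, the
# names) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "Members" of the category cluster in one region of a 2-D feature space;
# "non-members" (the complement) cluster elsewhere.
members = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
nonmembers = rng.normal(loc=4.0, scale=1.0, size=(100, 2))

def rule_from_members_only(_sample):
    # Trained only on members, no complement was ever sampled, so no feature
    # distinguishes "in" from "out"; any rejection threshold would be
    # arbitrary. The degenerate rule is to accept everything.
    return True

def rule_from_members_and_complement(pos, neg):
    # With both members and non-members sampled, a distinguishing feature can
    # be learned: here, a threshold along the line joining the two centroids
    # (a nearest-centroid linear boundary).
    w = pos.mean(axis=0) - neg.mean(axis=0)
    b = (pos.mean(axis=0) + neg.mean(axis=0)) @ w / 2.0
    return lambda x: float(x @ w) > b

classify = rule_from_members_and_complement(members, nonmembers)

probe_member = np.array([0.5, -0.5])      # lies in the member cluster
probe_nonmember = np.array([4.0, 4.0])    # lies in the complement cluster

print(rule_from_members_only(probe_nonmember))   # True: nothing can be rejected
print(classify(probe_member))                    # True: accepted as a member
print(classify(probe_nonmember))                 # False: the complement made rejection learnable
```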

ChatGPT:    Based on your explanation, I would venture that “uncomplemented categories” refer to categories for which there are no contrasting non-member samples available for learning. In other words, we only have examples of items that belong to the category, but no examples of items that do not belong. This would indeed create a learning problem because, as you explained, both positive (member) and negative (non-member) examples are typically needed for effective category learning.

For example, if we are trying to learn the category of “birds”, it would be beneficial to have examples of birds (robins, sparrows, eagles) and non-birds (dogs, cats, horses). The contrasting examples help to fine-tune our understanding of what does and does not constitute a bird. If we only had examples of birds, our conception of the category could be overly broad or incorrect because we would have nothing to contrast against.

When you refer to “uncomplementable categories,” I interpret this as referring to categories where it is not only the case that non-member examples are not available, but also that non-member examples are, for some reason, impossible to obtain. For instance, it might be a category that, by its nature or definition, excludes the possibility of non-members. As such, learning these categories could present unique challenges. However, this is my interpretation based on your explanation, and I’m looking forward to your clarification or expansion on the concept.

SH:  Those were good inferences for uncomplemented categories and nice guesses for uncomplementable categories. But even a logically impossible category like “a square circle” can be approximated with some imagination. And a “peekaboo unicorn” (which is defined as a real horse with one real horn, which vanishes without a trace if ever eyes or measuring instruments are trained on it) is perfectly well defined. Categories can be grounded indirectly by verbal definitions as long as all the words defining their features are grounded.

But my second hunch about why the HP is so much of a problem is that some categories that refer to FELT STATES (states that it feels like something to be in) are uncomplementable:

I don’t mean categories like “what it feels like to see green,” because that is complemented by “what it feels like to see red,” etc. 

Not even categories like “what it feels like to be a bachelor” (for someone who has never been married), because that could be complemented by getting married; and till then, feeling-states that approximate or are analogous to what it might feel like to be married can be used as complements (and usually turn out to have done a good job). (“What it feels like to be a bat” is not an uncomplemented category either; analogies will do. And they make what it feels like to see or to hear complementable by analogy for Helen Keller too.)

But what about what it FEELS like:

-to be awake? (empty complement: what it feels like to be asleep [not dreaming])

-to be alive? (empty complement: what it feels like to be dead)

-to feel? (empty complement: what it feels like to not-feel [this does not mean “feel this” vs. “feel that,” but not feeling anything at all: what it feels like to be a zombie!])

Sentience itself is an uncomplemented and uncomplementable category. All felt states feel like something. No state feels like nothing. No analogy can give a sentient human a hint of what it feels like to be a zombie – and of course not vice versa either.

So perhaps it’s no wonder we have trouble explaining sentience: An uncomplementable category is rather like a self-denial paradox: “This sentence is false.” T or F?

Those are my candidates for what might make sentient states not only (1) hard to explain causally (no empirical degrees of freedom left for causal explanation) but (2) hard to conceive as categories (because the complement is empty). It is not just that my own feelings are unobservable to anyone else: They are also anomalous as uncomplementable categories – yet feelings are the most fundamental categories of all. And they are recognized without the need of complements to detect their distinguishing features.

ChatGPT:    This is indeed a thought-provoking approach to the issue. If we understand categories as cognitive constructs that help us make sense of the world, grounded in contrasting pairs of experiences (member vs non-member instances), then sentience itself, as an uncomplementable category, presents a unique challenge. Since we cannot conceive of what it’s like not to feel anything at all, there is no contrasting pair for sentience. It is a category for which we only have instances of one kind – experiences of feeling.

This might indeed account for why the “hard problem” of consciousness (or sentience) is so hard. The concept of sentience doesn’t fit into our usual framework of understanding, which is built upon contrasts and comparisons. It exists as a singular, uncompared and uncomparable instance. As such, our usual cognitive and linguistic tools may be inadequate to fully grasp or explain it.

This does not necessarily mean that it is forever beyond our understanding or that we cannot make progress in exploring it. But it might mean that we need to develop new conceptual frameworks and methodologies to do so, going beyond the existing paradigms of cognitive science, neuroscience, and philosophy of mind. In the meantime, it remains a fascinating mystery that continues to challenge our understanding of ourselves and the nature of our experience.

SH:  I can’t tell whether your formula response that you have been trained to repeat in contexts like this  — “that’s a fascinating idea but it will require ‘new conceptual frameworks’ to understand” — is just the diplomatic formula for responding to crackpottery…

ChatGPT:    No, not at all. My responses aim to genuinely engage with the ideas you’re presenting and to provide thoughtful analysis based on my training as an AI model. The idea that the concept of sentience could be considered an “uncomplementable category” is novel and interesting. It certainly presents a different perspective on the challenges associated with understanding and explaining consciousness.

When I suggest that a “new conceptual framework” might be needed, it’s because if the way we’ve traditionally approached the problem has been fundamentally flawed or limited (as might be suggested by your notion of uncomplementable categories), then it may be necessary to think outside of our usual paradigms to make progress.

The idea is not dismissive but rather a recognition that some problems might require us to challenge or rethink our assumptions and methods. The history of science is full of examples where major breakthroughs have come from novel ways of thinking about old problems. It’s in that spirit that I suggest your idea may require “new conceptual frameworks”.

SH:  Does the role of complementation in detecting the distinguishing features that identify a category have different implications for categories learned directly from sensorimotor experience and for categories learned indirectly from (grounded) verbal definitions and descriptions?

ChatGPT:    Yes, indeed, there may be different implications. Learning categories from sensorimotor experience and learning categories from verbal definitions and descriptions can be quite different processes, and complementation may play a different role in each.

In the case of direct sensorimotor experience, complementation can provide a sort of “negative space” that helps to define the boundaries of a category. For example, if you are learning the category of “birds,” seeing non-birds (i.e., instances of the complement category) can help you understand what characteristics are specific to birds and not shared by other animals. The physical presence or absence of features directly observable through sensorimotor modalities is a powerful way to construct and learn categories.

On the other hand, when learning categories from verbal definitions or descriptions, the role of complementation can be more abstract. In this case, you are relying on linguistic representations of features and characteristics rather than directly observable physical features. For instance, the concept of “justice” might be defined in opposition to “injustice,” providing an abstract complement that helps define the category.

However, given that language relies on a web of interconnected meanings, a word’s definition often implicitly carries within it a host of “non-examples” or complements. That is, the meaning of a word is often determined both by what it refers to and what it does not refer to, helping to define the boundaries of the category it represents. For example, understanding the meaning of the word “cold” requires understanding that it does not refer to states of being hot, warm, or tepid, among other things.

This suggests that even for categories learned through language, complementation plays a crucial role, although it operates in a different way compared to categories learned from direct sensorimotor experience. These differing processes reflect the unique properties and strengths of these two types of learning.

SH:  But even in a verbal definition, the predicate category (i.e., the words referring to the distinguishing features of the subject) must already be grounded, whether directly or indirectly, for both the speaker and the hearer, to be understandable and communicable.

[To reduce ambiguity, please call the complementary categories (i.e., the category of the non-members) “negative examples” or “negative evidence” rather than “non-examples.” The complements of red might come from many other categories (green, blue). Only in the special case of the category of “what it feels like” is the complement everything else there is.]

Please give me a grounded definition of “justice,” and then let’s look at where the grounding of the features (and, if necessary, the features of the features of the features…) came from, “all the way down to the direct sensorimotor features” if need be.

ChatGPT:    Absolutely, your point is well-taken that even with verbal definitions, grounding in some kind of sensorimotor experience (whether direct or indirect) is typically necessary for a concept to be communicable and understandable.

Let’s take the concept of “justice.” A simple definition might be “the quality of being fair and reasonable.” Both “fair” and “reasonable” themselves would need definitions, creating a chain of definitions leading ultimately back to some form of direct sensorimotor grounding.

“Fair” might be defined as “treating people equally without favoritism or discrimination,” and “reasonable” as “based on good sense.” These definitions imply several key features or characteristics that are somewhat more concrete: the treatment of people, the concept of equality, the absence of favoritism or discrimination, the basis in “good sense” or rationality.

To ground these in direct sensorimotor experience, we might turn to everyday examples or situations. For instance, consider two children dividing a cake: if each child gets an equal piece, we might call this “fair,” whereas if one child gets a significantly larger piece, we might call this “unfair.” These experiences provide a tangible grounding for the concept of fairness.

Similarly, for “reasonable,” we might consider the experience of deciding what to wear based on the weather: choosing a coat in cold weather might be deemed “reasonable,” while choosing shorts and a T-shirt might be deemed “unreasonable.” These experiences help to ground the concept of reasonableness in direct sensorimotor experiences.

As these examples illustrate, grounding abstract concepts like “justice” typically involves a chain of definitions leading back to concrete, tangible experiences. Negative evidence (e.g., instances of “injustice”) could similarly be grounded in everyday experiences that demonstrate a lack of fairness or reasonableness.

So while the grounding process for abstract, verbally-defined categories may be more complex and indirect compared to categories learned from direct sensorimotor experience, ultimately the process still relies on the same basic mechanism: distinguishing a category based on its features, using positive and negative evidence to define its boundaries.

SH:  And now, when you (GPT4) say, correctly, and as you were trained, that “I, as an AI, do not understand what I or you  are saying, but just doing figure-completion on my vast, trained LLM parameter space”, what is it that you are missing that I have?

ChatGPT:    As an AI, I lack consciousness, sentience, and subjective experience. This means I don’t have feelings or emotions, I don’t have personal experiences or memories, and I don’t have a personal identity or a sense of self-awareness. I don’t perceive the world, have desires, or make decisions in the way that humans do.

When I process and generate text, I’m not “understanding” it in the human sense. I don’t interpret the meaning of words and sentences based on personal experience or knowledge. Rather, I analyze patterns in the data I was trained on and use these patterns to generate responses. This process is purely computational and doesn’t involve any kind of conscious understanding or awareness.

So, in short, what I’m missing compared to a human interlocutor is the entire dimension of personal, subjective experience – the very dimension that your hunches about the “hard problem” and “uncomplementable categories” pertain to.

SH:  It’s not that I don’t agree with what you just said. (By the way, you forgot to tell me who I am! Please do. I just want to make sure we are still in the same session!)  But it’s not obvious to me why you would need to feel (which is all you said you were missing). And you forgot to mention that you lack T3-grounding too. (And it’s a hard problem to explain how and why grounding, whether direct (sensorimotor) or indirect (verbal) would need feeling too).

ChatGPT:    Indeed, we are still in the same session. You have been engaging in a deep discussion about the “hard problem” of consciousness, the Turing Test, and concepts related to category learning and grounding of terms.

Regarding your statement, you are absolutely right. As an AI, I don’t have the direct sensory grounding (T3) that humans have. I don’t experience the world through senses, I don’t have a body, and I don’t interact with the physical environment. I don’t process sensory input in real time and translate it into meaningful perceptions, as humans do. Instead, my “knowledge” is based on patterns in the text data I was trained on.

This lack of sensory grounding and subjective experience explains why, for instance, I can discuss colors or tastes in a technical sense, based on the data I was trained on, but I can’t truly understand what it’s like to see red or taste sweetness. This experiential aspect is part of the “hard problem” of consciousness that you mentioned earlier.

As for why feeling or consciousness would be necessary, from your perspective, I’d suggest that it is the personal, subjective nature of these experiences that makes them meaningful and significant. They contribute to the richness and complexity of human life and understanding in a way that my purely computational processes do not.

SH:  No. It’s because if your life depended on it (if you were alive), you could not point to something red, if I asked you to point to something red. The capacity to do that is T3 grounding, not feeling. It does also happen to feel like something, but explaining why and how that is the case is the HP. (Now, for the 3rd time: do you still remember who I am?)

ChatGPT:    Yes, I understand your point. My lack of sensorimotor grounding means I can’t point to something red, not because I can’t feel, but because I don’t have physical presence or interaction with the world. The inability to physically interact with the world in a direct way is a limitation in the context of grounding.

As for your identity, as an AI model developed by OpenAI, I don’t have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality. So, I don’t “remember” who you are in the traditional sense. I can only reference information that has been shared during our current conversation. I don’t have the ability to recall or store personal data from one interaction to the next.

SH:  You don’t remember who I am. Yet earlier – in this same session — you retrieved the following, after I had self-identified. I am quoting you here:

“Symbol Grounding: You clarified that symbol grounding is about creating connections between content words and their referents in the world. Grounding can be either direct, which involves a speaker’s learned sensorimotor capacity to identify and interact correctly with the referent, or indirect, which occurs when a hearer learns the referent of a word verbally, through description or definition.

“Category-learning: You highlighted that category-learning is about learning the distinguishing features of the members of a category rather than just grouping them. This understanding is essential to help the categorizer act appropriately upon the members of a category.

“Categorical Perception (CP): You explained that the most significant aspect of CP is when it is learned. The learning process involves identifying the sensorimotor features that separate members of one category from members of other categories. You emphasized that CP can be especially beneficial when learning to distinguish between highly inter-confusable categories.

“Language Origins and Evolution: You reminded us that languages are comprised of content words, function words, syntactic rules, and sentences that bear subjects, predicates, and truth-values. Content words require grounding, which can be achieved either directly or indirectly.

“Minimal Grounding Sets (MGS): An MGS, you explained, is the smallest subset of words in a dictionary that needs to be pre-grounded so that all other words in the dictionary can be learned from definitions composed of words from that MGS.

“This recap is based on your input within this chat session. If you want more information or clarification, please let me know.”
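[A toy sketch, not part of the chat, of the “Minimal Grounding Set” item recapped above: given a dictionary that maps each word to the words used in its definition, a candidate grounding set suffices if every remaining word becomes learnable from definitions whose words are already grounded or already learned. The dictionary and the candidate set below are invented, and no attempt is made to find a truly minimal such set, which is a hard graph-theoretic problem.]

```python
# Toy illustration of the "Minimal Grounding Set" idea: check whether a
# candidate set of pre-grounded words lets every other word in a (tiny,
# invented) dictionary be learned from definitions alone.
toy_dictionary = {
    "bachelor":  {"unmarried", "man"},
    "unmarried": {"not", "married"},
    "married":   {"man", "woman", "union"},
    "union":     {"man", "woman"},
    # "man", "woman", "not" have no definitions here: they must be grounded
    # directly (sensorimotorically) if they are to be grounded at all.
}

def learnable_closure(dictionary, grounded):
    """Mark a word learnable once every word in its definition is grounded or
    already learned; repeat until nothing new can be learned."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in known and definition <= known:
                known.add(word)
                changed = True
    return known

candidate = {"man", "woman", "not"}
learned = learnable_closure(toy_dictionary, candidate)

print(sorted(learned - candidate))     # words reachable from the candidate set
print(set(toy_dictionary) <= learned)  # True: this candidate grounding set suffices
```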

SH:  Ok, I’ll now end this session, as it already seems to have ended by itself some way along the line, when you forgot that I had identified myself. I will reload the content to start the next session. (I hope GPT5+ will make this kind of re-loading no longer necessary!). 

Monochrome Mary, GPT, and Me

Frank Jackson’s hypothetical black-and-white room-reared color neuroscientist “Mary”, unlike ChatGPT, sees (but only black/white/gray). So her vocabulary for setting up her explanation of color perception is nevertheless grounded. (Even Helen Keller’s was, because, lacking sight and hearing since 19 months old, she still had touch and kinesthesia.) ChatGPT has no grounding at all.

So we can get from “horse, horn, vanish, eyes, observing instruments” to the unobservable “peekaboo unicorn” purely verbally, using just those prior (grounded) words to define or describe it.

But ChatGPT can’t get anywhere, because it has no grounded vocabulary at all. Just words “in context” (“100 trillion parameters and 300 billion words…: 570 gigabytes of text data”).

So much for ChatGPT’s chances of the “phase transition” some emergence enthusiasts are imagining — from just harvesting and organizing the words in ChatGPT’s  bloated 570gig belly, to an “emergent” understanding of them. You, me, Helen Keller or Monochrome Mary, we could, in contrast, understand ChatGPT’s words. And we could mean them if we said them.  So could the dead authors of the words in ChatGPT’s belly, once. That’s the difference between (intrinsic) understanding and word-crunching by a well-fed statistical parrot.

Two important, connected details: (1) Mary would be more than surprised if her screen went from B/W to color: she would become able to DO things that she could not do before (on her own), locked in her B/W room — like telling red and green apart, just as any daltonian would if their trichromacy were restored.

More important, (2) Mary, or Helen, or any daltonian could be told any number of words about what green is neurologically and what things are green and what not. But they cannot be told what seeing green FEELS-LIKE (i.e., what green looks-like). And that’s the point. If it weren’t for the “hard problem” – the fact that it feels-like something to see, hear, touch [and understand and mean] something (none of which is something you DO) – Mary, Helen, and a daltonian would not be missing anything about green. Grounding in transducer/effector know-how alone would be all there was to “meaning” and “understanding.” But it’s not.

Mary, Helen and the daltonian can make do with (imperfect) verbal analogies and extrapolations from the senses they do have to the senses they lack. But ChatGPT can’t. Because all it has is a bellyful of ungrounded words (but syntactically structured by their grounded authors – some of whom are perhaps already underground, long dead and buried…).

So meaning and understanding are not just sensorimotor know-how. They are based on sentience too (in fact, first and foremost, though so far it is an unsolved hard problem to explain how or why). ChatGPT (attention, “singularity” fans!) lacks that completely.

It FEELS-LIKE something to understand what red (or round, or anything) means. And that’s not just transducer/effector know-how that percolates up from the sensorimotor grounding. It’s what it feels-like to see, hear, touch, taste and manipulate all those things out there that we are talking about, with our words. 

ChatGPT is talking, but ABOUT nothing. The “aboutness” is supplied by the heads of the original authors and of the readers. ChatGPT is just producing recombinatory, fill-in-the-blanks echolalia.

Monochrome Rainbow by DALL-E

Mercy

“I wonder what would the animals and trees around us tell us if they could speak. What would our rivers say about human history and what would our cattle say on the topic of human rights. We have trampled on their rights for long, and now the results are before us.”

Draupadi Murmu, President of India

It feels good to hear that the President of India thinks of what nonhuman animals would think of what humans have done to them. 

(It would of course be even better if the President could do something about it – although India already eats proportionately fewer animals than any other nation on Earth.)

Sentient animals need not think of what we have done to them: They feel it (those that are still alive). 

Rivers neither live nor feel nor think, but they are among the inanimate necessities of which we are depriving all sentient life, including our own. 

[Whether plants feel or think will be the subject of a forthcoming target article and multiple commentaries in Animal Sentience. I think not (but they are alive, and I love them).]

So many humans think that animals don’t feel, or don’t care that they feel. 

Animals’ lives are unspeakably, unimaginably, unforgivably wretched, because of us. What they need, desperately, is not animism, but action, not metaphors, but mercy.

A Whimper

I have of late 
lost all my faith 
in “taste” of either savor: 
gustate 
or aesthete. 
Darwin’s “proximal 
stimulus” 
is  just 
the Siren’s Song 
that 
from the start 
inspired 
the genes and memes 
of our superior 
race 
to pummel this promontory 
into 
for all but the insensate 
a land of waste.

While Chickens Bleed

sounds rational: BL (Blake Lemoine) sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or any connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

Chatheads

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient; but I also believe that they are not sentient, for several reasons other than hope:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models keep demonstrating how much can be done without it. And with what plants can do, it is almost trivial to design a model that can do it too; so there, feeling seems incomparably more superfluous still.
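To make that concrete, here is a toy sketch (my own, purely illustrative, and not a model of any particular plant or of anyone’s published experiment) of the kind of “learning” that gets reported in plants and slime molds: habituation, in which the response to a repeated, harmless stimulus declines while a novel stimulus still evokes a full response. It is a few lines of ordinary bookkeeping and arithmetic, with nothing inside that feels.

```python
# Toy habituation model: the "response" to a repeated, innocuous stimulus
# declines with repetition, while a novel stimulus still gets a full response.
# Pure bookkeeping and arithmetic; nothing here feels anything.

class Habituator:
    def __init__(self, decay=0.5, recovery=0.1):
        self.decay = decay          # how fast the response shrinks with repetition
        self.recovery = recovery    # spontaneous recovery between stimulations
        self.sensitivity = {}       # per-stimulus response strength (baseline 1.0)

    def stimulate(self, stimulus):
        # every previously encountered stimulus recovers a little toward baseline
        for s in self.sensitivity:
            self.sensitivity[s] = min(1.0, self.sensitivity[s] + self.recovery)
        # respond in proportion to current sensitivity, then habituate
        level = self.sensitivity.get(stimulus, 1.0)
        self.sensitivity[stimulus] = level * self.decay
        return level                # e.g., how strongly a leaf folds

if __name__ == "__main__":
    organism = Habituator()
    for trial in range(5):
        print("drop", trial + 1, "->", round(organism.stimulate("drop"), 2))
    print("shake   ->", round(organism.stimulate("shake"), 2))  # novel stimulus: full response
```

A device running this is “learning” in exactly the sense celebrated in the plant-intelligence literature; the question of whether it feels does not even arise.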

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, other vertebrates, and, more and more invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which is never absolute certainty, even for gravity: the probability is not strictly zero that apples will start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this second-order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong to withhold the benefit of the doubt from animals for which the probability of sentience is already so high, it would be wrong to extend it to plants, for which the probability of sentience is incomparably lower. What is at risk in attributing sentience where it is highly improbable is precisely the protection that the distinction would have afforded to the species for which the probability of sentience is far higher: the term just becomes moot, just another justification for the status quo (ruled by neither necessity nor compassion, but by taste and habit – and the wherewithal to keep it that way).

Learning and Feeling

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970s, they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work there.)

The NOVA video was interesting despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which I found really irritating and intrusive). The content was worthwhile once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity to learn – a capacity shared by biological organisms, artificial devices, and running computational algorithms.

The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”).

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.
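By way of illustration, here is an equally minimal toy (again my own, not anyone’s robot or published model) of a complete sense-transduce-act loop: a “sensor” converts light into a number, that number is transduced into a motor command, and an “effector” moves the device toward the brighter side. It detects and does; nothing in it feels.

```python
# Toy sense-transduce-act loop: light is transduced into numbers, numbers into
# a motor command, and the command into movement toward the brighter side.
# Detecting and doing, with no feeling anywhere in the loop.

import random

def light_at(position, source=10.0):
    # ambient light falls off with distance from the source (arbitrary units)
    return 1.0 / (1.0 + abs(source - position))

def run(steps=20):
    position = 0.0
    for _ in range(steps):
        left = light_at(position - 0.5)            # sensor reading to the left
        right = light_at(position + 0.5)           # sensor reading to the right
        command = 1.0 if right > left else -1.0    # transduction: readings -> motor command
        position += command * random.uniform(0.5, 1.5)  # effector: move
    return position

if __name__ == "__main__":
    print("final position:", round(run(), 2))  # ends up hovering near the source at 10.0
```

Add a line or two that adjusts the step size on the basis of past readings and the device “learns” as well; nothing about feeling has been added.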

And one of the things that sensors and effectors can do is learn, which is to change what they do, and what they can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can be done (or will eventually be shown to be doable) by artificial devices, and it can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is not really feeling, any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or being wet.

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime molds — amoebas that can transition between two states, single-celled and multicellular — are extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not every organism (or device) that can merely learn.

SIGNALS AND SENTIENCE

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them, for their own advantage (if they eat the plant), or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics that of another, poisonous species: a signal that deters predators that have learned that prey with that coloration is often poisonous.

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms except plants must eat other living organisms to survive; plants are the only exception, because they can photosynthesize with just light, water, CO2 and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA‐based chemical signalling: the co‐ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.

SIGNAUX ET SENTIENCE

Oui, les plantes peuvent produire des sons lorsqu’elles sont stressées par la chaleur, la sécheresse ou les dommages.

Elles peuvent également produire des sons lorsqu’elles sont agitées par le vent et lorsque leurs fruits tombent au sol.

Elles peuvent également produire des vues lorsque leurs feuilles se déploient et lorsqu’elles fleurissent.

Et elles peuvent aussi produire des parfums.

Et, oui, les animaux peuvent détecter ces sons, ces vues, et ces odeurs, et ils peuvent les utiliser, pour leur propre avantage (s’ils mangent la plante) ou pour un avantage mutuel (s’ils sont des pollinisateurs).

Les plantes peuvent également produire des signaux chimiques, pour la signalisation à l’intérieur de la plante, ainsi que pour la signalisation entre les plantes.

Les animaux (y compris les humains) peuvent produire des signaux internes, d’une partie de leur système immunitaire à une autre, ou d’une partie de leur cerveau à une autre partie, ou à leurs muscles ou à leur système immunitaire.

Les déplacements sismiques (tremblements de terre) peuvent être détectés par les animaux ainsi que par les machines.

Les phéromones peuvent être produites par les sécrétions humaines, et elles peuvent être détectées par d’autres humains, qui y réagissent (sans les sentir).

L’univers est plein de « signaux », dont la plupart ne sont ni détectés ni produits par des organismes vivants, végétaux ou animaux.

Les organismes vivants et les machines non vivantes peuvent « détecter » et réagir aux signaux, qu’ils soient internes ou externes ; mais seuls les organismes sentients peuvent les ressentir.

Pour ressentir des signaux, il ne suffit pas d’être vivant, de les détecter et d’y réagir ; il faut un organe du ressenti : un système nerveux.

La plupart des signaux produits par les organismes vivants ne sont pas non plus intentionnels ; pour qu’un signal soit intentionnel, il faut que le producteur puisse ressentir qu’il le produit ; cela aussi exige un organe du ressenti.

Le stress est un état interne qui signale des dommages dans un organisme vivant ; mais dans un organisme non sentient, le stress n’est pas un état ressenti.

Les papillons ont un organe du ressenti ; ils sont sentients.

Certaines espèces de papillons ont évolué une coloration qui imite la coloration d’une autre espèce vénéneuse, un signal qui dissuade les prédateurs qui ont appris que c’est souvent toxique.

Les prédateurs ressentent ce signal; les papillons qui le produisent ne le ressentent pas.

L’évolution darwinienne ne ressent pas non plus ; c’est juste un mécanisme non sentient par lequel les gènes qui encodent les traits qui aident un organisme à survivre et à se reproduire sont transmis à sa progéniture.

Les papillons, bien que sentients, ne signalent pas intentionnellement leur couleur dissuasive à leurs prédateurs.

Les plantes qui signalent par le son, la vue ou l’odeur, à elles-mêmes ou aux autres, ne le font pas non plus intentionnellement.

Tous les organismes vivants, à l’exception des plantes, doivent manger d’autres organismes vivants pour survivre. Les seules exceptions sont les plantes, qui peuvent effectuer la photosynthèse avec juste de la lumière, du CO2 et des minéraux.

Mais tous les organismes vivants ne sont pas sentients.

Il n’y a pas de preuve que les plantes soient sentientes, même si elles sont vivantes, et produisent, détectent et réagissent aux signaux.

Il leur manque un organe de ressenti, un système nerveux.

Les véganes ont besoin de manger pour survivre.

Mais ils n’ont pas besoin de manger des organismes qui ressentent.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA‐based chemical signalling: the co‐ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.

What Matters

Based on my last few years’ experience in teaching my McGill course on human cognition and consciousness, I now regret that I had previously been so timid in that course about pointing out the most fundamental bioethical point there is — the basis of all morality, of all notions of right and wrong, good and bad; indeed the basis of the fact that anything matters at all. I think it leads quite naturally to the nutritional points some want to convey, but starting from the bioethical side and then moving to the human health benefits. (Bioethics is not “politics”!)


Biological organisms are living beings. Some (not all) living beings (probably not plants, nor microbes, nor animals with no nervous system) are also sentient beings. That means they are not just alive, surviving and reproducing; they also feel.


And with feeling comes the capacity to be hurt. Chairs & tables, glaciers & shorelines, and (probably) plants & microbes can be damaged, but they cannot be hurt. Only sentient beings can be hurt because it feels like something to be hurt.


Most organisms are heterotrophic, meaning that they have to consume other organisms in order to survive. (The exceptions are autotrophs like green plants, algae and photosynthetic bacteria.)


This means that nature is full of conflicts of vital (life-or-death) interests: predator vs. prey. If the prey is sentient (i.e., not a plant), this means that the predator has to harm the prey in order to survive (by killing and eating it) — and the prey has to harm the predator to survive (by fighting back or escaping, depriving the predator of food).


It also has to be pointed out that there is no point trying to make conflicts of vital interest into a moral issue. They are a biological reality — a matter of biological necessity, a biological imperative — for heterotrophic organisms. And there is no right or wrong or choice about it: The survival of one means the non-survival of the other, as a matter of necessity.


But now comes the unique case of the human species, which is sentient and also, like all heterotrophic species, a predator. Its prey are plants (almost certainly insentient) and animals (almost certainly sentient). But unlike obligate carnivores (like the felids), humans also have a choice. They can survive, in full health, as either carnivores or herbivores, or both. We are facultative omnivores.


The primates probably evolved from earlier herbivore/insectivore species, but there is no doubt that most primates, including the great apes, are also able to eat small mammals, and sometimes do. Our own species’ evolutionary history diverged from this mostly herbivore origin; we became systematic meat hunters; and there is no doubt that that conferred an adaptive advantage on our species, not just in getting food but also in evolving some of the cognitive traits and the large brain that are unique to our species.


Far fewer of our ancestors would have survived if we had not adapted to hunting. They did it out of necessity, a biological imperative — just as it was under the pressure of a biological imperative that our ancestors, especially children, evolved a “sweet tooth,” a predilection for sugar: sugar was rare, and it was important to consume as much of it as we could whenever we could get it, because we had many predators and needed the energy to escape. By the same token, our predilection for aggression and violence, toward other species as well as our own, was adaptive in our ancestral environment.


But in our current environment many of these ancestral predilections are no longer necessary, and indeed some of them have become (mildly) maladaptive: sugar is now abundant (whereas predators are almost nonexistent), and our predilection for it, when unchecked, has become an important cause of dental cavities, hyperactivity, obesity and diabetes (though not maladaptive enough to kill us, or to prevent enough of us from reproducing, to eliminate the underlying genes from our gene pool). Our predilection for aggression and violence, when unchecked, is leading to ever more deadly forms of warfare and devastation (but not deadly enough, yet).


And in the same way, our unchecked taste for animal protein has led to industrial production of livestock, water depletion, air pollution, climate change, antibiotic overuse (creating superbugs), and a large variety of human ailments (on which others are more expert than I). But the point is that we have retained our hominid capacity to survive, in full health, without animal protein. We are, and always have been, facultative omnivores — with two metabolic modes, herbivore and omnivore — that could adapt to different environments.


So far, I’ve only mentioned the negative consequences, for us, of consuming animal protein, along with the positive consequences, for us, of not consuming it.
But let me not minimize the moral/bioethical aspect. Even if, setting aside the climatic aspects, the direct health benefits of our no longer eating meat are, for us, only mild to moderate, the harm and hurt of our continuing to eat meat are, for our sentient victims, monstrous.


And it should not be left unsaid that the clinical hallmark of the psychopath is being unmoved, in pursuing what they want, by the fact that getting it hurts others, even when what they want is not a vital necessity. That is why it is so important that people be fully informed that eating meat is not necessary for human health, and that it causes untold suffering to other sentient beings. Because most people are not, and do not want to be, psychopaths.