Chomsky on Consciousness

Re: https://www.youtube.com/watch?v=vLuONgFbsjw

I only watched the beginning, but I think I got the message. Let me preface this by saying: yes, I have come to understand the problem of “consciousness” through the lens of practical ethics. But suspend judgement: it’s not just a matter of overgeneralizing one’s culinary druthers.


Prelude. First, an important aside about Tom Nagel’s influential article. As often happens in philosophy, words throw you. Tom should never have entitled (or, for pedants, “titled”) it “What is it like to be a bat?” but rather “What does it feel like to be a bat?”

We’d have been spared so much silliness (still ongoing). It would have brought to the fore that the problem of consciousness is not the metaphysical problem of explaining “What kind of ‘stuff’ is consciousness?” but the hard problem of explaining (causally) “How and why do organisms feel?”

The capacity to feel is a biological trait, like the capacity to fly. But flying is observable, feeling is not. Yet that’s not the “hard problem.” That’s “just” the “other minds problem.” Just a puzzle among fellow gentleman-philosophers, about certainty — but an existential agony for those sentient species that we assume do not feel, and treat accordingly, when in fact they do feel.

Easy Sequel. Noam is mistaken to liken the hard problem to the Newtonian problem of explaining the laws of motion. Motion (of which flying is an example) is observable; feeling is not. But, again, that is not the hard problem. Quarks are not observable; but without inferring that big (observable) protons are really composed of little (unobservable) quarks, we cannot explain protons. So quarks cannot be seen, but they can be inferred from what can be seen, and they play an essential causal (or functional) role in the (current) explanation of protons.

Feeling cannot be observed (by anyone other than the feeler – this is the “1st person perspective” that Nagel extolled and interviewer Richard Brown obnoxiously tries to foist on Chomsky). 

But even though feeling cannot be observed, it can be inferred. We know people feel; we know our dog feels. Uncertainty only becomes nontrivial when we get down to the simplest invertebrates, microbes and plants. (And, before anyone mentions it, we also know that rocks, rockets and [today’s] robots (or LaMDAs) don’t feel. Let’s not get into scholastic and sterile scepticism about what we can know “for sure.”)

So the “hard problem” of explaining, causally (functionally), how and why sentient organisms feel is hard precisely because of the “easy problem” of causally explaining observable capacities of organisms, like moving, flying, learning, remembering, reasoning, speaking. 

It looks for all the world as if, once we have explained (causally) how and why organisms can do all the (observable) things they can do, feeling — unlike the quarks in the (subatomic) explanation of what protons can (observably) do, or the Newtonian explanation of what billiard balls can (observably) do — is causally superfluous.

Solving the “hard problem” would be simple: Just explain, causally, how it would be impossible for organisms to do all (or some) of the things they can do if they did not feel. In other words, explain the causal role of feeling (adaptively, if you like – after all, it’s an evolved, biological trait).

But before you go there, don’t try to help yourself to feeling as a fundamental force of nature, the way Newton helped himself to universal gravitation. Until further notice, feeling is just a tiny local product of the evolution of life in one small planet’s biosphere. And there’s no reason at all to doubt that, like any other biological trait, feeling is explainable in terms of the four fundamental forces (gravity, electromagnetism, and the “strong” and “weak” nuclear forces). No 5th psychokinetic force.

The problem is just coming up with the causal explanation. (Galen Strawson’s “panpsychism” is an absurd, empty – and I think incoherent – metaphysical fantasy that does not solve the “hard problem,” but just inflates it to cosmic proportions without explaining a thing.)

So Noam is mistaken that the hard problem is not a problem. But it’s not about explaining what it feels like to see a sunset. It is about explaining how and why (sentient) organisms feel anything at all.

See: “Minds, Brains and Turing” (2011) as well as the (Browned out) discussion.

First Personhood

Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation) and dispelling their ambiguities.

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

*IF* plants HAD feelings, how WOULD this affect our advocacy for animals?

That plants do feel is about as improbable as it is that animals (including humans) do not feel. (The only real uncertainty is about the very lowest invertebrates and microbes, at the juncture with plants, and evidence suggests that the capacity to feel depends on having a nervous system, and the behavioral capacities the nervous system produces.)

Because animals feel, it is unethical (in fact, monstrous) to harm them, when we have a choice. We don’t need to eat animals to survive and be healthy, so there we have a choice.

Plants almost certainly do not feel, but even if they did feel, we would have no choice but to eat them (until we can synthesize them), because otherwise we would die.