Learning and Feeling

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970’s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which I found really irritating and intrusive). The content held up once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (shared by biological organisms and artificial devices, as well as by running computational algorithms) to learn.

The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”). 

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do is learn, which is to change what they do, and what they can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can (or will eventually be shown to be able to) be done by artificial devices, and can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically) but feeling cannot be; or, otherwise put, simulated feeling is not really feeling, any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or wet. 

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, unicellular and multicellular — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not every organism (or device) that can learn.

Feeling vs. Moving

Sentience — which means the capacity to feel *something* (anything) — can differ in quality (seeing red feels different from hearing a cricket), or in intensity (getting kicked hard feels worse than getting lightly tapped) or in duration (now you feel, now you don’t).

But the difference between whether an organism has or lacks the capacity to feel anything at all, be it ever so faint or brief, is all-or-none, not a matter of degree along some sort of “continuum.”

Mammals, birds, reptiles, fish, and probably most or all invertebrates can feel (something, sometimes) — but not rhododendrons or Rhizobium radiobacter or Rutstroemia firma… or any of today’s robots.

There is no more absolute difference than that between a sentient entity and an insentient one, even if both are living organisms.

(Sedatives can dim feeling, general anesthesia can temporarily turn it off, and death or brain-death can turn it off permanently, but the capacity or incapacity to feel anything at all, ever, is all-or-none.)


Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic. 

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities. 

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.) 

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Haggling about the price

Anonymous: Trying again with Weinberg and “Third Thoughts”. Into chapter on so-called Symmetry, and on Higgs. For the life of me I can’t believe they are on about anything real — anything that has thingness, rather than just counters in a language with arcane rules and words whose meanings are dependent only on the rules of their use and relationships with each other. Yet I accept that through some long chain of reasoning within that language, at some point there are “observations” which, by a long chain of derivations and dependencies, are of something presumably real. And I accept that these people are way smarter than I and are not trying to fool people. This does little to shove a stable reality under their ideas, but just leaves me indifferent to particle physics. Did people feel like that about Newton or Galileo?

Individuals (my apple, “Charlie”) are categories that are grounded directly in my sensorimotor experience (though it requires an act of inductive faith, bolstered by the biologically inbuilt feeling I have that “Charlie” is the same “thing” across time – which is already an abstraction). 

There’s your thingness; as concrete as things ever get.

“Charlie” is red, which, too, is still a direct sensorimotor category, but already more of an abstraction from my direct experience, more of a leap of faith. “Colored” and “color” (and other “universals”) still more so. 

The moon and all of its properties too.

“True” and “truth” are likewise way out there, no longer directly sensorimotor, but a verbal combination of properties (likewise named categories) ultimately grounded in sensorimotor ones.

It begins to feel like the kind of faith we have in the theorems we prove in mathematics and algebra, far from the axioms we began with, but based on a faith (though not much immediate memory) in the rules of derivation that we learned, which make sense locally but become a blur when they stretch into a long chain we hardly remember.

So aren’t electrons, or quarks, or superstrings, or chirality or superposition just still more of the usual leaps of faith that all categorization and abstraction entail? Far from the inbuilt sense of “things” that Darwin helpfully underwrites in our perception – but no different from most of the other things we feel we know and understand across time.

So in the end it boils down not to the reality of things but, as usual, the “hard problem” of why anything feels like anything at all…

(Or just the usual (Shavian?) quip, about having established our profession, just haggling about the price…)