Ebenezer’s Yuletide Homily

This interview just reiterated how everyone is still gob-smacked at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, by filling in all the local bigram blanks, globally.

They’re right to be gob-smacked and non-plussed, because nobody expected it, and nobody has come close to explaining it. So, what do they do instead? Float a sci-fi icon – AY! GEE! EYE! – with no empirical substance or explanatory content, just: “It’s Coming!”… plus a lot of paranoia about what “they’re” not telling us, and who’s going to get super-rich out of it all, and whether it’s going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice and Bellicosity (GMB) — a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think’s really going on? Language itself has gone rogue, big-time. It’s Advanced alright; and General; but it’s anything but Intelligence, in our hands, and mouths.

And it started 299,999 B.C.

Spielberg’s AI: Another Cuddly No-Brainer

It would have been possible to make an intelligent film about Artificial Intelligence — even a cuddly-intelligent film. And without asking for too much from the viewer. It would just ask for a bit more thought from the maker. 

AI is about a “robot” boy who is “programmed” to love his adoptive human mother but is discriminated against because he is just a robot. I put both “robot” and “programmed” in scare-quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)

So, what is a “robot,” exactly? It’s a man-made system that can move independently. So, is a human baby a robot? Let’s say not, though it fits the definition so far! It’s a robot only if it’s not made in the “usual way” we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it’s still growing from “naturally” man-made cells, or clones of them.

What about a baby with most of its organs replaced by synthetic organs? Is a baby with a silicon heart part-robot? Does it become more robot as we give it more synthetic organs? What if part of its brain is synthetic, transplanted because of an accident or disease? Does that make the baby part robot? And if all the parts were swapped, would that make it all robot?

I think we all agree intuitively, once we think about it, that this is all very arbitrary: The fact that part or all of someone is synthetic is not really what we mean by a robot. If someone you knew were gradually replaced, because of a progressive disease, by synthetic organs, but they otherwise stayed themselves, at no time would you say they had disappeared and been replaced by a robot — unless, of course they did “disappear,” and some other personality took their place.

But the trouble with that, as a “test” of whether or not something has become a robot, is that exactly the same thing can happen without any synthetic parts at all: Brain damage can radically change someone’s personality, to the point where they are not familiar or recognizable at all as the person you knew — yet we would not call such a new personality a robot; at worst, it’s another person, in place of the one you once knew. So what makes it a “robot” instead of a person in the synthetic case? Or rather, what — apart from being made of (some or all) synthetic parts — is it to be a “robot”?

Now we come to the “programming.” AI’s robot-boy is billed as being “programmed” to love. Now exactly what does it mean to be “programmed” to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states — on/off, hot/cold, move/don’t-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don’t mean choosing to love one person rather than another (if we can “choose” such things at all, we get into the problem of “free will,” which is a bigger question than what we are considering here): I mean choosing to be able to love — or to feel anything at all: Is our species not “programmed” for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?

Let’s not get into technical questions about whether or not the genetic code that dictates our shape, our growth, and our other capacities is a “programme” in exactly the same sense as a computer programme. Either way, it’s obvious that a baby can no more “choose” to be able to feel than it can choose to be able to fly. So this is another non-difference between us and the robot-boy with the capacity to feel love.

So what is the relevant way in which the robot-boy differs from us, if it isn’t just that it has synthetic parts, and it isn’t because its capacity for feeling is any more (or less) “programmed” than our own is?

The film depicts how, whatever the difference is, our attitude to it is rather like racism. We mistreat robots because they are different from us. We’ve done that sort of thing before, because of the color of people’s skins; we’re just as inclined to do it because of what’s under their skins.

But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker’s premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

Now you might be inclined to say: This film doesn’t sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different! But the trouble is that it does not really make us reflect on racism, or even on what robots and programming are. It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already.

There is a scene where still-living but mutilated robots, with their inner metal showing, are scavenging among the dismembered parts of dead robots (killed in a sadistic rodeo) to swap for defective parts of their own. But if it weren’t for the metal, this could be real people looking for organ transplants. It’s the superficial cue from the metal that keeps us in a state of fuzzy ambiguity about what they are. The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film’s premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs.

There would have been ways to make it less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make — but that film would have had to be called “TT” (for Turing Test) rather than “AI” or “ET,” and it would have had to show (while keeping in touch with our “cuddly” feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about “robots.”

Instead, we have the robot-boy re-enacting Pinocchio’s quest to find the blue fairy to make him into a “real” boy. But we know what Pinocchio meant by “real”: He just wanted to be made of flesh instead of wood. Is this just a re-make of Pinocchio then, in metal? The fact that the movie is made of so many old parts in any case (Wizard of Oz, Revenge of the Zombies, ET, Star Wars, Waterworld, I couldn’t possibly count them all) suggests that that’s really all there was to it. Pity. An opportunity to build some real intelligence (and feeling) into a movie, missed.

“Body Awareness”?

Body Awareness: Scientists: Give Robots a Basic Sense of ‘Proprioception’

Giving a robot a body with the sensorimotor capacities needed to pick out and interact with the referents of its words in the world is grounding.

But that is completely (completely) trivial at toy-scale. 

No “grounds” whatsoever to talk about “body awareness” there. 

That would only kick in (if dopamine etc. is not needed too) at Turing-scale for robotic capacities. 

And if those include human language-learning capacity, that would pretty much have to be at LLM-scale too (not Siri-scale).

Now here’s the rub (and if you miss it, or dismiss it, we are talking past one another):

There is no way to ground an LLM top-down, dropping connections from individual words to their sensorimotor referents (like a man-o’-war jellyfish).

The only way to “align” sensorimotor grounding with the referents of words is bottom-up, from the ground up.

That does not mean every word has to be robotically grounded directly.

Once at least one Minimal Grounding Set (Minset) has been grounded directly (i.e., robotically), then (assuming the robot has the capacity for propositions too — not just for directly naming everything) all the rest of LLM-space can be grounded indirectly by verbal definition or description.

Everything is reachable indirectly by words, once you have a grounded Minset.

But that all has to be done bottom-up too.
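
As a toy illustration of that bottom-up claim (the mini-dictionary, the Minset and the function below are my own inventions, not drawn from the cited work), here is how indirect grounding can propagate once a small set of words is grounded directly:

```python
# Toy sketch: indirect grounding from a directly grounded Minset.
# The mini-dictionary and the Minset below are invented for illustration.

dictionary = {
    "zebra":   ["horse", "stripes"],
    "stripes": ["line", "pattern"],
    "pattern": ["line"],
    "horse":   ["animal", "big"],
}

# Words assumed to be grounded directly (robotically / sensorimotorically).
minset = {"line", "animal", "big"}

def ground_indirectly(dictionary, grounded):
    """Repeatedly ground any word whose defining words are all already grounded."""
    grounded = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in grounded and all(w in grounded for w in definition):
                grounded.add(word)  # grounded indirectly, by verbal definition alone
                changed = True
    return grounded

print(ground_indirectly(dictionary, minset))
# Grounding propagates bottom-up from the Minset; nothing is attached top-down.
```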

On the (too) many faces of consciousness

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Abstract:  Pereira, like some other authors, both likens and links consciousness to homeostasis. He accordingly recommends that some measures of homeostasis be taken into account as biomarkers for sentience in patients who are in a chronic vegetative state. He rightly defines “sentience” as the capacity to feel (anything). But the message is scrambled in an incoherent welter of weasel-words for consciousness and the notion (in which he is also not alone) that there are two consciousnesses (sentience and “cognitive consciousness”). I suggest that one “hard problem” of consciousness is already more than enough.

Homeostasis. A thermostat is a homeostat. It measures temperature and controls a furnace. It “defends” a set temperature by turning on the furnace when the temperature falls below the set point and turning the furnace off when it goes above the set point. This process of keeping variables within set ranges is called homeostasis. A higher order of homeostasis (“allostasis”) would be an integrative control system that received the temperatures from a fleet of different thermostats, furnaces and climates, doing computations on them all based on the available fuel for the furnaces and the pattern of changes in the temperatures, dynamically modifying their set-points so as to defend an overall optimum for the whole fleet.
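
To make the contrast concrete, here is a minimal sketch of a homeostat versus an allostat (the class names, thresholds and fuel rule are illustrative choices of mine, not anything proposed by Pereira):

```python
# Minimal sketch of a homeostat (thermostat) vs. an allostat (illustrative names only).

class Thermostat:
    """Homeostat: defends a single set-point by switching a furnace on or off."""
    def __init__(self, set_point):
        self.set_point = set_point

    def step(self, temperature):
        # Furnace on below the set-point, off above it.
        return temperature < self.set_point


class Allostat:
    """Higher-order controller: monitors a fleet of thermostats and shifts
    their set-points to defend an overall optimum for the whole fleet."""
    def __init__(self, thermostats, target_mean):
        self.thermostats = thermostats
        self.target_mean = target_mean

    def step(self, temperatures, fuel_remaining):
        mean_temp = sum(temperatures) / len(temperatures)
        for t in self.thermostats:
            if fuel_remaining < 10.0:            # fuel scarce: defend a lower optimum
                t.set_point -= 0.5
            elif mean_temp < self.target_mean:   # fleet running cold: raise set-points
                t.set_point += 0.5
        return [t.step(temp) for t, temp in zip(self.thermostats, temperatures)]


fleet = [Thermostat(20.0), Thermostat(18.0)]
controller = Allostat(fleet, target_mean=19.0)
print(controller.step([17.5, 19.0], fuel_remaining=50.0))  # [True, False]
```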

Biological organisms’ bodies have homeostatic and allostatic mechanisms of this kind, ranging over functions like temperature, heart-rate, blood-sugar, immune responses, breathing and balance – functions we would call “vegetative” – as well as functions we consider “cognitive,” such as attention and memory.

Sentience. Pereira (2021) rightly distinguishes between sentience itself – any state that it feels like something to be in – and cognition, which is also sentient, but involves more complicated thought processes, especially verbal ones. “Cognitive,” however, is often a weasel-word – one of many weasel-words spawned by our unsuccessful efforts to get a handle on “consciousness,” which is itself a weasel-word for sentience, which simply means feeling (feeling anything at all, from warm or tired or hungry, to angry or joyful or jealous, including what it feels like to see, hear, touch, taste or smell something, and what it feels like to understand (or think you understand) the meaning of a sentence or the proof of a theorem).

When Pereira speaks of sentience, however, he usually means it literally: a state is sentient if it is felt (e.g., pain); and an organism is sentient if it is able to feel. The main point of Pereira’s paper is that the tests for “consciousness” in human patients who are in a chronic vegetative state are insufficient. Such patients cannot make voluntary movements, nor can they understand or respond to language, but they still have sleeping and waking states, as well as reflexes, including eye-opening, chewing and some vocalizations; and their homeostatic vegetative functions persist. 

Vegetative states. Pereira insists, rightly, that if patients in a chronic vegetative state can still feel (e.g., pain) then they are still sentient. With Laureys (2019) and others, he holds that there are two networks for “awareness” (another weasel-word), one related to wakefulness and the other to “cognitive representations of the environment” (more weasel-words). Pereira accordingly recommends homeostasis-related measures such as lactate concentrations in the cerebrospinal fluid and astrocyte transport “waves” as biomarkers for sentience where behavioral tests and cerebral imagery draw a blank.

This seems reasonable enough. The “precautionary principle” (Birch 2017) dictates that the patients should be given every benefit of the doubt about whether they can feel. But what about these two networks of “awareness/consciousness/subjectivity” and their many other variants (“qualia” – “noetic” and “anoetic,” “internal” and “external”) and the very notion of two kinds of “consciousnesses”: “cognitive” and “noncognitive”?

Weasel-Words. Weasel-words are words used (deliberately or inadvertently) to mask redundancy or incoherence. They often take the form of partial synonyms that give the impression that there are more distinct entities or variables at issue than there really are. Such is the case with the Protean notion of “consciousness,” for which there are countless Mustelidian memes besides the ones I’ve already mentioned, including: subjective states, conscious states, mental states, phenomenal states, qualitative states, intentional states, intentionality, subjectivity, mentality, private states, 1st-person view, 3rd person view, contentful states, reflexive states, representational states, sentient states, experiential states, reflexivity, self-awareness, self-consciousness, sentience, raw feels, feeling, experience, soul, spirit, mind….

I think I know where the confusion resides, and also, if not when the confusion started, at least when it was compounded and widely propagated by Block’s (1995) target article in Behavioral and Brain Sciences, “On a confusion about a function of consciousness.” It was there that Block baptized an explicit distinction between (at least) two “consciousnesses”: “phenomenal consciousness” and “access consciousness”:

“Consciousness is a mongrel concept: there are a number of very different ‘consciousnesses.’ Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state. The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action.”

Feeling. What Block meant by “phenomenal consciousness” is obviously sentience; and what he meant to say (in a needlessly Nagelian way) is that there is something it feels like (not the needlessly vague and equivocal “is like” of Nagel 1974) to be in a sentient state. In a word, a sentient state is a felt state. (There is something it “is like” for water to be in a state of boiling: that something is what happens to the state of water above the temperature of 212 degrees Fahrenheit; but it does not feel like anything to the water to be in that state – only to a sentient organism that makes the mistake of reaching into the water.)

Block’s “access consciousness,” in contrast, is no kind of “consciousness” at all, although it is indeed also a sentient state — unless there are not only “a number of very different ‘consciousnesses’,” but an infinite number, one for every possible feeling that can be felt, from what it feels like for a human to hear an oboe play an A at 440 Hz, to what it feels like to hear an oboe play an A at 444 Hz, to what it feels like to know that Trump is finally out of the White House. 

Information. No, those are not different consciousnesses; they are differences in the content of “consciousness,” which, once de-weaseled, just means differences in what different feelings feel like. As to the “access” in the notion of an “access consciousness,” it just pertains to access to that felt content, along with the information (data) that the feelings accompany. Information, in turn, is anything that resolves uncertainty about what to do, among a finite number of options (Cole 1993).
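
As a worked example of resolving uncertainty among a finite number of options, in Shannon’s sense (my own illustration; the numbers are arbitrary):

```python
import math

def entropy_bits(n_options):
    """Uncertainty, in bits, over n equally likely options."""
    return math.log2(n_options)

before = entropy_bits(8)   # eight equally likely options: 3 bits of uncertainty
after = entropy_bits(2)    # a message leaving only two options: 1 bit remains
print(f"information conveyed: {before - after} bits")  # 2.0 bits
```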

Access. There is no such thing as an unfelt feeling (even though Pereira invokes some incoherent Freudian notions that became enshrined in the myth of an unconscious “mind,” which would have amounted to an unconscious consciousness): 

“Sentience can also be conceived as co-extensive to the (Freudian) ‘unconscious’… In Freudian psychotherapy, the ways of sentience are classically understood as unconscious processes of the Id and Ego that influence emotional states and behavior”

The contents we access are the contents that it feels like something to know (or believe you know, Burton 2008). If I am trying to remember someone’s name, then unless I can retrieve it I do not have access to it. If and when I retrieve it, not only do I have access to the name, so I can tell someone what it is, and phone the person, etc., but, as with everything else one knows, it feels like something to know that name, a feeling to which I had no “access” when I couldn’t remember the name. Both knowing the name and not knowing the name were sentient states; what changed was not a kind of consciousness, but access to information (data): to the content that one was conscious of, along with what it felt like to have that access. Computers have access to information, but because they are insentient, it does not feel like anything to “have” that information. And for sentient organisms it not only feels like something to see and hear, to be happy or sad, and to want or seek something, but also to reason about, believe, understand or know something.  

Problems: Easy, Hard and Other. Pereira unfortunately gets Chalmers’s “easy” and “hard” problems very wrong:

“the study of sentience is within the ‘Easy Problems’ conceived by Chalmers (1995), while explaining full consciousness is the ‘Hard Problem’.”

The “easy problem” of cognitive science is to explain causally how and why organisms can do all the (observable) things that organisms are able to do (from seeing and hearing and moving to learning and reasoning and communicating), including what their brains and bodies can do internally (i.e., their neuroanatomy, neurophysiology and neurochemistry, including homeostasis and allostasis). To explain these capacities is to “reverse-engineer them” so as to identify, demonstrate and describe the underlying causal mechanisms that produce the capacities (Turing 1950/2009). 

The “hard problem” is to explain how and why (sentient) organisms can feel. Feelings are not observable (to anyone other than the feeler); only the doings that are correlated with them are observable. This is the “other-minds problem” (Harnad 2016), which is neither the easy problem nor the hard problem. But Pereira writes:

“On the one hand, [1] sentience, as the capacity of controlling homeostasis with the generation of adaptive feelings, can be studied by means of biological structures and functions, as empirical registers of ionic waves and the lactate biomarker. On the other hand, [2] conscious first-person experiences in episodes containing mental representations and with attached qualia cannot be reduced to their biological correlates, neuron firings and patterns of connectivity, as famously claimed by Chalmers (1995).”

Integration. Regarding [1]: It is true that some thinkers (notably Damasio 1999) have tried to link consciousness to homeostasis, perhaps because of the hunch that many thinkers (e.g., Baars 1997) have had that consciousness, too, may have something to do with monitoring and integrating many distributed activities, just as homeostasis does. But I’m not sure others would go so far as to say that sentience (feeling) is the capacity to control homeostasis; in fact, it’s more likely to be the other way round. And the ionic waves and lactate biomarker sound like Pereira’s own conjecture.

Regarding [2], the Mustelidian mist is so thick that it is difficult to find one’s way: “conscious, first-person experiences” (i.e., “felt, feeler’s feelings”) is just a string of redundant synonyms: Unfelt states are not conscious states; can feelings be other than 1st-personal? How is an unfelt experience an experience? “Mental” is another weasel-word for felt. “Representation” is the most widely used weasel-word in cognitive science and refers to whatever state or process the theorist thinks is going on in a head (or a computer). The underlying intuition seems to be something like an internal pictorial or verbal “representation” or internal model of something else. So, at bottom “internally represented” just means internally coded, somehow. “Qualia” are again just feelings. And, yes, correlation is not causation; nor is it causal explanation. That’s why the hard problem is hard.

None of this is clarified by statements like the following (which I leave it to the reader to try to de-weasel):

“sentience can be understood as potential consciousness: the capacity to feel (proprioceptive and exteroceptive sensations, emotional feelings) and to have qualitative experiences, while cognitive consciousness refers to the actual experience of thinking with mental representations.”

Ethical Priority of Sentience. But, not to end on a negative note: not only is Pereira right to stress sentience and its biomarkers in the assessment of chronic vegetative states in human patients, but, inasmuch as he (rightly) classifies the capacity to feel pain as sentience (rather than as “cognitive consciousness”), Pereira also accords sentience the ethical priority that it merits wherever it occurs, whether in our own or any other sentient species (Mikhalevich & Powell 2020). 

References

Baars, B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press. 

Birch, J. (2017). Animal sentience and the precautionary principle. Animal Sentience 16(1).

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247. 

Burton, R.A. (2008). On Being Certain: Believing You Are Right Even When You’re Not. New York City: Macmillan Publishers/St. Martin’s Press.

Cole, C. (1993). Shannon revisited: Information in terms of uncertainty. Journal of the American Society for Information Science, 44(4), 204-211.

Damasio, A. (1999) The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt. 

Harnad, S. (2016). Animal sentience: The other-minds problem. Animal Sentience 1(1). 

Mikhalevich, I. and Powell, R. (2020). Minds without spines: Evolutionarily inclusive animal ethics. Animal Sentience 29(1).

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.

Pereira, O. (2021) The Role of Sentience in the Theory of Consciousness and Medical Practice. Journal of Consciousness Studies (Special Issue on “Sentience and Consciousness”) 

Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23-65). Springer, Dordrecht.

(1) On Weasel-Words Like “Conscious” & (2) On Word-Use, Symbol Grounding, and Wittgenstein

1. ON WEASEL-WORDS LIKE “CONSCIOUS”

ANON:  “Would you say that consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not? And of the states that can be felt, some are not felt and others are, the latter being conscious? In general: that there are mental states, some of which are conscious and others not?”

REPLY:

Cognition is the set of all cognitive capacities. Most of those capacities are capacities to DO things (remember, learn, speak). Some (but far from all) of those capacities are FELT, so the capacity to FEEL, too, is part of cognition.

“Consciousness” is a “state that it feels like something to be in”, i.e., a felt state.  A state that is not felt is not a conscious state. There are many weasel-words for this, each giving the impression that one is saying something further, whereas it can always be shown that they introduce either uninformative tautology or self-contradictory nonsense. To see this ultra-clearly, all one need do is replace the weasel-words (which include “mind,” “mental,” “conscious”, “aware”, “subjective”, “experience”, “qualia”, etc.)  by the straightforward f-words (in their adjective, noun, or verb forms):

“consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not?”

becomes:

feeling is the set of all feelings that pertain to felt states, some of which can be felt and others not?

and:

“[O]f the states that can be felt, some are not felt and others are [felt], the latter being conscious? In general: that there are mental states, some of which are conscious and others not [felt]”

becomes:

Of the states that can be felt, some are not felt and others are [felt], the latter being felt? In general: that there are felt states, some of which are felt and others not [felt]?

There is one non-weasel synonym for “feeling” and “felt” that one can use to speak of entities that are or are not capable of feeling, and that are or are not currently in a felt state, or to speak of a state, in an entity, that is not a felt state, and may even co-occur with a felt state. 

That non-weasel word is sentient (and sentience). That word is needed to disambiguate “feeling” when one speaks of a “feeling” organism that is not currently in a felt state, or that is in a felt state but also in many, many simultaneous unfelt states (as sentient organisms, awake and asleep, always are, e.g., currently feeling acute pain, but not feeling an ongoing chronic muscle spasm or acute hypoglycemia or covid immunopositivity, or even that they are currently slowing for a yellow traffic light).

2. ON WORD-USE, SYMBOL GROUNDING, AND WITTGENSTEIN

ANON: “Sensorimotor grounding is crucial, for many reasons. Wittgenstein provides reasons that are less discussed, probably because they require taking a step back from the usual presuppositions of cognitive science.”

REPLY:

Wittgenstein: “For a large class of cases–though not for all–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Correction: “For a small class of cases [“function words” 1-5%]–though not for most [“content words” 95-99%]–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Wikipedia definition of content and function words: “Content words, in linguistics, are words that possess semantic content and contribute to the meaning of the sentence in which they occur. In a traditional approach, nouns were said to name objects and other entities, lexical verbs to indicate actions, adjectives to refer to attributes of entities, and adverbs to attributes of actions. They contrast with function words, which have very little substantive meaning and primarily denote grammatical relationships between content words, such as prepositions (in, out, under etc.), pronouns (I, you, he, who etc.) and conjunctions (and, but, till, as etc.)”.

Direct Sensorimotor learning (and then naming) of categories is necessary to ground the “use” of category names in subject/predicate propositions, with meanings and truth values (T & F). Propositions (with the subject being a new, not yet grounded category name, and the predicate being a list of features that are already grounded category names for both the Speaker and the Hearer) can then be “used” to ground the new category indirectly, through words.
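
A minimal sketch of that Speaker/Hearer transaction might look as follows (the function, category names and feature lists are invented for illustration, not the model in the papers cited below):

```python
# Toy sketch of indirect grounding through a proposition (illustrative names only).

def teach_category(hearer_grounded, new_name, feature_names):
    """'new_name is: feature_1, feature_2, ...' grounds new_name for the Hearer
    only if every feature name in the predicate is already grounded for the Hearer."""
    if all(f in hearer_grounded for f in feature_names):
        hearer_grounded[new_name] = list(feature_names)  # grounded indirectly, by words
        return True
    return False  # the proposition cannot ground the new name yet

# The Hearer's directly grounded category names (assumed learned sensorimotorically).
hearer = {"horse": None, "stripes": None}

# "A zebra is a horse with stripes."
print(teach_category(hearer, "zebra", ["horse", "stripes"]))  # True: "zebra" is now grounded
print(teach_category(hearer, "unicorn", ["horse", "horn"]))   # False: "horn" is not yet grounded
```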

Blondin-Massé, Alexandre; Harnad, Stevan; Picard, Olivier; and St-Louis, Bernard (2013). Symbol Grounding and the Origin of Language: From Show to Tell. In Lefebvre, Claire; Cohen, Henri; and Comrie, Bernard (eds.) New Perspectives on the Origins of Language. Benjamins.

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PLoS ONE, 14(12), e0226000.

Thériault, C., Pérez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567.

Vincent-Lamarre, Philippe, Blondin Massé, Alexandre, Lopes, Marcus, Lord, Mélanie, Marcotte, Odile, & Harnad, Stevan (2016). The Latent Structure of Dictionaries. TopiCS in Cognitive Science, 8(3), 625–659.