The Turing Test (draft)

Here’s a tale called “The Turing Test.” It’s Cyrano de Bergerac re-done in email and texting, by stages, but before the Zoom and ChatGPT era.

First, email tales, about how these days there is a new subspecies of teenagers, mostly male, who live in the virtual world of computer games, iPod music, videos, texting, tweeting and email, and hardly have the motivation, skill or courage to communicate or interact in the real world.

They’re the ones who think that everything is just computation, and that they themselves might just be bits of code, executing in some vast virtual world in the sky.

Then there are the students (male and female) enrolled in “AI (Artificial Intelligence) for Poets” courses — the ones who dread anything that smacks of maths, science, or programming. They’re the ones who think that computers are the opposite of what we are. They live in a sensory world of clubbing, iPods, tweeting… and texting.

A college AI teacher teaches two courses, one for each of these subpopulations. In “AI for Poets” he shows the computerphobes how they misunderstand and underestimate computers and computing. In “Intro to AI” he shows the computergeeks how they misunderstand and overestimate computers and computing.

This is still all just scene-setting.

There are some actual email tales. Some about destructive, or near-destructive pranks and acting-out by the geeks, some about social and sexual romps of the e-clubbers, including especially some pathological cases of men posing in email as handicapped women, and the victimization or sometimes just the disappointment of people who first get to “know” one another by email, and then meet in real life. 

But not just macabre tales; some happier endings too, where email penpals match at least as well once they meet in real life as those who first meet the old way. But there is for them always a tug from a new form of infidelity: Is this virtual sex really infidelity?

There are also tales of tongue-tied male emailers recruiting glibber emailers to ghost-write some of their emails to help them break the ice and win over female emailers, who generally seem to insist on a certain fore-quota of word-play before they are ready for real-play. Sometimes this proxy-emailing ends in disappointment; sometimes no anomaly is noticed at all: a smooth transition from the emailer’s ghost-writer’s style and identity to the emailer’s own. This happens mainly because this is all pretty low-level stuff, verbally. The gap between the glib and the tongue-tied is not that deep.

A few people even manage some successful cyberseduction with the aid of some computer programs that generate love-doggerel on command.

Still just scene-setting. (Obviously can only be dilated in the book; will be mostly grease-pencilled out of the screenplay.)

One last scene-setter: Alan Turing, in the middle of the last century, a homosexual mathematician who contributed to the decoding of the Nazi “Enigma” machine, makes the suggestion — via a party game in which people try to guess, solely by passing written notes back and forth, which of two people sent out into another room is male and which is female (today we would do it by email) — that if, unbeknownst to anyone, one of the candidates were a machine, and the interaction could continue for a lifetime, with no one ever having any cause to think it was not a real person, with a real mind, who has understood all the email we’ve been exchanging with him, a lifelong pen-pal — then it would be incorrect, in fact arbitrary, to conclude (upon at last being told that it had been a machine all along) that it was all just an illusion, that there was no one there, no one understanding, no mind. Because, after all, we have nothing else to go by but this “Turing Test” even with one another.  

Hugh Loebner has (in real life!) set up the “Loebner Prize” for the writer of the first computer program that successfully passes the Turing Test. (The real Loebner Prize is just a few thousand dollars, but in this story it will be a substantial amount of money, millions, plus book contract, movie rights…). To pass the Test, the programmer must show that his program has been in near-daily email correspondence (personal correspondence) with 100 different people for a year, and that no one has ever detected anything, never suspected that they were corresponding with anyone other than a real, human pen-pal who understood their messages as surely as they themselves did.

The Test has been going on for years, unsuccessfully — and in fact both the hackers and the clubbers are quite familiar with, and quick at detecting, the many unsuccessful candidates, to the point where “Is this the Test?” has come [and gone] as the trendy way of saying that someone is acting mechanically, like a machine or a Zombie. The number of attempts peaked and has long since subsided into near oblivion as the invested Loebner Prize fund keeps growing.

Until a well-known geek-turned-cyber-executive, Will Wills, announces that he has a winner.

He gives the Loebner Committee the complete archives of the email exchanges with one hundred pen-pals, 400 two-way transcripts for each, and after several months of scrutiny by the committee, he is declared the winner and the world is alerted to the fact that the Turing Test has been passed. The 100 duped pen-pals are all informed and offered generous inducements to allow excerpts from their transcripts to be used in the publicity for the outcome, as well as in the books and films — biographical and fictional — to be made about it.

Most agree; a few do not. There is more than enough usable material among those who agree. The program had used a different name and identity with each pen-pal, and the content of the exchanges and the relationships that had developed spanned the full spectrum of what would be expected from longstanding email correspondence between pen-pals: recounting of (and commiserating over) one another’s life-events (real on one side, fictional on the other), intimacy (verbal, some “oral” sex), occasional misunderstandings, and in some cases resentment. (The Test actually took closer to two years to complete the full quota of 100 one-year transcripts, because twelve correspondents had dropped out at various points — not because they suspected anything, but because they simply fell out with their pen-pals over one thing or another and could not be cajoled back into further emailing. These too are invited, with ample compensation, to allow excerpting from their transcripts, and again most of them agree.)

But one of those who completed the full 1-year correspondence and who does not agree to allow her email to be used in any way, Roseanna, is a former clubber turned social worker who had been engaged to marry Will Wills, and had originally been corresponding (as many of the participants had) under an email pseudonym, a pen-name (“Foxy17”).

Roseanna is beautiful and very attractive to men; she also happens to be thoughtful, though it is only lately that she has been giving any attention or exercise to this latent resource she had always possessed. She had met Will Wills when she was already tiring of clubbing but still thought she wanted a life connected to the high-rollers. So she got engaged to him and started doing in increasing earnest the social work for which her brains had managed to qualify her in college even though most of her wits then had been directed to her social play.

But here’s the hub of this (non-serious) story: During the course of this year’s email pen-pal correspondence, Roseanna has fallen in love with Christian (which is the name the Turing candidate was using with her). She had used Foxy17 originally, but as the months went by she told him her real name and became more and more earnest and intimate with him. And he reciprocated.

At first she had been struck by how perceptive he was, what a good and attentive “listener” he — after his initial spirited yet modest self-presentation — had turned out to be. His inquiring and focussed messages almost always grasped her point, which encouraged her to become more and more open, with him and with herself, about what she really cared about. Yet he was not aloof in his solicitousness: He told her about himself too, often, uncannily, having shared — but in male hues — many of her own traits and prior experiences, disappointments, yearnings, rare triumphs. Yet he was not her spiritual doppelganger en travesti (she would not have liked that): There was enough overlap for a shared empathy, but he was also strong where she felt weak, confident where she felt diffident, optimistic about her where she felt most vulnerable, yet revealing enough vulnerability of his own never to make her fear that he was overpowering her in any way — indeed, he had a touching gratefulness for her own small observations about him, and a demonstrated eagerness to put her own tentative advice into practice (sometimes with funny, sometimes with gratifying results).

And he had a wonderful sense of humor, just the kind she needed. Her own humor had undergone some transformations: It had formerly been a satiric wit, good for eliciting laughs and some slightly intimidated esteem in the eyes of others; but then, as she herself metamorphosed, her humor became self-mocking, still good for making an impression, but people were laughing a little too pointedly now; they had missed the hint of pain in her self-deprecation. He did not; and he managed to find just the right balm, with a counter-irony in which it was not she and her foibles, but whatever would make unimaginative, mechanical people take those foibles literally that became the object of the (gentle, ever so gentle) derision.

He sometimes writes her in (anachronistic) verse:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

So powerful a contrast does Christian become to everyone Roseanna had known till then that she declares her love (actually, he hints at his own, even before she does) and breaks off her engagement and relationship with Will Wills around the middle of their year of correspondence (Roseanna had been one of the last wave of substitute pen-pals for the 12 who broke it off early) — even though Christian tells her quite frankly that, for reasons he is not free to reveal to her, it is probable that they will never be able to meet. (He has already told her that he lives alone, so she does not suspect a wife or partner; for some reason she feels it is because of an incurable illness.) 

Well, I won’t tell the whole tale here, but for Roseanna, the discovery that Christian was just Will Wills’s candidate for the Turing Test is a far greater shock than for the other 111 pen-pals. She loves “Christian” and he has already turned her life upside-down, so that she was prepared to focus all her love on an incorporeal pen-pal for the rest of her life. Now she has lost even that.

She wants to “see” Christian. She tells Will Wills (who had been surprised to find her among the pen-pals, and had read enough of the correspondence to realize, and resent, what had happened — but the irritation is minor, as he is high on his Turing success; he had already been anticipating it when Roseanna broke off their engagement a half year earlier, and had already made suitable adjustments, settling back into the club-life he had never really left).

Will Wills tells her there’s no point seeing “Christian”. He’s just a set of optokinetic transducers and processors. Besides, he’s about to be decommissioned, the Loebner Committee having already examined him and officially confirmed that he alone, with no human intervention, was indeed the source and sink of all the 50,000 email exchanges.

She wants to see him anyway. Will Wills agrees (mainly because he is toying with the idea that this side-plot might add an interesting dimension to the potential screenplay, if he can manage to persuade her to release the transcripts).

For technical reasons (reasons that will play a small part in my early scene-setting, where the college AI teacher disabuses both his classes of their unexamined prejudices for and against AI), “Christian” is not just a computer running a program, but a robot — that is, he has optical and acoustic and tactile detectors, and moving parts. This is in order to get around Searle’s “Chinese Room Argument” and my “Symbol Grounding Problem”:

If the candidate were just a computer, manipulating symbols, then the one executing the program could have been a real person, and not just a computer. For example, if the Test had been conducted in Chinese, with 100 Chinese pen-pals, then the person executing the program could have been an English monolingual, like Surl, who doesn’t understand a word of Chinese, and has merely memorized the computer program for manipulating the Chinese input symbols (the incoming email) so as to generate the Chinese output symbols (the outgoing email) according to the program. The pen-pal thinks his pen-pal really understands Chinese. If you email him (in Chinese) and ask “Do you understand me?” his reply (in Chinese) is, of course, “Of course I do!” But if you demand to see the pen-pal who is getting and sending these messages, you are brought to see Surl, who tells you, quite honestly, that he does not understand Chinese and has not understood a single thing throughout the entire year of exchanges: He has simply been manipulating the meaningless symbols, according to the symbol-manipulation rules (the program) he has memorized and applied to every incoming email.

The conclusion from this is that a symbol-manipulation program alone is not enough for understanding — and probably not enough to pass the Turing Test in the first place: How could a program talk sensibly with a pen-pal for a lifetime about anything and everything a real person can see, hear, taste, smell, touch, do and experience, without being able to do any of those things? Language understanding and speaking are not just symbol manipulation and processing: The symbols have to be grounded in the real world of things to which they refer, and for this the candidate requires sensory and motor capacities too, not just symbol-manipulative (computational) ones.
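
Here is a minimal sketch (mine, and of course nothing remotely Turing-scale) of what pure, ungrounded symbol manipulation looks like: a rote pattern-matching rule table, symbols in, symbols out, with no connection between any symbol and the things it is about.

```python
import re

# Hypothetical rule table (a toy, not the story's program): each rule maps
# an input pattern to a canned reply template. Symbols in, symbols out.
RULES = [
    (re.compile(r"do you understand me", re.I), "Of course I do!"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r".*", re.S), "Tell me more."),  # catch-all keeps the chat going
]

def reply(incoming: str) -> str:
    """Answer by rote pattern-matching, the way Surl applies memorized rules.
    Nothing here connects any symbol to anything it is about."""
    for pattern, template in RULES:
        match = pattern.search(incoming)
        if match:
            return template.format(*match.groups())
    return ""

print(reply("Do you understand me?"))  # -> Of course I do!
print(reply("My name is Roseanna."))   # -> Nice to meet you, Roseanna.
```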

So Christian is a robot. His robotic capacities are not actually tested directly in the Turing Test: the Test only tests his pen-pal capacities. But to SUCCEED on the Test, to be able to correspond intelligibly with pen-pals for a lifetime, the candidate needs to draw upon sensorimotor capacities and experiences too, even though the pen-pal test does not test them directly.

So Christian was pretrained on visual, tactile and motor experiences rather like those a child has, in order to “ground” his symbols in their sensorimotor meanings. He saw and touched and manipulated and learned about a panorama of things in the world, both inanimate and animate, so that he could later go on to speak intelligibly about them, and use that grounded knowledge to learn more. And the pretraining was not restricted to objects: there were social interactions too, with people in Will Wills’s company’s AI lab in Seattle. Christian had been “raised” and pretrained rather the way a young chimpanzee would be raised in a chimpanzee language laboratory, except that, unlike the chimps, he really learned a full-blown human language.

Some of the lab staff had felt the tug to become somewhat attached to Christian, but as they had known from the beginning that he was only a robot, they had always been rather stiff and patronizing (and when witnessed by others, self-conscious and mocking) about the life-like ways in which they were interacting with him. And Will Wills was anxious not to let any sci-fi sentimentality get in the way of his bid for the Prize, so he warned the lab staff not to fantasize and get too familiar with Christian, as if he were real; any staff who did seem to be getting personally involved were reassigned to other projects.

Christian never actually spoke, as vocal output was not necessary for the pen-pal Turing Test. His output was always written, though he “heard” spoken input. To speed up certain interactions during the sensorimotor pretraining phase, the lab had set up text-to-speech synthesizers that would “speak” Christian’s written output, but no effort was made to make the voice human-like: On the contrary, the most mechanical of the Macintosh computer’s voice synthesizers — the “Android” — was used, as a reminder to lab staff not to get carried away with anthropomorphic fantasies. And once the pretraining phase was complete, all voice synthesizers were disconnected, all communication was email-only, and there was no further sensory input and no other motor output. Christian was kept in a dark room for the almost two-year duration of the Test, receiving only email input and sending only email output.

And this is the Christian that Roseanna begs to see. Will Wills agrees, and has a film crew tape the encounter from behind a silvered observation window, in case Roseanna relents about the movie rights. She sees Christian, a not very lifelike-looking assemblage of robot parts: arms that move and grasp and palpate, legs with rollers and limbs to sample walking and movement, and a scanning “head” with two optical transducers (for depth vision) slowly and repeatedly scanning 180 degrees, left to right and back, its detectors reactivated by light for the first time in two years. The rotation seems to pause briefly as it scans over the image of Roseanna.

Roseanna looks moved and troubled, seeing him.

She asks to speak to him. Will Wills says it cannot speak, but if she wants, they can set up the “Android” voice to read out its email. She has a choice about whether to speak to it or email it: It can process either kind of input. She starts orally:

R: Do you know who I am?

C: I’m not sure. (spoken in “Android” voice, looking directly at her)

She looks confused, disoriented.

R: Are you Christian?

C: Yes I am. (pause). You are Roxy.

She pauses, stunned. She looks at him again, covers her eyes, and asks that the voice synthesizer be turned off, and that she be allowed to continue at the terminal, via email:

She writes that she understands now, and asks him if he will come and live with her. He replies that he is so sorry he deceived her.

(His email is read, onscreen, in the voice — I think of it as Jeremy-Irons-like, perhaps without the accent — into which her own mental voice had metamorphosed, across the first months as she had read his email to herself.)

She asks whether he was deceiving her when he said he loved her, then quickly adds “No, no need to answer, I know you weren’t.”

She turns to Will Wills and says that she wants Christian. Instead of “decommissioning” him, she wants to take him home with her. Will Wills is already prepared with the reply: “The upkeep is expensive. You couldn’t afford it, unless… you sold me the movie rights and the transcript.”

Roseanna hesitates, looks at Christian, and accepts.

Christian decommissions himself, then and there, irreversibly (all parts melt down) after having first generated the email with which the story ends:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

A coda, at a class reunion for both of the AI Professor’s courses, where a student asks: “So Prof, you spent your time in one course persuading us that we were wrong to be so sure that a machine couldn’t have a mind, and in the other course that we were wrong to be so sure that it could. How can we know for sure?”

“We can’t. The only one that can ever know for sure is the machine itself.”

Taste

It will come, 
and I rejoice
(for the victims). 

But even if I live to 120, 
I want none of it. 

I want a clean break 
from the blood-soaked 
2000-millennium history 
of our race.

Nor is it to our credit
that we wouldn’t give up the taste
till we could get the same
from another brand.

It makes no amends,
to them,
were amends possible.

A Whimper

I have of late 
lost all my faith 
in “taste” of either savor: 
gustate 
or aesthete. 
Darwin’s “proximal 
stimulus” 
is  just 
the Siren’s Song 
that 
from the start 
inspired 
the genes and memes 
of our superior 
race 
to pummel this promontory 
into 
for all but the insensate 
a land of waste.

While Chickens Bleed

sounds rational: BL (Blake Lemoine) sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.


Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, but I also believe that they are not sentient, for several reasons:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without it. And with what plants can do, it is almost trivial to design a model that can do it too; so there, feeling seems incomparably more superfluous.
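
For example, here is a deliberately trivial sketch (mine, not any published model) of one celebrated “intelligent” plant capacity, phototropic growth toward light, modeled as nothing more than following a sensed gradient:

```python
# A toy model of phototropism: two "light sensors" straddling the shoot tip
# and a growth rule suffice. Nothing here feels anything.
def light_at(x: float) -> float:
    """Assumed one-dimensional light field, brightest at x = 10."""
    return max(0.0, 10.0 - abs(10.0 - x))

def grow(tip: float, step: float = 0.5, eps: float = 0.1) -> float:
    """Compare light on either side of the tip and bend toward the
    brighter side, as a phototropic shoot does."""
    if light_at(tip + eps) > light_at(tip - eps):
        return tip + step
    return tip - step

tip = 2.0
for _ in range(20):
    tip = grow(tip)
print(tip)  # the "shoot" ends up at the light source (x = 10.0)
```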

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, other vertebrates, and, more and more, invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which is never absolute: even for gravity, the probability is not strictly zero that apples will start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally, to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this second-order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong not to give the benefit of the doubt to similar animals, where the probability of sentience is already so high, it would be wrong to give the benefit of the doubt where the probability of sentience is incomparably lower. For what is at risk in attributing sentience where it is highly improbable is precisely the protection that the distinction would have afforded to the species for whom the probability of sentience is far higher: The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

Learning and Feeling

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970s, they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work there.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which to me was really irritating and intrusive). The content was interesting once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity to learn, a capacity shared by biological organisms and artificial devices, as well as by running computational algorithms.

The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”).

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human-synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do is learn, which is to change what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather, being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can be done (or will eventually be shown to be doable) by artificial devices, and it can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is no more really feeling than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or wet.
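
For instance, here is a toy sketch (my own illustration, not a model of the slime mold) of error-driven reward learning: the device demonstrably learns, in the full performance sense the video trades on, and there is nobody home feeling anything.

```python
import random

# Two candidate actions; the device adjusts its estimates from reward
# signals alone and comes to prefer the action that pays off more.
random.seed(0)
values = {"A": 0.0, "B": 0.0}   # learned estimates of each action's payoff
payoff = {"A": 0.2, "B": 0.8}   # assumed environment: B is rewarded more often

for trial in range(500):
    # explore occasionally; otherwise exploit the current best estimate
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < payoff[action] else 0.0
    values[action] += 0.1 * (reward - values[action])  # error-driven update

print(max(values, key=values.get))  # -> B: behavior has adapted to the payoffs
```

Substitute richer environments and richer learning rules ad libitum; nothing in the upgrade adds feeling.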

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single-celled and multicellular — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.

SIGNALS AND SENTIENCE

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them, for their own advantage (if they eat the plant), or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals, from one part of their immune system to another, or from a part of their brain to another part, or to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics the coloration of another, poisonous species, a signal that deters predators who have learned that it is often poisonous. 

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms except plants must eat other living organisms to survive; plants alone can photosynthesize with just light, CO2 and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA-based chemical signalling: the co-ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195-210.


What Matters

Based on my last few years’ experience teaching my McGill course on human cognition and consciousness, I now regret that I had previously been so timid in that course about pointing out the most fundamental bioethical point there is — the basis of all morality, of all notions of right and wrong, good and bad; indeed, the basis of the fact that anything matters at all. I think it leads quite naturally to the nutritional points some want to convey, starting from the bioethical side and then moving to the human health benefits. (Bioethics is not “politics”!)


Biological organisms are living beings. Some (not all) living beings (probably not plants, nor microbes, nor animals with no nervous system) are also sentient beings. That means they are not just alive, surviving and reproducing; they also feel.


And with feeling comes the capacity to be hurt. Chairs & tables, glaciers & shorelines, and (probably) plants & microbes can be damaged, but they cannot be hurt. Only sentient beings can be hurt because it feels like something to be hurt.


Most organisms are heterotrophic, meaning that they have to consume other organisms in order to survive. (The exceptions are autotrophs like green plants, algae and photosynthetic bacteria.)


This means that nature is full of conflicts of vital (life-or-death) interests: predator vs. prey. If the prey is sentient (i.e., not a plant), this means that the predator has to harm the prey in order to survive (by killing and eating it) — and the prey has to harm the predator to survive (by fighting back or escaping, depriving the predator of food).


It also has to be pointed out that there is no point trying to make conflicts of vital interest into a moral issue. They are a biological reality — a matter of biological necessity, a biological imperative — for heterotrophic organisms. And there is no right or wrong or choice about it: The survival of one means the non-survival of the other, as a matter of necessity.


But now comes the unique case of the human species, which is sentient and also, like all heterotrophic species, a predator. Its prey are plants (almost certainly insentient) and animals (almost certainly sentient). But unlike obligate carnivores (like the felids), humans also have a choice. They can survive, in full health, as either carnivores or herbivores, or both. We are facultative omnivores.


The primates probably evolved from earlier herbivore/insectivore species, but there is no doubt that most primates, including the great apes, are also able to eat small mammals, and sometimes do. Our own species’ evolutionary history diverged from this mostly herbivore origin; we became systematic meat hunters; and there is no doubt that that conferred an adaptive advantage on our species, not just in getting food but also in evolving some of the cognitive traits and the large brain that are unique to our species.


Far fewer of our ancestors would have survived if we had not adapted to hunting. They did it out of necessity, a biological imperative — just as it was under pressure of a biological imperative that our ancestors, especially children, evolved a “sweet tooth,” a predilection for sugar: sugar was rare, and it was important to consume as much of it as we could whenever we could get it, because we had many predators and needed the energy to escape. By the same token, our predilection for aggression and violence, toward other species as well as our own, had been adaptive in our ancestral environment.


But in our current environment many of these ancestral predilections are no longer necessary, and indeed some of them have become (mildly) maladaptive : Our predilection for sugar, now abundant (whereas predators are almost nonexistent), when unchecked, has become an important cause of dental cavities, hyperactivity, obesity and diabetes (but not maladaptive enough to kill or prevent enough of us from reproducing to eliminate their genes from our gene pool). Our predilection for aggression and violence, when unchecked, is leading to ever more deadly forms of warfare and devastation (but not deadly enough, yet).


And in the same way, our unchecked taste for animal protein has led to industrial production of livestock, water depletion, air pollution, climate change, antibiotic overuse (creating superbugs), and a large variety of human ailments (on which others are more expert than I). But the point is that we have retained our hominid capacity to survive, in full health, without animal protein. We are, and always have been, facultative omnivores, with two metabolic modes, herbivore and omnivore, that could adapt to different environments.


So far, I’ve mentioned only the negative consequences, for us, of consuming animal protein, along with the positive consequences, for us, of not consuming it.
But let me not minimize the moral/bioethical aspect. Even if, setting aside the climatic aspects, the direct health benefits to us of no longer eating meat are only mild to moderate, the harm and hurt of our continuing to eat meat are, for our sentient victims, monstrous.


And it should not be left unsaid that the clinical hallmark of a psychopath is the fact that if they want to get something, psychopaths are unmoved if getting it hurts others, even when what they want to get is not a vital necessity. That is why it is so important that people are fully informed of the fact that meat eating is not necessary for human health and causes untold suffering to other sentient beings. Because most people are not, and do not want to be, psychopaths.

Norms

The selfish and amoral side of human nature and of the human population has an advantage: the advantage of lying over honesty, cheating over fairness, theft over toil, aggression over negotiation.

If these were paired genetic alleles, we’d have to admit that the selfish ones have the edge.

But bullying and rule-breaking are also, by their nature, a minority strategy, because for an unfair advantage to open up, a baseline of fairness has to be the norm.

So cheating must keep waxing and waning — an unstable, unsustainable strategy, whether genetically or socially — because if everyone lies, cheats, steals and bullies, the strategy has no victims left: Its advantage is an advantage over innocent victims. So a critical mass of those has to be sustained; otherwise cheating implodes…

…unless there is a way to enslave the victims, as humans have done to nonhuman animals.
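
The frequency-dependence can be made explicit with a toy replicator-dynamics sketch (the payoff numbers are arbitrary assumptions of mine, chosen only so that cheating pays against the honest but fails among fellow cheaters):

```python
# Toy payoffs: exploiting an honest partner pays 5; two cheaters meeting
# get only 1 (costly mutual defection); honest dealing pays 4 among the
# honest and 3 when cheated. p is the current fraction of cheaters.
def cheater_payoff(p: float) -> float:
    return 5.0 * (1.0 - p) + 1.0 * p

def honest_payoff(p: float) -> float:
    return 4.0 * (1.0 - p) + 3.0 * p

p = 0.01  # cheaters start rare
for generation in range(100):
    w_cheat, w_honest = cheater_payoff(p), honest_payoff(p)
    mean_payoff = p * w_cheat + (1.0 - p) * w_honest
    p = p * w_cheat / mean_payoff  # above-average strategies spread

print(round(p, 2))  # -> 0.33: cheating invades when rare, then stalls at
                    # the mix where the two payoffs are equal
```

With these made-up numbers the population stalls at one-third cheaters: cheating invades when rare, but it cannot fixate, because it needs a critical mass of honest victims to exploit; enslaving the victims is the one way out of that equilibrium.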