A SCOTUS Hermeneutics Exercise for ChatGPT4

Re: “Supreme Betrayal: A Requiem for Section 3 of the Fourteenth Amendment,” by J. Michael Luttig & Laurence H. Tribe, The Atlantic, March 15, 2025

SH: [Query to ChatGPT4]: Could you please first summarize the following passage to show you have understood it, and then give some arguments to support that “2 + 2 = 4” is false and that “water is not wet” is true. 

“The needless, rushed, and astoundingly stupid decision of the US Supreme Court has made it crystal clear that, in incompetent (and malevolent) hands, interpreting the constitution has become like Microsoft trying to patch obsolescent and increasingly contradictory legacy code when what is really needed is a bottom-up rewrite. Holding back the ocean with sandbags comes to mind too. And of course, the infinite pliancy of Holy Writ, under the hermeneutics of zealots of any persuasion. Maybe language itself is to blame too. A good exercise for ChatGPT is to give it a true proposition, like “2+2=4” or “water is wet” and ask GPT to find an interpretation under which it is false. Not one, but a countless number of interpretations can always be found that will do the job. (Vice versa works for false propositions too.)”


The passage criticizes the U.S. Supreme Court’s decision, suggesting it reflects a lack of competence and possibly malevolence in interpreting the constitution, likening the situation to attempting to update outdated and conflicting software or trying to hold back the ocean with sandbags. It highlights the flexibility with which holy texts can be interpreted by zealots of any persuasion and suggests that language’s inherent ambiguity contributes to these problems. The author proposes a thought experiment for GPT, challenging it to reinterpret universally accepted truths (such as “2+2=4” or “water is wet”) as false, to illustrate the point that language and interpretation can significantly alter perceived truth.

To engage with the exercise proposed:

Arguing that “2 + 2 = 4” is False

In a non-standard mathematical system or under certain interpretations, “2 + 2 = 4” could be considered false. For instance, in modular arithmetic, which deals with remainders after division by a certain number (the modulus), “2 + 2” does not necessarily equal 4. Specifically, in modulo 2 arithmetic, 2 + 2 = 0, because 4 divided by 2 leaves a remainder of 0. This example demonstrates that, depending on the mathematical rules and systems we agree to use, the truth of “2 + 2 = 4” is not absolute but contingent upon those systems.
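The mod-2 reinterpretation is easy to verify mechanically. A minimal sketch in Python (the helper `add_mod` is illustrative, not part of the dialogue): the same expression, “2 + 2,” yields different results depending on which arithmetic system is stipulated.

```python
def add_mod(a, b, modulus):
    """Addition in the ring of integers modulo `modulus`."""
    return (a + b) % modulus

print(2 + 2)              # ordinary integer arithmetic: 4
print(add_mod(2, 2, 2))   # modulo 2: the remainder of 4 / 2 is 0
print(add_mod(2, 2, 3))   # modulo 3: the remainder of 4 / 3 is 1
```

The point survives the demonstration: the truth of “2 + 2 = 4” is relative to the system of rules under which the symbols are interpreted.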

Arguing that “Water is Not Wet” is True

The statement “water is not wet” can be supported by redefining or closely examining what it means to be “wet.” Wetness is a condition that applies to a solid object when it comes into contact with a liquid, such that the liquid adheres to the object’s surface in a layer. However, water itself is a liquid and does not become wet but instead causes other materials to become wet. From this perspective, saying “water is wet” is akin to saying “water makes water wet,” which is a misunderstanding of the conditions required for wetness. Thus, one could argue that water, in its capacity to induce wetness but not be in the state of wetness itself, is not wet.

SH: Very good. Both arguments are the prima facie ones I expected. Now could you please give one further argument in each case, to show that there’s not just one way to do it?


Certainly, let’s explore an additional argument for each proposition:

Further Argument that “2 + 2 = 4” is False

One might consider the realm of philosophical argumentation, specifically nominalism, which suggests that abstract objects like numbers do not exist outside of our minds. From this viewpoint, “2 + 2 = 4” is an agreement within a human-constructed symbolic system rather than an eternal truth. If one were to argue from a radically different conceptual framework—where the symbols “2”, “+”, and “4” have different meanings or where the concept of “addition” is not defined in the way we commonly understand it—the statement “2 + 2 = 4” could be seen as false. For instance, in a hypothetical culture where quantities are expressed not by exact numbers but by qualitative measures (much, little, enough), the precise operation “2 + 2 = 4” loses its meaning and thus its truth value could be considered as not true within that conceptual framework.

Further Argument that “Water is Not Wet” is True

Exploring the semantics of “wetness” can provide another avenue for arguing that “water is not wet.” One might argue that wetness is a perceptual phenomenon rather than an intrinsic property of substances. That is, an object is perceived as “wet” when it significantly alters our sensory experience of it, particularly through touch. Water itself does not undergo such a change in state; it is always in the same state. Therefore, it does not meet the perceptual criterion for wetness that a dry object does when it comes into contact with water. From this perspective, wetness is an emergent property that arises from the interaction between a liquid and a material that can absorb or be coated by the liquid. Since water itself cannot be coated by or absorb itself, it cannot be considered “wet” according to this definition of wetness as an emergent, perceptual property.

SH: Yes, and it can go on and on like that, with ever more fanciful interpretations, by adding more and more fanciful premises. Demagogues, devious lawyers, sophists, quacks, psychopaths, and theologians are particularly adept at this — and they all have the bottomless resources of language, rhetoric, and now LLMs to draw upon…

Ebenezer’s Yuletide Homily

This interview just reiterated how everyone is still gob-smacked at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, by filling in all the local bigram blanks, globally.

They’re right to be gob-smacked and non-plussed, because nobody expected it, and nobody has come close to explaining it. So, what do they do instead? Float a sci-fi icon – AY! GEE! EYE! – with no empirical substance or explanatory content, just: “It’s Coming!”… plus a lot of paranoia about what “they’re” not telling us, and who’s going to get super-rich out of it all, and whether it’s going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice and Bellicosity (GMB) — a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think’s really going on? Language itself has gone rogue, big-time. It’s Advanced alright; and General; but it’s anything but Intelligence, in our hands, and mouths.

And it started 299,999 B.C.

Spielberg’s AI: Another Cuddly No-Brainer

It would have been possible to make an intelligent film about Artificial Intelligence — even a cuddly-intelligent film. And without asking for too much from the viewer. It would just ask for a bit more thought from the maker. 

AI is about a “robot” boy who is “programmed” to love his adoptive human mother but is discriminated against because he is just a robot. I put both “robot” and “programmed” in scare-quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)

So, what is a “robot,” exactly? It’s a man-made system that can move independently. So, is a human baby a robot? Let’s say not, though it fits the definition so far! It’s a robot only if it’s not made in the “usual way” we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it’s still growing from “naturally” man-made cells, or clones of them.

What about a baby with most of its organs replaced by synthetic organs? Is a baby with a silicon heart part-robot? Does it become more robot as we give it more synthetic organs? What if part of its brain is synthetic, transplanted because of an accident or disease? Does that make the baby part robot? And if all the parts were swapped, would that make it all robot?

I think we all agree intuitively, once we think about it, that this is all very arbitrary: The fact that part or all of someone is synthetic is not really what we mean by a robot. If someone you knew were gradually replaced, because of a progressive disease, by synthetic organs, but they otherwise stayed themselves, at no time would you say they had disappeared and been replaced by a robot — unless, of course they did “disappear,” and some other personality took their place.

But the trouble with that, as a “test” of whether or not something has become a robot, is that exactly the same thing can happen without any synthetic parts at all: Brain damage can radically change someone’s personality, to the point where they are not familiar or recognizable at all as the person you knew — yet we would not call such a new personality a robot; at worst, it’s another person, in place of the one you once knew. So what makes it a “robot” instead of a person in the synthetic case? Or rather, what — apart from being made of (some or all) synthetic parts — is it to be a “robot”?

Now we come to the “programming.” AI’s robot-boy is billed as being “programmed” to love. Now exactly what does it mean to be “programmed” to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states — on/off, hot/cold, move/don’t-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don’t mean choosing to love one person rather than another (if we can “choose” such things at all, we get into the problem of “free will,” which is a bigger question than what we are considering here): I mean choosing to be able to love — or to feel anything at all: Is our species not “programmed” for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?

Let’s not get into technical questions about whether or not the genetic code that dictates our shape, our growth, and our other capacities is a “programme” in exactly the same sense as a computer programme. Either way, it’s obvious that a baby can no more “choose” to be able to feel than it can choose to be able to fly. So this is another non-difference between us and the robot-boy with the capacity to feel love.

So what is the relevant way in which the robot-boy differs from us, if it isn’t just that it has synthetic parts, and it isn’t because its capacity for feeling is any more (or less) “programmed” than our own is?

The film depicts how, whatever the difference is, our attitude to it is rather like racism. We mistreat robots because they are different from us. We’ve done that sort of thing before, because of the color of people’s skins; we’re just as inclined to do it because of what’s under their skins.

But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker’s premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

Now you might be inclined to say: This film doesn’t sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different! But the trouble is that it does not really make us reflect on racism, or even on what robots and programming are. It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already.

There is a scene where still-living but mutilated robots, with their inner metal showing, are scavenging among the dismembered parts of dead robots (killed in a sadistic rodeo) to swap for defective parts of their own. But if it weren’t for the metal, this could be real people looking for organ transplants. It’s the superficial cue from the metal that keeps us in a state of fuzzy ambiguity about what they are. The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film’s premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs.

There would have been ways to make it less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make — but that film would have had to be called “TT” (for Turing Test) rather than “AI” or “ET,” and it would have had to show (while keeping in touch with our “cuddly” feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about “robots.”

Instead, we have the robot-boy re-enacting Pinocchio’s quest to find the blue fairy to make him into a “real” boy. But we know what Pinocchio meant by “real”: He just wanted to be made of flesh instead of wood. Is this just a re-make of Pinocchio then, in metal? The fact that the movie is made of so many old parts in any case (Wizard of Oz, Revenge of the Zombies, ET, Star Wars, Waterworld, I couldn’t possibly count them all) suggests that that’s really all there was to it. Pity. An opportunity to build some real intelligence (and feeling) into a movie, missed.

Atlantic Monthly: Thoughtless Thoughts About the Right to Die

I couldn’t agree less with David Brooks’s reflections on euthanasia in this Atlantic article.

In the macabre phallophagic episode, both participants were really deranged, and should have been institutionalized. It had nothing whatsoever to do with euthanasia. That would only be brought in by a polemicist.

But the author’s exalted extolling of the sanctity and “dignity” of human life was polemics too. Nothing to do with the suffering and compassion that are the only things that matter in the question of euthanasia, but appealing instead to the supernatural dogma of creeds (which can be used to justify anything and the opposite of anything).

The mention of biology and evolution was also stunningly superficial. Yes, evolution is the source of the selfish-genetic indifference to the treatment and fate of strangers or competitors, but also the source of parental and familial and social empathy, and even a sense of fairness. And in all social species, not just human.  

And when we turn to the human culture that spawned Holy Writ and holy wars and racism and slavery and sexism and nationalism and pogroms and genocide and capitalism, there’s not much empathy to be found there either; more the opposite. We give our token charitable tax write-offs to the poor and sick while we keep and spend incomparably more on ourselves than we could possibly need. And when others suffer, we want to deny them relief from that too, sanctimoniously pleading respect for the “dignity” of human life.

Human life? And I haven’t even mentioned the unspeakably monstrous cruelty with which we treat sentient nonhuman life. (And that too is blessed by our Holy Writ.)

Bravo to Canada if they put an end to any of that.

The Turing Test (draft)

Here’s a tale called “The Turing Test.” It’s Cyrano de Bergerac re-done in email and texting, by stages, but before the Zoom and ChatGPT era.

First, email tales, about how these days there is a new subspecies of teenagers, mostly male, who live in the virtual world of computer games, iPod music, videos, texting, tweeting and email, and hardly have the motivation, skill or courage to communicate or interact in the real world.

They’re the ones who think that everything is just computation, and that they themselves might just be bits of code, executing in some vast virtual world in the sky.

Then there are the students (male and female) enrolled in “AI (Artificial Intelligence) for Poets” courses — the ones who dread anything that smacks of maths, science, or programming. They’re the ones who think that computers are the opposite of what we are. They live in a sensory world of clubbing, iPod, tweeting… and texting.

A college AI teacher teaches two courses, one for each of these subpopulations. In “AI for Poets” he shows the computerphobes how they misunderstand and underestimate computers and computing. In “Intro to AI” he shows the computergeeks how they misunderstand and overestimate computers and computing.

This is still all just scene-setting.

There are some actual email tales. Some about destructive, or near-destructive pranks and acting-out by the geeks, some about social and sexual romps of the e-clubbers, including especially some pathological cases of men posing in email as handicapped women, and the victimization or sometimes just the disappointment of people who first get to “know” one another by email, and then meet in real life. 

But not just macabre tales; some happier endings too, where email penpals match at least as well once they meet in real life as those who first meet the old way. But there is for them always a tug from a new form of infidelity: Is this virtual sex really infidelity?

There are also tales of tongue-tied male emailers recruiting glibber emailers to ghost-write some of their emails to help them break the ice and win over female emailers, who generally seem to insist on a certain fore-quota of word-play before they are ready for real-play. Sometimes this proxy-emailing ends in disappointment; sometimes no anomaly is noticed at all: a smooth transition from the emailer’s ghost-writer’s style and identity to the emailer’s own. This happens mainly because this is all pretty low-level stuff, verbally. The gap between the glib and the tongue-tied is not that deep.

A few people even manage some successful cyberseduction with the aid of some computer programs that generate love-doggerel on command.

Still just scene-setting. (Obviously can only be dilated in the book; will be mostly grease-pencilled out of the screenplay.)

One last scene-setter: Alan Turing, in the middle of the last century, a homosexual mathematician who contributed to the decoding of the Nazi “Enigma” machine, makes the suggestion — via a party game in which people try to guess, solely by passing written notes back and forth, which of two people sent out into another room is male and which is female (today we would do it by email) — that if, unbeknownst to anyone, one of the candidates were a machine, and the interaction could continue for a lifetime, with no one ever having any cause to think it was not a real person, with a real mind, who has understood all the email we’ve been exchanging with him, a lifelong pen-pal — then it would be incorrect, in fact arbitrary, to conclude (upon at last being told that it had been a machine all along) that it was all just an illusion, that there was no one there, no one understanding, no mind. Because, after all, we have nothing else to go by but this “Turing Test” even with one another.  

Hugh Loebner has (in real life!) set up the “Loebner Prize” for the writer of the first computer program that successfully passes the Turing Test. (The real LP is just a few thousand dollars, but in this story it will be a substantial amount of money, millions, plus book contract, movie rights…). To pass the Test, the programmer must show that his program has been in near-daily email correspondence (personal correspondence) with 100 different people for a year, and that no one has ever detected anything, never suspected that they were corresponding with anyone other than a real, human penpal who understood their messages as surely as they themselves did.

The Test has been going on for years, unsuccessfully — and in fact both the hackers and the clubbers are quite familiar with, and quick at detecting, the many unsuccessful candidates, to the point where “Is this the Test?” has come [and gone] as the trendy way of saying that someone is acting mechanically, like a machine or a Zombie. The number of attempts has peaked and has long subsided into near oblivion as the invested Loebner Prize fund keeps growing. 

Until a well-known geek-turned cyber-executive, Will Wills, announces that he has a winner.

He gives to the Loebner Committee the complete archives of the email exchanges with the one hundred pen-pals, 400 2-way transcripts for each, and after several months of scrutiny by the committee, he is declared the winner and the world is alerted to the fact that the Turing Test has been passed. The 100 duped pen-pals are all informed and offered generous inducements to allow excerpts from their transcripts to be used in the publicity for the outcome, as well as in the books and films — biographical and fictional — to be made about it.

Most agree; a few do not. There is more than enough usable material among those who agree. The program had used a different name and identity with each pen-pal, and the content of the exchanges and the relationships that had developed had spanned the full spectrum of what would be expected from longstanding email correspondence between penpals: recounting (and commiserating) about one another’s life-events (real on one side, fictional on the other), intimacy (verbal, some “oral” sex), occasional misunderstandings and in some cases resentment. (The Test actually took closer to two years to complete the full quota of 100 1-year transcripts, because twelve correspondents had dropped out at various points — not because they suspected anything, but because they simply fell out with their pen-pals over one thing or another and could not be cajoled back into further emailing: These too are invited, with ample compensation, to allow excerpting from their transcripts, and again most of them agree.)

But one of those who completed the full 1-year correspondence and who does not agree to allow her email to be used in any way, Roseanna, is a former clubber turned social worker who had been engaged to marry Will Wills, and had originally been corresponding (as many of the participants had) under an email pseudonym, a pen-name (“Foxy17”).

Roseanna is beautiful and very attractive to men; she also happens to be thoughtful, though it is only lately that she has been giving any attention or exercise to this latent resource she had always possessed. She had met Will Wills when she was already tiring of clubbing but still thought she wanted a life connected to the high-rollers. So she got engaged to him and started doing in increasing earnest the social work for which her brains had managed to qualify her in college even though most of her wits then had been directed to her social play.

But here’s the hub of this (non-serious) story: During the course of this year’s email penpal correspondence, Roseanna has fallen in love with Christian (which is the name the Turing candidate was using with her): she had used Foxy17 originally, but as the months went by she told him her real name and became more and more earnest and intimate with him. And he reciprocated.

At first she had been struck by how perceptive he was, what a good and attentive “listener” he — after his initial spirited yet modest self-presentation — had turned out to be. His inquiring and focussed messages almost always grasped her point, which encouraged her to become more and more open, with him and with herself, about what she really cared about. Yet he was not aloof in his solicitousness: He told her about himself too, often, uncannily, having shared — but in male hues — many of her own traits and prior experiences, disappointments, yearnings, rare triumphs. Yet he was not her spiritual doppelganger en travesti (she would not have liked that): There was enough overlap for a shared empathy, but he was also strong where she felt weak, confident where she felt diffident, optimistic about her where she felt most vulnerable, yet revealing enough vulnerability of his own never to make her fear that he was overpowering her in any way — indeed, he had a touching gratefulness for her own small observations about him, and a demonstrated eagerness to put her own tentative advice into practice (sometimes with funny, sometimes with gratifying results).

And he had a wonderful sense of humor, just the kind she needed. Her own humor had undergone some transformations: It had formerly been a satiric wit, good for eliciting laughs and some slightly intimidated esteem in the eyes of others; but then, as she herself metamorphosed, her humor became self-mocking, still good for making an impression, but people were laughing a little too pointedly now; they had missed the hint of pain in her self-deprecation. He did not; and he managed to find just the right balm, with a counter-irony in which it was not she and her foibles, but whatever would make unimaginative, mechanical people take those foibles literally that became the object of the (gentle, ever so gentle) derision.

He sometimes writes her in (anachronistic) verse:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

So powerful a contrast does Christian become to everyone Roseanna had known till then that she declares her love (actually, he hints at his own, even before she does) and breaks off her engagement and relationship with Will Wills around the middle of their year of correspondence (Roseanna had been one of the last wave of substitute pen-pals for the 12 who broke it off early) — even though Christian tells her quite frankly that, for reasons he is not free to reveal to her, it is probable that they will never be able to meet. (He has already told her that he lives alone, so she does not suspect a wife or partner; for some reason she feels it is because of an incurable illness.) 

Well, I won’t tell the whole tale here, but for Roseanna, the discovery that Christian was just Will Wills’s candidate for the Turing Test is a far greater shock than for the other 111 pen-pals. She loves “Christian” and he has already turned her life upside-down, so that she was prepared to focus all her love on an incorporeal pen-pal for the rest of her life. Now she has lost even that.

She wants to “see” Christian. She tells Will Wills (who had been surprised to find her among the pen-pals, and had read enough of the correspondence to realize, and resent what had happened — but the irritation is minor, as he is high on his Turing success and had already been anticipating it when Roseanna had broken off their engagement a half year earlier, and had already made suitable adjustments, settling back into the club-life he had never really left).

Will Wills tells her there’s no point seeing “Christian”. He’s just a set of optokinetic transducers and processors. Besides, he’s about to be decommissioned, the Loebner Committee having already examined him and officially confirmed that he alone, with no human intervention, was indeed the source and sink of all the 50,000 email exchanges.

She wants to see him anyway. Will Wills agrees (mainly because he is toying with the idea that this side-plot might add an interesting dimension to the potential screenplay, if he can manage to persuade her to release the transcripts).

For technical reasons (reasons that will play a small part in my early scene-setting, where the college AI teacher disabuses both his classes of their unexamined prejudices for and against AI), “Christian” is not just a computer running a program, but a robot — that is, he has optical and acoustic and tactile detectors, and moving parts. This is in order to get around Searle’s “Chinese Room Argument” and my “Symbol Grounding Problem”:

If the candidate were just a computer, manipulating symbols, then the one executing the program could have been a real person, and not just a computer. For example, if the Test had been conducted in Chinese, with 100 Chinese penpals, then the person executing the program could have been an English monolingual, like Surl, who doesn’t understand a word of Chinese, and has merely memorized the computer program for manipulating the Chinese input symbols (the incoming email) so as to generate the Chinese output symbols (the outgoing email) according to the program. The pen-pal thinks his pen-pal really understands Chinese. If you email him (in Chinese) and ask: “Do you understand me?” his reply (in Chinese) is, of course, “Of course I do!” But if you demand to see the pen-pal who is getting and sending these messages, you are brought to see Surl, who tells you, quite honestly, that he does not understand Chinese and has not understood a single thing throughout the entire year of exchanges: He has simply been manipulating the meaningless symbols, according to the symbol-manipulation rules (the program) he has memorized and applied to every incoming email.

The conclusion from this is that a symbol-manipulation program alone is not enough for understanding — and probably not enough to pass the Turing Test in the first place: How could a program talk sensibly with a penpal for a lifetime about anything and everything a real person can see, hear, taste, smell, touch, do and experience, without being able to do any of those things? Language understanding and speaking is not just symbol manipulation and processing: The symbols have to be grounded in the real world of things to which they refer, and for this, the candidate requires sensory and motor capacities too, not just symbol-manipulative (computational) ones.

So Christian is a robot. His robotic capacities are actually not tested directly in the Turing Test. The Test only tests his pen-pal capacities.  But to SUCCEED on the test, to be able to correspond intelligibly with penpals for a lifetime, the candidate needs to draw upon sensorimotor capacities and experiences too, even though the pen-pal test does not test them directly.

So Christian was pretrained on visual, tactile and motor experiences rather like those the child has, in order to “ground” its symbols in their sensorimotor meanings. He saw and touched and manipulated and learned about a panorama of things in the world, both inanimate and animate, so that he could later go on and speak intelligibly about them, and use that grounded knowledge to learn more. And the pretraining was not restricted to objects: there were social interactions too, with people in Will Wills’s company’s AI lab in Seattle. Christian had been “raised” and pretrained rather the way a young chimpanzee would have been raised, in a chimpanzee language laboratory, except that, unlike chimps, he really learned a full-blown human language. 

Some of the lab staff had felt the tug to become somewhat attached to Christian, but as they had known from the beginning that he was only a robot, they had always been rather stiff and patronizing (and when witnessed by others, self-conscious and mocking) about the life-like ways in which they were interacting with him. And Will Wills was anxious not to let any sci-fi sentimentality get in the way of his bid for the Prize, so he warned the lab staff not to fantasize and get too familiar with Christian, as if he were real; any staff who did seem to be getting personally involved were reassigned to other projects.

Christian never actually spoke, as vocal output was not necessary for the penpal Turing Test. His output was always written, though he “heard” spoken input. To speed up certain interactions during the sensorimotor pretraining phase, the lab had set up text-to-voice synthesizers that would “speak” Christian’s written output, but no effort was made to make the voice human-like: On the contrary, the most mechanical of the Macintosh computer’s voice synthesizers — the “Android” — was used, as a reminder to lab staff not to get carried away with any anthropomorphic fantasies. And once the pretraining phase was complete, all voice synthesizers were disconnected, all communication was email-only, there was no further sensory input, and no other motor output. Christian was located in a dark room for the almost two-year duration of the Test, receiving only email input and sending only email output.

And this is the Christian that Roseanna begs to see. Will Wills agrees, and has a film crew tape the encounter from behind a silvered observation window, in case Roseanna relents about the movie rights.  She sees Christian, a not very lifelike looking assemblage of robot parts: arms that move and grasp and palpate, legs with rollers and limbs to sample walking and movement, and a scanning “head” with two optical transducers (for depth vision) slowly scanning repeatedly 180 degrees left to right to left, its detectors reactivated by light for the first time in two years. The rotation seems to pause briefly as it scans over the image of Roseanna.

Roseanna looks moved and troubled, seeing him.

She asks to speak to him. Will Wills says it cannot speak, but if she wants, they can set up the “Android” voice to read out its email. She has a choice about whether to speak to it or email it: It can process either kind of input. She first starts orally:

R: Do you know who I am?

C: I’m not sure. (spoken in “Android” voice, looking directly at her)

She looks confused, disoriented.

R: Are you Christian?

C: Yes I am. (pause). You are Roxy.

She pauses, stunned. She looks at him again, covers her eyes, and asks that the voice synthesizer be turned off, and that she be allowed to continue at the terminal, via email:

She writes that she understands now, and asks him if he will come and live with her. He replies that he is so sorry he deceived her.

(His email is read, onscreen, in the voice — I think of it as Jeremy-Irons-like, perhaps without the accent — into which her own mental voice had metamorphosed, across the first months as she had read his email to herself.)

She asks whether he was deceiving her when he said he loved her, then quickly adds “No, no need to answer, I know you weren’t.”

She turns to Will Wills and says that she wants Christian. Instead of “decommissioning” him, she wants to take him home with her. Will Wills is already prepared with the reply: “The upkeep is expensive. You couldn’t afford it, unless… you sold me the movie rights and the transcript.”

Roseanna hesitates, looks at Christian, and accepts.

Christian decommissions himself, then and there, irreversibly (all parts melt down) after having first generated the email with which the story ends:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

A coda, from the AI Professor to a class reunion for both his courses: “So Prof, you spent your time in one course persuading us that we were wrong to be so sure that a machine couldn’t have a mind, and in the other course that we were wrong to be so sure that it could. How can we know for sure?”

“We can’t. The only one that can ever know for sure is the machine itself.”


It will come, 
and I rejoice
(for the victims). 

But even if I live to 120, 
I want none of it. 

I want a clean break 
from the blood-soaked 
2000-millennium history 
of our race.

Nor is it to our credit
that we wouldn’t give up the taste
till we could get the same
from another brand.

It makes no amends,
to them,
were amends possible.

A Whimper

I have of late 
lost all my faith 
in “taste” of either savor: 
or aesthete. 
Darwin’s “proximal”
is just
the Siren’s Song 
from the start 
the genes and memes 
of our superior 
to pummel this promontory 
for all but the insensate 
a land of waste.

While Chickens Bleed

sounds rational: BL sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.


Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, but I also believe they are not sentient, for several reasons beyond wishful thinking:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). This demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel, nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without it. And for what plants can do, it is almost trivial to design a model that can do it too; so there feeling seems incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds and other vertebrates, and, more and more, invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (a probability that is never absolutely zero, even for gravity: apples may yet start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally, to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this second-order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong not to give the benefit of the doubt to similar animals where the probability of sentience is already so high, it would be wrong to give the benefit of the doubt where that probability is incomparably lower. What is at risk in attributing sentience where it is highly improbable is precisely the protection the distinction would have afforded to the species for whom the probability of sentience is far higher. The term then becomes moot, just another justification for the status quo (ruled by neither necessity nor compassion, but by taste and habit – and the wherewithal to keep it that way).