Understanding Understanding

HARNAD: Chatting with GPT is really turning out to be an exhilarating experience – especially for a skywriting addict like me! 

From the very beginning I had noticed that skywriting can be fruitful even when you are “jousting with pygmies” (as in the mid-1980s on comp.ai, where it first gave birth to the idea of “symbol grounding”).

Who would have thought that chatting with software that has swallowed a huge chunk of 2021-vintage text, and that can process and digest it coherently and interactively without understanding a word of it, could nevertheless provide infinitely richer (vegan) food for thought than anything ever before could – including other people, both pygmies and giants, and books – for users who do understand the words, because those words are grounded in their heads on the basis of learned sensorimotor features and language.

Have a look at this. (In the next installment I will move on to Noam Chomsky’s hunch about UG and the constraints on thinkable thought.)  Language Evolution and Direct vs Indirect Symbol Grounding

Anon: But doesn’t it worry you that GPT-4 can solve reasoning puzzles of a type it has never seen before without understanding a word?

HARNAD: I agree with all the security, social and political worries. But the purely intellectual capacities of GPT-4 (if they are separable from these other risks), and especially the kind of capacity you mention here, inspire not worry but wonder.

GPT is a smart super-book that contains (potentially) the entire scholarly and scientific literature, with which real human thinkers can now interact dynamically — to learn, and build upon. 

I hope the real risks don’t overpower the riches. (I’ll be exploiting those while they last… Have a look at the chat I linked if you want to see what I mean.)

Anon: No. It is not like a book. It converts symbolic information into interactions between features that it invents. From those interactions it can reconstruct what it has read as well as generate new stuff. That is also what people do. It understands in just the same way that you or I do.

HARNAD:  That’s all true (although “understands” is not quite the right word for what GPT is actually doing!). 

I’m talking only about what a real understander/thinker like you and me can use GPT for, if that user’s sole interest and motivation is in developing scientific and scholarly (and maybe even literary and artistic) ideas.

For such users GPT is just a richly (but imperfectly) informed talking book to bounce ideas off – with full knowledge that it does not understand a thing – but has access to a lot of current knowledge (as well as current misunderstandings, disinformation, and nonsense) at its fingertips.

Within a single session, GPT is informing me, and I am “informing” GPT, the way its database is informing it. 

Anon: You are doing a lot of hallucinating. You do not understand how it works so you have made up a story. 

HARNAD: I’m actually not quite sure why you would say I’m hallucinating. I’m simply describing what I actually use GPT for, and how. That doesn’t require any hypotheses from me as to how it’s doing what it’s doing.

I do know it’s based on unsupervised and supervised training on an enormous text base (plus some additional direct tweaking with reinforcement learning from human feedback). I know it creates a huge “parameter” space derived from word frequencies and co-occurrence frequencies. In particular, for every word within a context of words (or a sample), it somehow weights the probability of the next word, adjusting the parameters in its parameter space – which, I gather, includes updating the text it is primed to generate. I may have it garbled, but it boils down to what Emily Bender called a “stochastic parrot,” except that parrots are just echolalic, saying back, by rote, what they heard, whereas GPT generates and tests new texts under the constraint of not only “reductive paraphrasing and summary” but also inferencing.
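The next-word-probability idea gestured at above can be illustrated with a toy bigram model. This is a deliberate simplification: real LLMs such as GPT use transformer networks over subword tokens with billions of learned parameters, not raw co-occurrence counts. The corpus and function names here are invented for illustration only:

```python
from collections import Counter, defaultdict

def train_bigram(words):
    """Count, for each word, how often each other word immediately follows it."""
    counts = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 1
    return counts

def next_word_probs(counts, word):
    """Turn the raw co-occurrence counts for `word` into a probability table."""
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(next_word_probs(model, "the"))  # "the" is followed by "cat" 2/3 and "mat" 1/3 of the time
```

Real models replace these count tables with learned weights, but the underlying object is still a next-token probability distribution conditioned on the preceding text.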

No matter how much of this I have got technically wrong, nothing I said about how I’m using GPT-4 – what I’m doing with it and getting out of it – depends on the technical details that produce its performance. And it certainly doesn’t depend in any way on my assuming, or guessing, or inferring that GPT understands or thinks.

What I’m interested in is what is present in, and derivable from, huge bodies of human-generated text that makes it possible for GPT to perform as well as it does in extracting correct, usable information. It can also keep interacting with me, on a topic I know much better than GPT does: I provide it with more data, and it comes back with supplementary information that lets me continue developing my ideas. Books and web searches would not only have taken far too long for me to do this; it could not have been done by anyone without their fingers on everything that GPT has its (imperfect) fingers on.

And I don’t think the answer is just GPT’s data and analytic powers (which are definitely not thinking or understanding: I’m the only one doing all the understanding and thinking in our chats). It has something to do with what the structure of language itself preserves in these huge texts, ungrounded as it all is on the static page as well as in GPT.

Maybe my hunch is wrong, but I’m being guided by it so far with eyes wide open, no illusions whatsoever about GPT, and no need to know better the tech details of how GPT does what it does.

(Alas, in the current version, GPT-4 forgets what it has learned during a session after the session is closed, so it returns to its prior informational state in a new session, with its huge chunk of preprocessed text data, vintage 2021. But I expect that will soon be improved, allowing it to store user-specific data (with user permission) and to make all interactions with a user into one endless session; it will also have a continuously updated scholarly/scientific text base, parametrized, and open web access.)

But that’s just the perspective of the disinterested intellectual inquirer. I am sure you are right to worry about the (perhaps inevitable) potential for malevolent use for commercial, political, martial, criminal, cult and just plain idiosyncratic or sadistic purposes.

Anon: What I find so irritating is your confidence that it is not understanding despite your lack of understanding of how it works. Why are you so sure it doesn’t understand?

HARNAD:  Because (I have reasons to believe) understanding is a sentient state: “nonsentient” understanding is an empty descriptor. And I certainly don’t believe LLMs are sentient. 

Understanding, being a sentient state, is unobservable, but it does have observable correlates. With GPT (as in Searle’s Chinese Room) the only correlate is interpretability — interpretability by real, thinking, understanding, sentient people. But that’s not enough. It’s just a projection onto GPT by sentient people (biologically designed to make that projection with one another: “mind-reading”).

There is an important point about this in Turing 1950 and the “Turing Test.” Turing proposed the test with the following criterion. (Bear in mind that I am an experimental and computational psychobiologist, not a philosopher, and I have no ambition to be one. Moreover, Turing was not a philosopher either.)

(1) Observation. The only thing we have, with one another, by way of evidence that we are sentient, is what we can observe. 

(2) Indistinguishability. If ever we build (or reverse-engineer) a device that is totally indistinguishable in anything and everything it can do, observably, from any other real, thinking, understanding, sentient human being, then we have no empirical (or rational) basis for affirming or denying of the device what we cannot confirm or deny of one another.

The example Turing used was the purely verbal Turing Test. But that cannot produce a device that is totally indistinguishable from us in all the things we can do. In fact, the most elementary and fundamental thing we can all do is completely absent from the purely verbal Turing Test (which I call “T2”): T2 does not test whether the words and sentences spoken are connected to the referents in the world that they are allegedly about (e.g., “apple” and apples). To test that, indistinguishable verbal capacity (T2) is not enough. It requires “T3,” verbal and sensorimotor (i.e., robotic) capacity to recognize and interact with the referents of the words and sentences of T2, indistinguishably from the way any of us can do it.

So, for me (and, I think, Turing), without that T3 robotic capacity, any understanding in the device is just projection on our part.

On the other hand, if there were a GPT with robotic capacities that could pass T3 autonomously in the world (without cheating), I would fully accept (worry-free, with full “confidence”) Turing’s dictum that I have no grounds for denying of the GPT-T3 anything that I have no grounds for denying of any other thinking, understanding, sentient person.

You seem to have the confidence to believe a lot more on the basis of a lot less evidence. That doesn’t irritate me! It’s perfectly understandable, because our Darwinian heritage never prepared us for encountering thinking, talking, disembodied heads. Evolution is lazy, and not prescient, so it endowed us with mirror-neurons for mind-reading comprehensible exchanges of speech; and those mirror-neurons are quite gullible. (I feel the tug too, in my daily chats with GPT.)

[An example of cheating, by the way, would be to use telemetry, transducers and effectors from remote sensors which transform all sensory input from real external “referents” into verbal descriptions for the LLM (where is it located, by the way?), and transform the verbal responses from the LLM into motor actions on its remote “referent.”]

The Turing Test (draft)

Here’s a tale called “The Turing Test.” It’s Cyrano de Bergerac re-done in email and texting, by stages, but before the Zoom and ChatGPT era.

First, email tales, about how these days there is a new subspecies of teenagers, mostly male, who live in the virtual world of computer games, iPod music, videos, texting, tweeting and email, and hardly have the motivation, skill or courage to communicate or interact in the real world.

They’re the ones who think that everything is just computation, and that they themselves might just be bits of code, executing in some vast virtual world in the sky.

Then there are the students (male and female) enrolled in “AI (Artificial Intelligence) for Poets” courses — the ones who dread anything that smacks of maths, science, or programming. They’re the ones who think that computers are the opposite of what we are. They live in a sensory world of clubbing, iPods, tweeting… and texting.

A college AI teacher teaches two courses, one for each of these subpopulations. In “AI for Poets” he shows the computerphobes how they misunderstand and underestimate computers and computing. In “Intro to AI” he shows the computergeeks how they misunderstand and overestimate computers and computing.

This is still all just scene-setting.

There are some actual email tales. Some about destructive, or near-destructive pranks and acting-out by the geeks, some about social and sexual romps of the e-clubbers, including especially some pathological cases of men posing in email as handicapped women, and the victimization or sometimes just the disappointment of people who first get to “know” one another by email, and then meet in real life. 

But not just macabre tales; some happier endings too, where email penpals match at least as well once they meet in real life as those who first meet the old way. But there is for them always a tug from a new form of infidelity: Is this virtual sex really infidelity?

There are also tales of tongue-tied male emailers recruiting glibber emailers to ghost-write some of their emails to help them break the ice and win over female emailers, who generally seem to insist on a certain fore-quota of word-play before they are ready for real-play. Sometimes this proxy-emailing ends in disappointment; sometimes no anomaly is noticed at all: a smooth transition from the emailer’s ghost-writer’s style and identity to the emailer’s own. This happens mainly because this is all pretty low-level stuff, verbally. The gap between the glib and the tongue-tied is not that deep.

A few people even manage some successful cyberseduction with the aid of some computer programs that generate love-doggerel on command.

Still just scene-setting. (Obviously can only be dilated in the book; will be mostly grease-pencilled out of the screenplay.)

One last scene-setter: Alan Turing, in the middle of the last century, a homosexual mathematician who contributed to the decoding of the Nazi “Enigma” machine, makes the suggestion — via a party game in which people try to guess, solely by passing written notes back and forth, which of two people sent out into another room is male and which is female (today we would do it by email) — that if, unbeknownst to anyone, one of the candidates were a machine, and the interaction could continue for a lifetime, with no one ever having any cause to think it was not a real person, with a real mind, who has understood all the email we’ve been exchanging with him, a lifelong pen-pal — then it would be incorrect, in fact arbitrary, to conclude (upon at last being told that it had been a machine all along) that it was all just an illusion, that there was no one there, no one understanding, no mind. Because, after all, we have nothing else to go by but this “Turing Test” even with one another.  

Hugh Loebner has (in real life!) set up the “Loebner Prize” for the writer of the first computer program that successfully passes the Turing Test. (The real LP is just a few thousand dollars, but in this story it will be a substantial amount of money, millions, plus book contract, movie rights…). To pass the Test, the programmer must show that his programme has been in near-daily email correspondence (personal correspondence) with 100 different people for a year, and that no one has ever detected anything, never suspected that they were corresponding with anyone other than a real, human penpal who understood their messages as surely as they themselves did.

The Test has been going on for years, unsuccessfully — and in fact both the hackers and the clubbers are quite familiar with, and quick at detecting, the many unsuccessful candidates, to the point where “Is this the Test?” has come [and gone] as the trendy way of saying that someone is acting mechanically, like a machine or a Zombie. The number of attempts has peaked and has long subsided into near oblivion as the invested Loebner Prize fund keeps growing. 

Until a well-known geek-turned cyber-executive, Will Wills, announces that he has a winner.

He gives to the Loebner Committee the complete archives of the email exchanges of one hundred candidates, 400 2-way transcripts for each, and after several months of scrutiny by the committee, he is declared the winner and the world is alerted to the fact that the Turing Test has been passed. The 100 duped pen-pals are all informed and offered generous inducements to allow excerpts from their transcripts to be used in the publicity for the outcome, as well as in the books and films — biographical and fictional — to be made about it.

Most agree; a few do not. There is more than enough useable material among those who agree. The program had used a different name and identity with each pen-pal, and the content of the exchanges and the relationships that had developed had spanned the full spectrum of what would be expected from longstanding email correspondence between penpals: recounting (and commiserating) about one another’s life-events (real on one side, fictional on the other), intimacy (verbal, some “oral” sex), occasional misunderstandings and in some cases resentment. (The Test actually took closer to two years to complete the full quota of 100 1-year transcripts, because twelve correspondents had dropped out at various points — not because they suspected anything, but because they simply fell out with their pen-pals over one thing or another and could not be cajoled back into further emailing: These too are invited, with ample compensation, to allow excerpting from their transcripts, and again most of them agree.)

But one of those who completed the full 1-year correspondence and who does not agree to allow her email to be used in any way, Roseanna, is a former clubber turned social worker who had been engaged to marry Will Wills, and had originally been corresponding (as many of the participants had) under an email pseudonym, a pen-name (“Foxy17”).

Roseanna is beautiful and very attractive to men; she also happens to be thoughtful, though it is only lately that she has been giving any attention or exercise to this latent resource she had always possessed. She had met Will Wills when she was already tiring of clubbing but still thought she wanted a life connected to the high-rollers. So she got engaged to him and started doing in increasing earnest the social work for which her brains had managed to qualify her in college even though most of her wits then had been directed to her social play.

But here’s the nub of this (non-serious) story: During the course of this year’s email penpal correspondence, Roseanna has fallen in love with Christian (which is the name the Turing candidate was using with her): she had used Foxy17 originally, but as the months went by she told him her real name and became more and more earnest and intimate with him. And he reciprocated.

At first she had been struck by how perceptive he was, what a good and attentive “listener” he — after his initial spirited yet modest self-presentation — had turned out to be. His inquiring and focussed messages almost always grasped her point, which encouraged her to become more and more open, with him and with herself, about what she really cared about. Yet he was not aloof in his solicitousness: He told her about himself too, often, uncannily, having shared — but in male hues — many of her own traits and prior experiences, disappointments, yearnings, rare triumphs. Yet he was not her spiritual doppelganger en travesti (she would not have liked that): There was enough overlap for a shared empathy, but he was also strong where she felt weak, confident where she felt diffident, optimistic about her where she felt most vulnerable, yet revealing enough vulnerability of his own never to make her fear that he was overpowering her in any way — indeed, he had a touching gratefulness for her own small observations about him, and a demonstrated eagerness to put her own tentative advice into practice (sometimes with funny, sometimes with gratifying results).

And he had a wonderful sense of humor, just the kind she needed. Her own humor had undergone some transformations: It had formerly been a satiric wit, good for eliciting laughs and some slightly intimidated esteem in the eyes of others; but then, as she herself metamorphosed, her humor became self-mocking, still good for making an impression, but people were laughing a little too pointedly now; they had missed the hint of pain in her self-deprecation. He did not; and he managed to find just the right balm, with a counter-irony in which it was not she and her foibles, but whatever would make unimaginative, mechanical people take those foibles literally that became the object of the (gentle, ever so gentle) derision.

He sometimes writes her in (anachronistic) verse:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

So powerful a contrast does Christian become to everyone Roseanna had known till then that she declares her love (actually, he hints at his own, even before she does) and breaks off her engagement and relationship with Will Wills around the middle of their year of correspondence (Roseanna had been one of the last wave of substitute pen-pals for the 12 who broke it off early) — even though Christian tells her quite frankly that, for reasons he is not free to reveal to her, it is probable that they will never be able to meet. (He has already told her that he lives alone, so she does not suspect a wife or partner; for some reason she feels it is because of an incurable illness.) 

Well, I won’t tell the whole tale here, but for Roseanna, the discovery that Christian was just Will Wills’s candidate for the Turing Test is a far greater shock than for the other 111 pen-pals. She loves “Christian” and he has already turned her life upside-down, so that she was prepared to focus all her love on an incorporeal pen-pal for the rest of her life. Now she has lost even that.

She wants to “see” Christian. She tells Will Wills (who had been surprised to find her among the pen-pals, and had read enough of the correspondence to realize, and resent what had happened — but the irritation is minor, as he is high on his Turing success and had already been anticipating it when Roseanna had broken off their engagement a half year earlier, and had already made suitable adjustments, settling back into the club-life he had never really left).

Will Wills tells her there’s no point seeing “Christian”. He’s just a set of optokinetic transducers and processors. Besides, he’s about to be decommissioned, the Loebner Committee having already examined him and officially confirmed that he alone, with no human intervention, was indeed the source and sink of all the 50,000 email exchanges.

She wants to see him anyway. Will Wills agrees (mainly because he is toying with the idea that this side-plot might add an interesting dimension to the potential screenplay, if he can manage to persuade her to release the transcripts).

For technical reasons (reasons that will play a small part in my early scene-setting, where the college AI teacher disabuses both his classes of their unexamined prejudices for and against AI), “Christian” is not just a computer running a program, but a robot — that is, he has optical and acoustic and tactile detectors, and moving parts. This is in order to get around Searle’s “Chinese Room Argument” and my “Symbol Grounding Problem” :

If the candidate were just a computer, manipulating symbols, then the one executing the program could have been a real person, and not just a computer. For example, if the Test had been conducted in Chinese, with 100 Chinese penpals, then the person executing the program could have been an English monolingual, like Surl, who doesn’t understand a word of Chinese, and has merely memorized the computer program for manipulating the Chinese input symbols (the incoming email) so as to generate the Chinese output symbols (the outgoing email) according to the program. The pen-pal thinks his pen-pal really understands Chinese. If you email him (in Chinese) and ask: “Do you understand me?” his reply (in Chinese) is, of course, “Of course I do!”. But if you demand to see the pen-pal who is getting and sending these messages, you are brought to see Surl, who tells you, quite honestly, that he does not understand Chinese and has not understood a single thing throughout the entire year of exchanges: He has simply been manipulating the meaningless symbols, according to the symbol-manipulation rules (the program) he has memorized and applied to every incoming email.

The conclusion from this is that a symbol-manipulation program alone is not enough for understanding — and probably not enough to pass the Turing Test in the first place: How could a program talk sensibly with a penpal for a lifetime about anything and everything a real person can see, hear, taste, smell, touch, do and experience, without being able to do any of those things? Language understanding and speaking is not just symbol manipulation and processing: The symbols have to be grounded in the real world of things to which they refer, and for this, the candidate requires sensory and motor capacities too, not just symbol-manipulative (computational) ones.

So Christian is a robot. His robotic capacities are actually not tested directly in the Turing Test. The Test only tests his pen-pal capacities.  But to SUCCEED on the test, to be able to correspond intelligibly with penpals for a lifetime, the candidate needs to draw upon sensorimotor capacities and experiences too, even though the pen-pal test does not test them directly.

So Christian was pretrained on visual, tactile and motor experiences rather like those the child has, in order to “ground” its symbols in their sensorimotor meanings. He saw and touched and manipulated and learned about a panorama of things in the world, both inanimate and animate, so that he could later go on and speak intelligibly about them, and use that grounded knowledge to learn more. And the pretraining was not restricted to objects: there were social interactions too, with people in Will Wills’s company’s AI lab in Seattle. Christian had been “raised” and pretrained rather the way a young chimpanzee would have been raised, in a chimpanzee language laboratory, except that, unlike chimps, he really learned a full-blown human language. 

Some of the lab staff had felt the tug to become somewhat attached to Christian, but as they had known from the beginning that he was only a robot, they had always been rather stiff and patronizing (and when witnessed by others, self-conscious and mocking) about the life-like ways in which they were interacting with him. And Will Wills was anxious not to let any sci-fi sentimentality get in the way of his bid for the Prize, so he warned the lab staff not to fantasize and get too familiar with Christian, as if he were real; any staff who did seem to be getting personally involved were reassigned to other projects.

Christian never actually spoke, as vocal output was not necessary for the penpal Turing Test. His output was always written, though he “heard” spoken input. To speed up certain interactions during the sensorimotor pretraining phase, the lab had set up text-to-voice synthesizers that would “speak” Christian’s written output, but no effort was made to make the voice human-like: On the contrary, the most mechanical of the Macintosh computer’s voice synthesizers — the “Android” — was used, as a reminder to lab staff not to get carried away with any anthropomorphic fantasies. And once the pretraining phase was complete, all voice synthesizers were disconnected, all communication was email-only, there was no further sensory input, and no other motor output. Christian was located in a dark room for the almost two-year duration of the Test, receiving only email input and sending only email output.

And this is the Christian that Roseanna begs to see. Will Wills agrees, and has a film crew tape the encounter from behind a silvered observation window, in case Roseanna relents about the movie rights.  She sees Christian, a not very lifelike looking assemblage of robot parts: arms that move and grasp and palpate, legs with rollers and limbs to sample walking and movement, and a scanning “head” with two optical transducers (for depth vision) slowly scanning repeatedly 180 degrees left to right to left, its detectors reactivated by light for the first time in two years. The rotation seems to pause briefly as it scans over the image of Roseanna.

Roseanna looks moved and troubled, seeing him.

She asks to speak to him. Will Wills says it cannot speak, but if she wants, they can set up the “android” voice to read out its email. She has a choice about whether to speak to it or email it: It can process either kind of input. She first starts orally:

R: Do you know who I am?

C: I’m not sure. (spoken in “Android” voice, looking directly at her)

She looks confused, disoriented.

R: Are you Christian?

C: Yes I am. (pause). You are Roxy.

She pauses, stunned. She looks at him again, covers her eyes, and asks that the voice synthesizer be turned off, and that she be allowed to continue at the terminal, via email:

She writes that she understands now, and asks him if he will come and live with her. He replies that he is so sorry he deceived her.

(His email is read, onscreen, in the voice — I think of it as Jeremy-Irons-like, perhaps without the accent — into which her own mental voice had metamorphosed, across the first months as she had read his email to herself.)

She asks whether he was deceiving her when he said he loved her, then quickly adds “No, no need to answer, I know you weren’t.”

She turns to Will Wills and says that she wants Christian. Instead of “decommissioning” him, she wants to take him home with her. Will Wills is already prepared with the reply: “The upkeep is expensive. You couldn’t afford it, unless… you sold me the movie rights and the transcript.”

Roseanna hesitates, looks at Christian, and accepts.

Christian decommissions himself, then and there, irreversibly (all parts melt down) after having first generated the email with which the story ends:

I love thee
As sev’n and twenty’s cube root’s three.
If I loved thee more
Twelve squared would overtake one-forty-four.

That same ubiquitous Platonic force
That sets prime numbers’ unrelenting course
When its more consequential work is done
Again of two of us will form a one.

A coda, from the AI Professor to a class reunion for both his courses: “So Prof, you spent your time in one course persuading us that we were wrong to be so sure that a machine couldn’t have a mind, and in the other course that we were wrong to be so sure that it could. How can we know for sure?”

“We can’t. The only one that can ever know for sure is the machine itself.”

GPT as Syntactic Shadow-Puppetry

Pondering whether there is something non-arbitrary to pin down in the notion of “intelligence” (or “cognition”) is reminiscent of what philosophers tried (unsuccessfully) to do with the notion of “knowing” (or “cognizing”):

BELIEF: Do I know (cognize) that “the cat is on the mat” if I simply believe the cat is on the mat? 

No, the cat really has to be on the mat.

TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat?

No, I could be believing that it’s true for the wrong reasons, or by luck.

JUSTIFIED TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof that it’s on the mat?

No, the evidence could be unreliable or wrong, or the proof could be wrong or irrelevant.

VALID, JUSTIFIED, TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof that it’s on the mat, and neither the evidence nor the proof is unreliable or wrong, or otherwise invalid?

How do I know the justification is valid?

So the notion of “knowledge” is in the end circular.

“Intelligence” (and “cognition”) has the same affliction, and Shlomi Sher’s observation that we can always make it break down in GPT is also true of human intelligence: both are somehow built on sand.

Probably a more realistic notion of “knowledge” (or “cognition,” or “intelligence”) is that it is not only circular (i.e., auto-parasitic, like the words and their definitions in a dictionary) but also approximate. The approximation can be tightened as much as you like, but it’s still not exact or exhaustive. A dictionary cannot be infinite. A picture (or object) is always worth more than 1000++ words describing it.
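The dictionary-circularity point can be made concrete with a toy example (the miniature dictionary and helper function below are invented purely for illustration): every defining word is itself just another dictionary entry, so chasing definitions within the symbol system can never bottom out; it can only loop.

```python
# A toy dictionary in which every word is defined only by other dictionary words.
toy_dict = {
    "big": ["large"],
    "large": ["big"],
    "cat": ["small", "animal"],
    "small": ["not", "big"],
    "animal": ["living", "thing"],
    "living": ["not", "dead"],
    "dead": ["not", "living"],
    "not": ["negation"],
    "negation": ["not"],
    "thing": ["entity"],
    "entity": ["thing"],
}

# The dictionary is "closed": every defining word is itself an entry.
assert all(w in toy_dict for defn in toy_dict.values() for w in defn)

def find_cycle(dictionary, start):
    """Follow the first word of each definition until some word repeats."""
    seen, word = [], start
    while word not in seen:
        seen.append(word)
        word = dictionary[word][0]
    return seen[seen.index(word):] + [word]

print(find_cycle(toy_dict, "cat"))  # -> ['not', 'negation', 'not']
```

The cycle is the circularity point in miniature: within the symbol system, meanings are only ever defined in terms of other ungrounded symbols.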

Ok, so set aside words and verbal (and digital) “knowledge” and “intelligence”: Can nonverbal knowledge and intelligence do any better? Of course, there’s one thing nonverbal knowledge can do, and that’s to ground verbal knowledge by connecting the words in a speaker’s head to their referents in the world through sensorimotor “know-how.”

But that’s still just know-how. Knowing that the cat is on the mat is not just knowing how to find out whether the cat is on the mat. That’s just empty operationalism. Is there anything else to “knowledge” or “intelligence”?

Well, yes, but that doesn’t help either: Back to belief. What is it to believe that the cat is on the mat? Besides all the failed attempts to upgrade it to “knowing” that the cat is on the mat, which proved circular and approximate, even when grounded by sensorimotor means, it also feels like something to believe something. 

But that’s no solution either. The state of feeling something, whether a belief or a bee-sting, is, no doubt, a brain state. Humans and nonhuman animals have those states; computers, GPTs, and GPT-driven robots (so far) don’t.

But what if the artificial ones eventually did feel? What would that tell us about what “knowledge” or “intelligence” really are – besides FELT, GROUNDED, VALID, JUSTIFIED, TRUE VERBAL BELIEF AND SENSORIMOTOR KNOWHOW? (“FGVJTVBSK”)

That said, GPT is a non-starter, being just algorithm-tuned statistical figure-completions and extrapolations derived from an enormous ungrounded verbal corpus produced by human FGVJTVBSKs. It is a surprisingly rich database/algorithm combination that captures the structure of verbal discourse: the shape of the shadows of “knowledge,” “cognition,” “intelligence” – and, for that matter, “meaning” – that are reflected in the words and word-combinations produced by countless human FGVJTVBSKs. And they’re not even analog shadows…

Chats with GPT: On symbol-grounding and Turing-testing

(My own questions are in boldface. I italicize those passages in ChatGPT’s replies where two kinds of things inevitably begin to happen: (1) rote repetition of what it has already replied before, and (2) self-contradiction. Eventually it becomes circular and uninformative. I’m not saying that humans don’t do this too! But it’s all that ChatGPT can do.)

Who invented the symbol grounding problem?

The concept of the symbol grounding problem was first introduced by philosopher and cognitive scientist Jerry Fodor in the late 1980s.

In what publication did Fodor introduce the symbol grounding problem?

Jerry Fodor introduced the symbol grounding problem in his 1987 book “Modularity of Mind.”

Where in his 1987 book “Modularity of Mind” did Jerry Fodor introduce the symbol grounding problem?

Jerry Fodor introduced the symbol grounding problem in the first few chapters of his 1987 book “Modularity of Mind.” He discussed it as one of the central challenges in the field of cognitive science and artificial intelligence, and as a key issue that needs to be addressed in order to build truly intelligent machines.

Quote the passage in which Jerry Fodor introduced the symbol grounding problem in his 1987 book “Modularity of Mind.”

I’m sorry, I don’t have the full text of Jerry Fodor’s 1987 book “Modularity of Mind” available to me, so I can’t quote the exact passage in which he introduced the symbol grounding problem. However, the idea can be generally described as the problem of explaining how symbols (such as words or concepts) acquire meaning in the first place, and how they are anchored to sensory experiences and the physical world.

What does “anchored to sensory experiences and the physical world” mean?

“Anchored to sensory experiences and the physical world” refers to the idea that symbols (such as words or concepts) must have a direct connection to our sensory experiences and physical interactions in the world in order to have meaning. The symbol grounding problem asks how symbols can be connected to these experiences and interactions in a way that enables us to understand and use them effectively. In other words, how can symbols be “grounded” in our sensory experiences and physical reality, so that they have a clear and well-defined meaning that can be used in a meaningful way?

Can you give an example of a grounded symbol and how it is grounded?

Certainly! A classic example of a grounded symbol is the word “apple.” The word “apple” is grounded in our sensory experiences with the physical object that we call “apple.” When we see, touch, taste, or smell an apple, our sensory experiences provide a direct connection between the symbol “apple” and the physical object that it refers to. This connection is what gives the symbol “apple” its meaning. In other words, the symbol “apple” is grounded in our sensory experiences with the physical world.

So, when we use the word “apple,” we have a shared understanding of what it means because we have all had similar experiences with the physical object that the word refers to. This grounding in sensory experiences and physical interactions provides the basis for our ability to understand and use the symbol “apple” in a meaningful way.

Does the sensory experience have to be felt, or is it enough to just interact with the apple?

The sensory experience of an apple doesn’t have to be felt in the sense that one has to physically touch or taste it, although these are certainly sensory experiences that can contribute to grounding the symbol “apple.” The idea is that symbols are grounded in our sensory experiences with the physical world, and that these experiences can take many different forms. For example, seeing an apple, smelling an apple, hearing the sound it makes when bitten, etc. The sensory experiences that contribute to grounding a symbol can be direct or indirect, and they don’t all have to come from the same modality (e.g., vision, touch, hearing, etc.). The important thing is that they provide a direct connection between the symbol and the physical world, and that they enable us to understand and use the symbol in a meaningful way.

Can the symbols used by an insentient robot be grounded by its interactions with apples, even though the robot can only act but cannot feel?

In theory, yes, symbols used by an insentient robot could be grounded by its interactions with apples, even though the robot can only act and cannot feel. However, the nature of the grounding would be quite different from the grounding of symbols in human experience.

In human experience, symbols are grounded in our sensory experiences, emotions, and conceptual knowledge, which allow us to understand and use them in a flexible and context-sensitive way. For a robot, symbols would be grounded in its sensors and actuators, which enable it to interact with the physical world in a pre-programmed way. This would allow the robot to use symbols to perform specific tasks and make decisions based on its interactions with the physical world, but it would not allow the robot to understand symbols in the same way that humans do.

In essence, the grounding of symbols in human experience is much more rich and complex than the grounding of symbols in robot experience, because it involves the integration of many different forms of sensory and conceptual information, whereas the grounding of symbols in robots is limited to the information available to their sensors and actuators.

What is a robot?

A robot is a machine designed to perform tasks automatically. It can be controlled by a computer program, by a human operator, or a combination of both. Robots can be used for a variety of purposes, such as manufacturing, exploration, cleaning, and entertainment. They can be built in different shapes and sizes, and can have various sensors, actuators, and other components that enable them to perform specific functions. The field of robotics is concerned with the design, construction, operation, and use of robots.

What is the Turing Test?

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test was proposed by the British mathematician and computer scientist Alan Turing in 1950.

The basic idea behind the Turing Test is that a human evaluator engages in a natural language conversation with both a human and a machine, without knowing which is which. If the evaluator is unable to consistently distinguish the machine from the human, based solely on their responses in the conversation, then the machine is said to have passed the Turing Test and demonstrated human-level intelligence.

While the Turing Test is widely regarded as an important concept in the field of artificial intelligence and cognitive computing, it is also a subject of much debate and has been criticized on various grounds. Nevertheless, it remains an important benchmark for evaluating the progress of AI and its ability to generate human-like responses.

What is the difference between a chatbot, like chatGPT, which only has contact with words, and a chatbot that is also a robot, and can interact with the referents of words?

The main difference between a chatbot like ChatGPT, which only has contact with words, and a chatbot that is also a robot, and can interact with the referents of words, is the scope of their abilities and the nature of their interaction with the world.

ChatGPT is a language model that has been trained on a vast amount of text data, and it can generate text-based responses to questions and prompts. It can have conversations with users through text-based interfaces, like a chat interface or a command-line interface. However, ChatGPT does not have direct access to the physical world and does not have the ability to interact with the referents of the words it uses in its conversations.

On the other hand, a chatbot that is also a robot can interact with the physical world and the referents of the words it uses in its conversations. For example, a robot chatbot that is designed to interact with objects in a home environment could be programmed to understand and respond to commands like “turn on the light” or “get me a glass of water.” By having access to and the ability to interact with the physical world, the robot chatbot would be able to perform actions and produce responses that are grounded in its interactions with the physical world.

In essence, while both types of chatbots can have conversations and generate text-based responses, a chatbot that is also a robot has the added ability to interact with the physical world and to perform actions that are based on its understanding of the words and commands it receives.

REPEAT: What is the difference between a chatbot, like chatGPT, which only has contact with words, and a chatbot that is also a robot that can learn, and can interact with the referents of words?

The main difference between a chatbot like ChatGPT, which only has contact with words, and a chatbot that is also a robot that can learn and interact with the referents of words is the scope of their abilities and the nature of their interaction with the world.

ChatGPT is a language model that has been trained on a vast amount of text data, and it can generate text-based responses to questions and prompts. It can have conversations with users through text-based interfaces, like a chat interface or a command-line interface. However, ChatGPT does not have direct access to the physical world and does not have the ability to interact with the referents of the words it uses in its conversations.

On the other hand, a chatbot that is also a robot that can learn has the ability to learn from its interactions with the physical world and to adapt its behavior accordingly. For example, a robot chatbot that is designed to interact with objects in a home environment could be programmed to understand and respond to commands like “turn on the light” or “get me a glass of water.” Over time, the robot chatbot could learn from its experiences and make improvements to its behavior, such as being more efficient in finding and retrieving objects.

In essence, while both types of chatbots can have conversations and generate text-based responses, a chatbot that is also a robot that can learn has the added ability to interact with the physical world, to learn from its experiences, and to adapt its behavior based on its interactions with the world. This would allow it to perform actions and produce responses that are grounded in its interactions with the physical world, and to continually improve its ability to interact with the world over time.

What about a chatbot, like chatGPT that can pass the Turing Test and is also a robot, that can learn as well as interact with the referents of words?

A chatbot like ChatGPT that can pass the Turing Test and is also a robot that can learn as well as interact with the referents of words would represent a combination of advanced language processing capabilities and advanced robotics capabilities.

Such a chatbot would have the ability to generate human-like responses to questions and prompts, as demonstrated by its ability to pass the Turing Test. Additionally, it would have the ability to interact with the physical world and to learn from its experiences, much like a traditional robot. This would allow the chatbot to not only generate text-based responses that are grounded in its interactions with the physical world, but also to continually improve its ability to interact with the world over time.

This type of chatbot would represent a highly advanced artificial intelligence system that would combine the best of both language processing and robotics technologies. It would be capable of engaging in human-like conversations and interactions while also having the ability to physically interact with the world and to learn from its experiences, making it a highly versatile and adaptive AI system.

Would the words of a chatbot that could pass the Turing Test and was also a robot that could learn and interact with the referents of words be grounded?

Yes, the words of a chatbot that could pass the Turing Test and was also a robot that could learn and interact with the referents of words would likely be considered grounded.

In the context of the symbol grounding problem, grounded symbols are those that are linked to sensory experiences and the physical world. A chatbot that could pass the Turing Test, was also a robot, and could learn and interact with the referents of words would be capable of generating text-based responses that are grounded in its interactions with the physical world. For example, if the chatbot was programmed to interact with objects in a home environment, it could generate responses such as “I turned on the light” or “I retrieved a glass of water,” which are based on its physical interactions with the world.

In this way, the words of a chatbot that could pass the Turing Test and was also a robot that could learn and interact with the referents of words would be considered grounded because they are linked to its sensory experiences and interactions with the physical world.

What if this chatbot/robot left the kitchen and went out into the world, and kept passing the Turing Test?

If a chatbot/robot that could pass the Turing Test and was capable of learning and interacting with the referents of words were to leave the kitchen and go out into the world, it would likely continue to generate grounded text-based responses based on its interactions with the physical world.

As the chatbot/robot explored its environment, it would be able to interact with new objects, learn from its experiences, and generate new grounded text-based responses. For example, if the chatbot/robot encountered a new object, such as a tree, it might generate a response such as “I see a tree,” which is grounded in its sensory experience of seeing the tree.

In this way, even if the chatbot/robot left the kitchen and went out into the world, it would continue to generate grounded text-based responses based on its experiences and interactions with the physical world. This would allow it to remain a highly versatile and adaptive AI system, capable of engaging in human-like conversations and interactions while also having the ability to physically interact with the world and learn from its experiences.

What do you mean when you say this chatbot/robot was “grounded”?

When I say that the chatbot/robot was “grounded,” I mean that its symbols, or words, were linked to sensory experiences and the physical world.

The concept of grounding symbols in sensory experiences and the physical world is central to the symbol grounding problem, which is the question of how symbols used in thought and language can be linked to the world. A grounded symbol is one that is directly linked to sensory experiences and physical objects, such that its meaning is derived from its connections to these experiences and objects.

In the case of the chatbot/robot, its words were grounded because they were based on its interactions with the physical world. For example, its words such as “I see a tree,” were grounded in its sensory experience of seeing the tree. In this way, the chatbot/robot was grounded because its words were linked to its experiences and interactions with the physical world.

What do you mean when you say this chatbot/robot was “programmed”?

When I say that the chatbot/robot was “programmed,” I mean that it was designed and built to operate in a specific way by a set of instructions or algorithms. The chatbot/robot was created with a specific set of functions and abilities, such as the ability to interact with the physical world and learn from its experiences.

Programming refers to the process of writing code or software that specifies the behavior of a computer or robot. The code or software defines the inputs and outputs of the system, as well as the processes that the system will carry out to achieve its goals.

In the case of the chatbot/robot, its behavior was determined by the code or software that was written to control its operation. The programming defined its abilities and limitations, as well as the way that it would interact with the physical world.

Therefore, when I say that the chatbot/robot was programmed, I mean that it was created and built with a specific set of functions and abilities, determined by the code or software that was written to control its operation.

Wouldn’t a chatbot/robot in the world that could learn be changing as a result of what it learned? 

Yes, a chatbot/robot that has the ability to learn would change as a result of what it learned.

In machine learning, learning is the process by which a system improves its performance by processing data and adjusting its algorithms based on that data. In the case of a chatbot/robot, learning would involve adjusting its algorithms based on its interactions with the world, so that it could perform better in the future.

This means that as a chatbot/robot interacts with the world and experiences new things, it would update its algorithms and change its behavior accordingly. Over time, as it continues to learn and interact with the world, it would become more sophisticated and better able to perform its tasks.

Therefore, a chatbot/robot that could learn would change as a result of its experiences and interactions with the world, as it updates its algorithms and improves its performance.

So if the chatbot/robot could learn and ground its symbols in the world outside the kitchen, when would its Turing Test end? How long must the test last, and how much must the chatbot/robot do, to pass?

The Turing Test, as proposed by Alan Turing, is a measure of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. The test involves a human judge who engages in a natural language conversation with both a human and a machine, and decides which of the two they believe to be the human. If the judge is unable to reliably distinguish between the machine and the human, then the machine is said to have passed the Turing Test.

In terms of how long and much the chatbot/robot must do to pass the Turing Test, there is no specific time frame or set of criteria that must be met. The Turing Test is subjective and depends on the judgment of the human judge. Some researchers and practitioners in the field of artificial intelligence might consider a chatbot/robot to have passed the Turing Test if it is able to consistently fool human judges in a substantial number of trials. Others might have different criteria for determining whether a chatbot/robot has passed the Turing Test.

Therefore, it is not possible to determine a specific time frame or set of criteria for when the Turing Test would end for a chatbot/robot. The test is ongoing, and the chatbot/robot would continue to be evaluated against human behavior for as long as it is in use. Whether it would pass the Turing Test would depend on the subjective judgment of human judges, and could change over time as the chatbot/robot continues to learn and interact with the world.

If there is no time-limit on the Turing Test (henceforth “TT”) for the learning chatbot/robot out in the world, why would there be a limit on the number of judges? And if any and every judge, regardless of how long the test or the number of judges, is unable to distinguish the performance of the chatbot/robot from that of any other human being, in what sense is the judgment “subjective”, and in what sense are the judges being “fooled”?

The number of judges used in the Turing Test is not a limiting factor in the sense that there is a specific number that must be reached in order for the chatbot/robot to pass the test. The use of multiple judges is typically seen as a way to increase the reliability of the test results by taking into account the subjective judgment of multiple individuals. If a chatbot/robot consistently fools a large number of human judges, it becomes more likely that the chatbot/robot is exhibiting human-like intelligence.

However, the judgment of the human judges is still subjective in the sense that each judge brings their own biases, opinions, and interpretations to the conversation with the chatbot/robot. They may also have different criteria for determining what constitutes human-like behavior. As a result, even if a chatbot/robot consistently fools a large number of human judges, there may still be some degree of disagreement about whether it has truly passed the Turing Test.

Regarding the concept of “fooling” the judges, if a chatbot/robot is able to consistently mimic human behavior in a way that is indistinguishable from that of a real human, then the human judges are being “fooled” in the sense that they are unable to reliably distinguish the machine’s behavior from that of a human. However, it is important to note that the chatbot/robot is not intentionally trying to deceive the human judges, but is instead simply exhibiting the intelligence that it has been programmed or learned to exhibit.

If no one can ever tell any difference between what the grounded chatbot/robot in the world can learn and do and what a human can learn and do, in what sense is that a subjective judgment, and in what sense is everyone being fooled?

The judgment of whether a chatbot/robot has truly passed the Turing Test is subjective in the sense that it is based on the opinions and perceptions of the human judges. Each judge may have different criteria for what constitutes human-like intelligence, and their interpretations of the chatbot/robot’s behavior may be influenced by their individual biases and experiences.

If a chatbot/robot is able to consistently mimic human behavior in a way that is indistinguishable from a real human, it is said to have “fooled” the human judges in the sense that the judges are unable to reliably determine that they are interacting with a machine. However, this “fooling” is not a result of intentional deception on the part of the chatbot/robot, but rather a reflection of its advanced abilities and programming.

In cases where multiple human judges are unable to distinguish the chatbot/robot’s behavior from that of a human, the conclusion that the chatbot/robot has passed the Turing Test becomes more reliable. However, even in these cases, the judgment remains subjective as there may still be individual differences in opinions and criteria for what constitutes human-like intelligence.

I said that no one could tell the difference between the performance capacities of the learning, grounded chatbot/robot in the world and those of any human, and you said there was no time limit on the TT; so in what way are these judgments “subjective”? I understand that the chatbot/robot is not trying to fool anyone. So in what sense are they being “fooled”?

If a chatbot/robot is able to consistently mimic human behavior in a way that is indistinguishable from a real human, it is said to have “fooled” the human judges in the sense that the judges are unable to reliably determine that they are interacting with a machine, even though the chatbot/robot is not attempting to deceive them.

However, even if no one can tell the difference between the performance capacities of the chatbot/robot and a human, the judgment of whether the chatbot/robot has truly passed the Turing Test is still subjective because it depends on the criteria and standards used by the human judges to determine whether a machine’s behavior is human-like or not.

For example, some judges may consider a chatbot/robot’s ability to perform certain tasks as evidence of human-like intelligence, while others may place more importance on its ability to understand and respond to complex emotions and social cues. There may also be individual differences in the judgments made by each judge, based on their individual experiences and biases.

So even if no one is able to distinguish the performance capacities of the chatbot/robot from a human, the judgment of whether the chatbot/robot has truly passed the Turing Test remains subjective as it is based on the perceptions and opinions of the human judges.

If no one can tell the difference between X and Y, how is that subjective, and how are they being fooled?

The judgment of whether someone can tell the difference between two things is a subjective matter as it depends on the individual’s perception and evaluation. The idea of being “fooled” implies deception or intentional misinformation, which may not necessarily be the case in this scenario. If a person cannot distinguish between two entities, it may simply be a result of their limited ability to observe or process information, and not because of any intention to deceive.

I agree that Turing-Testing depends on perception — the perceptual capacity of humans. Maybe a metal-detector could distinguish robots from people. But that’s no longer just a matter of performance capacity.

Yes, that’s correct. The Turing Test as originally proposed by Turing is based on human perception and evaluation, and the capacity of a machine to imitate human-like behavior in text-based communication. However, other methods and techniques beyond just human perception can be used to distinguish between entities, such as in your example of a metal detector. These additional methods may be necessary to supplement the Turing Test in certain cases.

Is Turing-Testing a learning, grounded chatbot/robot still a matter of testing “text-based communication”? Is the TT meant to test for the presence of deception or the presence of human cognitive capacities? And if the TT has no limit on the number of tests and testers, and the chatbot/robot never fails, what is the “deception”?

The Turing Test is primarily a test of text-based communication, which is meant to evaluate a machine’s ability to exhibit human-like behaviors and mimic human intelligence in natural language conversations. The idea behind the test is to see if a human judge can distinguish between a human and a machine based on their responses to questions and prompts in a text-based communication. The test is designed to check for the presence of human-like intelligence and not necessarily the presence of deception. If a machine consistently passes the Turing Test, it could be considered a success in terms of its ability to exhibit human-like intelligence, but it does not necessarily mean that it is not deceiving. The concept of deception in the Turing Test is a subjective matter and can depend on how one interprets the idea of a machine being able to mimic human behavior and thought processes.

Is the TT a test of mimicry or a test of capacity? Is equivalent and indistinguishable capacity mimicry?

The Turing Test is primarily a test of capacity, but it can be seen as a test of mimicry as well. Equivalent capacity means that the chatbot/robot and the human being tested have the same ability to respond to stimuli and perform tasks, which can result in indistinguishable responses. In this sense, the TT can be seen as a test of mimicry. However, the main objective of the TT is to assess a machine’s capacity to demonstrate human-like intelligence, so it is primarily a test of capacity.

But if the capacity of the chatbot/robot (let’s call it the “candidate” from now on) is indistinguishable from that of any human, and the designers know how they produced this capacity, in what sense is it mimicry rather than capacity?

The Turing Test is often described as a test of mimicry, where the candidate chatbot/robot is expected to imitate human-like responses in text-based communication. However, the goal of the test is to determine if the candidate has the capacity to exhibit human-like intelligence in its responses. If the candidate is able to produce indistinguishable responses from a human and the designers understand how they have achieved this capacity, it could be argued that the candidate’s performance is not necessarily mimicry, but rather a demonstration of capacity. Whether the candidate’s performance is considered mimicry or capacity would depend on the interpretation and understanding of those who observe and evaluate it.

But in the example as we discussed it here, the only judgment being made is that no one can ever tell apart the candidate’s performance from any other human’s performance. So I ask again: where is the subjectivity, or the mimicry, or the “fooling” in any of that?

In the case of the Turing Test, the judgment of whether the candidate’s performance is indistinguishable from that of a human is subjective in the sense that it is based on the subjective perceptions and opinions of the judges. However, this does not mean that the candidate is necessarily “mimicking” human behavior or trying to “fool” the judges. The goal of the Turing Test is to determine if the candidate has reached a certain level of human-like intelligence, not necessarily to determine if it is human or not. The subjective nature of the test lies in the fact that the judges are evaluating the candidate based on their own experiences, biases, and expectations, which can vary from person to person. So, the judgment of whether the candidate has passed the test is subjective to some extent, but the performance of the candidate itself is not necessarily mimicry or deception.

Everyone has “experiences, biases, and expectations,” but the premise here is that no human can tell apart the candidate’s performance from the performance of any other human, ever. (And remember that the candidate is a robot, not just a chatbot, and that the performance is anything humans can do in the world.) So I ask again, isn’t this grounded robotic TT a test of capacity rather than an exercise in mimicry, constrained by the “subjectivity” of the judges (i.e., everyone)? If no human can distinguish the taste of vegan cheese from dairy cheese does that not teach us something about the nature of the taste (to a human) of cheese rather than a test of whether a human is being fooled by mimicry? Is the TT not trying to reverse-engineer human cognitive capacity (“intelligence”) rather than just trying to fool people?

The Turing Test is primarily meant to test a machine’s ability to demonstrate human-like intelligence through text-based communication. The subjective nature of the test lies in the judgment of the human evaluators, who use their own experiences, biases, and expectations to determine whether the machine’s responses are indistinguishable from those of a human. The test is not meant to test for a machine’s capacity to perform tasks in the world like a human, but rather its ability to mimic human-like conversation. Whether the machine’s responses can be considered mimicry or a demonstration of capacity depends on the underlying design and technology that produced the responses. However, the ultimate goal of the Turing Test is to assess a machine’s intelligence and ability to engage in human-like conversation, not necessarily to determine if the human evaluators are being “fooled” or not.

What is and is not a “machine”?

A “machine” is a physical device designed to perform one or more tasks, usually automated, through the use of mechanics and electronics. This can include traditional mechanical devices, such as a car engine, as well as computerized systems, such as a smartphone. So, anything that performs work using physical components, whether mechanical, electrical, or computational, can be considered a machine.

So are biological organisms machines?

Biological organisms are not traditionally considered machines, as the term is typically used to refer to artificial systems designed and constructed by humans for specific purposes. However, there are some theories and philosophical perspectives that view biological organisms as being akin to machines, such as reductionist biological theories that view living systems as being essentially complex systems of interacting physical and chemical processes. But in general, the definition of “machine” is highly context-dependent and can vary depending on the field and the purpose for which it is being used.

Taste

It will come, 
and I rejoice
(for the victims). 

But even if I live to 120, 
I want none of it. 

I want a clean break 
from the blood-soaked 
2000-millennium history 
of our race.

Nor is it to our credit
that we wouldn’t give up the taste
till we could get the same
from another brand.

It makes no amends,
to them,
were amends possible.

While Chickens Bleed

sounds rational: BL sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or any connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

Chatheads

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, and I also believe that they are not sentient, for several reasons:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without it. And with what plants can do, it is almost trivial to design a model that can do it too. So there, feeling seems incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, and other vertebrates, and, more and more, for invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which never reaches absolute certainty, even for gravity: the probability that apples will start falling up instead of down tomorrow is never absolutely zero).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this 2nd order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong not to give the benefit of the doubt to similar animals where the probability is already so high, it would be wrong to give the benefit of the doubt where the probability of sentience is incomparably lower, and what is at risk in attributing it where it is highly improbable is precisely the protection the distinction would have afforded to the species for whom the probability of sentience is far higher. The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

LaMDA & LeMoine

About LaMDA & LeMoine: The global “big-data” corpus of all words spoken by humans is — and would still be, if it were augmented by a transcript of every word uttered and every verbal thought ever thought by humans  — just like the shadows on the wall of Plato’s cave: It contains all the many actual permutations and combinations of words uttered and written. All of that contains and reflects a lot of structure that can be abstracted and generalized, both statistically and algorithmically, in order to generate (1) more of the same, or (2) more of the same, but narrowed to a subpopulation, or school of thought, or even a single individual; and (3) it can also be constrained or biased, by superimposing algorithms steering it toward particular directions or styles.

The richness of this intrinsic “latent” structure to speech (verbalized thought) is already illustrated by the power of simple Boolean operations like AND or NOT. The power of google search is a combination of (1) the power of local AND (say, restricted to sentences or paragraphs or documents), together with (2) the “Page-rank” algorithm, which can weight words and word combinations by their frequency, inter-linkedness or citedness (or LIKEdness – or their LIKEdness by individual or algorithm X), plus, most important, (3) the underlying database of who-knows-how-many terabytes of words so far. Algorithms as simple as AND can already do wonders in navigating that database; fancier algorithms can do even better.
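What a “local AND” over a word database amounts to can be sketched in a few lines: it is just set intersection over an inverted index. (A purely illustrative toy – the index, the documents, and the query function are mine, and bear no resemblance to Google’s actual implementation.)

```python
# Toy inverted index: word -> set of document ids containing it.
# A local AND query is then just the intersection of those sets.

def build_index(docs):
    """Map each word to the set of document ids it appears in."""
    index = {}
    for doc_id, text in enumerate(docs):
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def and_query(index, *words):
    """Return the ids of documents containing ALL of the query words."""
    postings = [index.get(w.lower(), set()) for w in words]
    if not postings:
        return set()
    result = postings[0]
    for p in postings[1:]:
        result &= p
    return result

docs = [
    "symbols have forms but not meanings",
    "meanings are grounded in sensorimotor experience",
    "forms and meanings of words",
]
index = build_index(docs)
print(and_query(index, "forms", "meanings"))  # documents containing both words
```

Even this trivial intersection already “navigates” the database; weighting schemes like Page-rank are refinements layered on top of the same idea.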

LaMDA has not only data-mined that multi-terabyte word space with “unsupervised learning”, abstracting all the frequencies and correlations of words and combinations of words, from which it can then generate more of the same – or more of the same that sounds like a Republican, or Dan Dennett, or an Animé fan, or someone empathic or anxious to please (like LaMDA)… It can be tempered and tampered with by “influencer” algorithms too.

Something similar can be done with music: swallow music space and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projected combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

LaMDA does not pass the Turing Test because the Turing Test (despite the loose – or perhaps erroneous, purely verbal – way Turing described it) is not a game about fooling people: it’s a way of testing theories of how brains (or anything) produce real thoughts. And verbal thoughts don’t just have word forms, and patterns of word-forms: they also have referents, which are real things and states in the world, hence meaning. The Platonic shadows of patterns of words do reflect – and are correlated with – what words, too, just reflect: but their connection with the real-world referents of those words is mediated by (indeed parasitic on) the brains of the real people who read and interpret them, and know their referents through their own real senses and their real actions in and on those real referents in the real world – the real brains and real thoughts of (sometimes) knowledgeable (and often credulous and gullible) real flesh-and-blood people in-the-world…

Just as re-combinatorics play a big part in the production (improvisation, composition) of music (perhaps all of it, once you add the sensorimotor affective patterns that are added by the sounds and rhythms of performance and reflected in the brains and senses of the hearer, which is not just an execution of the formal notes), word re-combinatorics no doubt play a role in verbal production too. But language is not “just” music (form + affect): words have meanings (semantics) too. And meaning is not just patterns of words (arbitrary formal symbols). That’s just (one, all powerful) way thoughts can be made communicable, from one thinking head to another. But neither heads, nor worlds, are just another bag-of-words – although the speaking head can be replaced, in the conversation, by LaMDA, who is just a bag of words, mined and mimed by a verbal database + algorithms.

And, before you ask, google images are not the world either.

The google people, some of them smart, and others, some of them not so smart (like Musk), are fantasists who think (incoherently) that they live in a Matrix. In reality, they are just lost in a hermeneutic hall of mirrors of their own creation. The Darwinian Blind Watchmaker, evolution, is an accomplice only to the extent that it has endowed real biological brains with a real and highly adaptive (but fallible, hence foolable) mind-reading “mirror” capacity for understanding the appearance and actions of their real fellow-organisms. That includes, in the case of our species, language, the most powerful mind-reading tool of all. This has equipped us to transmit and receive and decode one another’s thoughts, encoded in words. But it has made us credulous and gullible too.

It has also equipped us to destroy the world, and it looks like we’re well on the road to it…

P.S. LeMoine sounds like a chatbot too, or maybe a Gullibot…

12 Points on Confusing Virtual Reality with Reality

Comments on: Bibeau-Delisle, A., & Brassard FRS, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules (algorithms) that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes it, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It’s just symbol-manipulations. It could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine – finite-state hardware that can read, write, advance tape, change state or halt.
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it is still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice cube and no melting; just symbol manipulations interpretable as an ice cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10. What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part  (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality – which is simply incoherent, circular, self-referential nonsense.
  12.  Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice-cube, for a “real” ice-cube.
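The finite-state machine of point 4 – hardware that can read, write, advance the tape, change state or halt – can be made concrete in a few lines of code. (A minimal illustrative sketch; the increment machine and its transition table are a toy example, not anything from the paper under discussion.)

```python
# Minimal Turing machine: a transition table maps (state, symbol) to
# (symbol to write, head move, next state). State "H" halts.
# The sample machine increments a binary number written
# least-significant-bit first.

def run_turing_machine(tape, transitions, state="S", blank="_"):
    tape = list(tape)
    head = 0
    while state != "H":
        # Read the symbol under the head, extending the tape with blanks.
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        # Look up and apply the transition: write, move, change state.
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1 if move == "L" else 0
    return "".join(tape).rstrip(blank)

# Increment LSB-first binary: flip trailing 1s to 0 until a 0 or a
# blank is found, write 1 there, and halt ("N" = no head movement).
INCREMENT = {
    ("S", "1"): ("0", "R", "S"),
    ("S", "0"): ("1", "N", "H"),
    ("S", "_"): ("1", "N", "H"),
}

print(run_turing_machine("111", INCREMENT))  # 7 + 1 = 8, LSB-first: "0001"
```

Everything the machine does is shape-based symbol manipulation in exactly the sense of point 1: the interpretation of the tape as a binary number is entirely in the head of the reader.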

Learning and Feeling

Re: the  NOVA/PBS video on slime mold. 

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970’s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which to me was really irritating and intrusive), but the content was interesting – once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (which is shared by biological organisms and artificial devices as well as running computational algorithms) to learn.

The trouble with weasel-words like “intelligence,” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”). 

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do, is to learn, which is to change in what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can (or will eventually be shown to be able to) be done by artificial devices, and to be simulable computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically) but feeling cannot be, or, otherwise put, simulated feeling is not really feeling any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving or wet. 
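That learning requires no feeling can be illustrated in a few lines of code. The “learner” below senses, responds, and changes what it does with experience – habituating to a repeated stimulus, much as slime molds have been shown to do – yet it is nothing but arithmetic on state variables. (The habituation rule and its numbers are purely illustrative, not a model of any real organism.)

```python
# A trivial sensor-effector device that habituates: its response to a
# repeated stimulus fades with exposure, while a novel stimulus still
# gets a full response. It "learns" -- changes what it does -- but
# nothing here feels anything; it is just state variables being updated.

class Habituator:
    def __init__(self, decay=0.5):
        self.decay = decay      # how fast responses fade with repetition
        self.exposures = {}     # stimulus -> number of times sensed

    def respond(self, stimulus):
        """Return a response strength in (0, 1]; repeated stimuli fade."""
        n = self.exposures.get(stimulus, 0)
        self.exposures[stimulus] = n + 1
        return self.decay ** n  # 1.0 when novel, halving with each repeat

d = Habituator()
print(d.respond("touch"))   # 1.0  (novel stimulus, full response)
print(d.respond("touch"))   # 0.5  (habituating)
print(d.respond("touch"))   # 0.25
print(d.respond("light"))   # 1.0  (new stimulus, full response again)
```

The point is not that this toy resembles an organism, but that the behavioral signature we call “learning” is fully present in a system to which no one would be tempted to attribute sentience.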

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single cells and multicellular  — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.