A Whimper

I have of late 
lost all my faith 
in “taste” of either savor: 
gustate 
or aesthete. 
Darwin’s “proximal 
stimulus” 
is  just 
the Siren’s Song 
that 
from the start 
inspired 
the genes and memes 
of our superior 
race 
to pummel this promontory 
into 
for all but the insensate 
a land of waste.

Dementia 101

One of the “interesting” surprises of aging is that most new ailments don’t get better, you just adapt to them better (especially the cognitive ones). Probably another instance of necessity mothering invention…

While Chickens Bleed

sounds rational: BL sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

Chatheads

[Image: “Talking Heads” – Fiduciary Wealth Partners]

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient, but I also believe they are not sentient, for several other reasons too:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without it. But with what plants can do, it is almost trivial to design a model that can do it too. So there, feeling seems incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals the evidence that they feel starts very high for mammals, birds, other vertebrates, and, more and more, invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance becomes stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which is never absolute, even for gravity: it is not absolutely certain that apples won’t start falling up instead of down tomorrow).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally, to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this 2nd order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that just as it is wrong not to give the benefit of the doubt to similar animals where the probability is already so high, it would be wrong to give the benefit of the doubt where the probability of sentience is incomparably lower, and what is at risk in attributing it where it is highly improbable is precisely the protection the distinction would have afforded to the species for whom the probability of sentience is far higher. The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

Chomsky on Consciousness

Re: https://www.youtube.com/watch?v=vLuONgFbsjw

I only watched the beginning, but I think I got the message. Let me preface this with: yes, I have come to understand the problem of “consciousness” through the lens of practical ethics. But suspend judgement: it’s not just a matter of overgeneralizing one’s culinary druthers.


Prelude. First, an important aside about Tom Nagel’s influential article. As often happens in philosophy, words throw you. Tom should never have entitled (or, for pedants, “titled”) it “What is it like to be a bat?” but rather “What does it feel like to be a bat?”

We’d have been spared so much silliness (still ongoing). It would have brought to the fore that the problem of consciousness is not the metaphysical problem of explaining “What kind of ‘stuff’ is consciousness?” but the hard problem of explaining (causally) “How and why do organisms feel?”

The capacity to feel is a biological trait, like the capacity to fly. But flying is observable, feeling is not. Yet that’s not the “hard problem.” That’s “just” the “other minds problem.” Just a puzzle among fellow gentleman-philosophers, about certainty — but an existential agony for those sentient species that we assume do not feel, and treat accordingly, when in fact they do feel.

Easy Sequel. Noam is mistaken to liken the hard problem to the Newtonian problem of explaining the laws of motion. Motion (of which flying is an example) is observable; feeling is not. But, again, that is not the hard problem. Quarks are not observable; but without inferring that little (observable) protons are really composed of big (unobservable) quarks, we cannot explain protons. So quarks cannot be seen, but they can be inferred from what can be seen, and they play an essential causal (or functional) role in the (current) explanation of protons.

Feeling cannot be observed (by anyone other than the feeler – this is the “1st person perspective” that Nagel extolled and interviewer Richard Brown obnoxiously tries to foist on Chomsky). 

But even though feeling cannot be observed, it can be inferred. We know people feel; we know our dog feels. Uncertainty only becomes nontrivial when we get down to the simplest invertebrates, microbes and plants. (And, before anyone mentions it, we also know that rocks, rockets and [today’s] robots (or LaMDAs) don’t feel. Let’s not get into scholastic and sterile scepticism about what we can know “for sure.”)

So the “hard problem” of explaining, causally (functionally), how and why sentient organisms feel is hard precisely because of the “easy problem” of causally explaining observable capacities of organisms, like moving, flying, learning, remembering, reasoning, speaking. 

It looks for all the world as if, once we have explained (causally) how and why organisms can do all the (observable) things they can do, feeling — unlike the quarks in the (subatomic) explanation of what protons can (observably) do, or the Newtonian explanation of what billiard balls can (observably) do — is causally superfluous.

Solving the “hard problem” would be simple: Just explain, causally, how it would be impossible for organisms to do all (or some) of the things they can do if they did not feel. In other words, explain the causal role of feeling (adaptively, if you like – after all, it’s an evolved, biological trait).

But before you go there, don’t try to help yourself to feeling as a fundamental force of nature, the way Newton helped himself to universal gravitation. Until further notice, feeling is just a tiny local product of the evolution of life in one small planet’s biosphere. And there’s no reason at all to doubt that, like any other biological trait, feeling is explainable in terms of the four fundamental forces (gravity, electromagnetism, “strong” subatomic & “weak” subatomic). No 5th psychokinetic force.

The problem is just coming up with the causal explanation. (Galen Strawson’s “panpsychism” is an absurd, empty – and I think incoherent – metaphysical fantasy that does not solve the “hard problem,” but just inflates it to cosmic proportions without explaining a thing.)

So Noam is mistaken that the hard problem is not a problem. But it’s not about explaining what it feels like to see a sunset. It is about explaining how and why (sentient) organisms feel anything at all.

See: “Minds, Brains and Turing” (2011) as well as the (Browned out) discussion.

First Personhood

Egon

Egon was born in Princeton NJ in June 1970. His parents, first cousins, had been together briefly as tiny children in a refugee camp in Austria in 1945, the only remaining avatars of their respective 25% of what had once been a large Hungarian-Jewish family.

Shipped off to be reared in gentile homes on opposite sides of the American continent, with no encouragement to correspond, they always knew vaguely about one another’s existence but never took up the thread in earnest until, in 1969, on their first day of graduate school, they met by chance in Princeton’s Foreign Students Center, drawn there, not by the technicality of their overseas birth, but by a subterranean yearning they had always felt, and that they now fulfilled by marrying after only a few weeks of ceaseless and haunting deja vu.

Egon’s birth was in Bob Dylan’s “Year of the Locust,” with cicadas whirring all around. Everyone said he was a hauntingly beautiful baby, but as he got to be one, two, three, four, he didn’t speak, and human contact seemed somehow painful for him. His parents, who had by now made Princeton their permanent home, had another child, a cheerful, talkative girl called Anni; all hope was gradually lost that Egon, who was now seven and had shown exceptional drawing ability, would ever speak or go to school. His drawings were remarkably detailed and empathetic depictions of little creatures — birds, mice, insects.

Since his birth, Egon had had severe allergic reactions to foods other than fruit, nuts, potatoes and milk. His meagre diet and even more meagre appetite kept him very thin and pale, but people still kept remarking how beautiful he was, even from those few head-on glimpses they ever got of him, for he seemed to find it very uncomfortable to be looked at; direct eye contact was almost nonexistent.

Egon was not sent to an institution, although his toilet-training was not secure and he had gone through a period when he had repeatedly tried to injure himself. He was cared for at home, where everyone loved him, even though he did not seem to feel or like personal contact. The only way he seemed able to express himself was his animal drawings, which were getting smaller and smaller, until now they were only close-up details of insects. Anni made up for Egon’s silence by being a very gay, chatty, sociable, affectionate girl with a huge appetite who did very well in school and even became something of a local celebrity for her expressive and imaginative performances in a children’s theatre.

Then Egon reached twelve, puberty, and a sudden change occurred. He was standing in his usual way, with his back to the window, occasionally glancing sideways into the front yard. These were the glances with which he had proved to be able to take in an enormous amount of detail, for this was how he glimpsed the little creatures he would draw, never gazing head-on. Egon looked up abruptly and cried in a clear and penetrating voice:  “Mama, wait, don’t back out!”

His mother heard the first five words he had ever spoken just as she was pulling her keys from her pocketbook to lock the back door before going out to the garage to get into her car. Anni heard them just as she was starting down the stairs to take over her mother’s vigil over Egon.

What they both saw when they rushed to him was Egon facing the window instead of with his back to it, and peering out directly and intently instead of just swaying his head languidly to and fro. Ninety silent seconds went by; then he turned toward them, and back to the window, intoning softly, with a slight pubertal hoarseness in his voice, six more words: “Look, you would have hit him,” pointing toward an old dog, dragging a leash, who had been running dazedly up the street for several minutes and had only now reached their driveway, at the same instant the car would have emerged from it if everything had gone as planned. “Can you call his owner, Mama?”

Egon went to school. It turned out he could already read and write, though no one could remember having seen him with books or magazines for any length of time, and even then all he had ever done was turn them round and round passively, never holding them right side up as if to read them.

Not everything about Egon reverted suddenly to normal as of that day. His personal contact was still very vague. He would sometimes smile with some embarrassment in response to a glance, but he still rarely looked at anyone directly. And though he could now talk, he certainly was anything but talkative. Days would still go by in which he would not say a word. His family had the feeling that communication was still somehow painful for him.

And he stopped drawing altogether. No one could get him to do it. He had no interest in his sketching materials whatsoever. And of course he had never given any of his finished drawings — collected across the years, displayed all over the house, and filling boxes and boxes — a second glance after doing them. Instead, he now began to collect and take care of real animals. Well, not animals, actually, but insects. His room was full of terraria, where he raised and bred all kinds of beetles, spiders, mealworms, roaches.

In school Egon did well in mathematics and history. He had difficulties with English because he did not seem to have a clear sense of fiction. He was extremely slow and hopelessly uncoordinated in gym. And he had almost no social life, although his fellow-students did not dislike him. He would perhaps have been perceived as aloof, if it were not for the endearing fact that he was always to be found crouching intently around bushes or tree trunks, or the terraria in the biology lab, obviously preoccupied with his invertebrate friends rather than snubbing his fellow-vertebrates. And what saved him from ridicule was that he still retained that haunting beauty people had noticed since his birth.

One night, in May of 1987, Egon did not come home after school. Since it was not rare for him to linger over things he saw on the way home, it wasn’t until supper time that the family became worried in earnest.

His parents drove back and forth along the streets between their home and the high school. Anni phoned all her friends, and had them call their friends, searching for a trace of who had seen him last.  The police were alerted. 

At 11 pm an officer patrolling Marquand Park found him squatting by a tree, monitoring the slow march of the legions of cicadas who had been straining upward from the depths of the earth to surface simultaneously at dusk of that very day and march horizontally overland to the nearest tree, then vertically to a safe height, where they would fasten their feet firmly and begin laboriously extricating themselves from the rugged armour in which they had been dwelling underground for 17 years, awaiting this night’s summons to the surface by an unseen, unheard biological call that bade them to abandon forever their dark roach-like former forms, still clinging faithfully to the trees, and emerge at last as ghostly white nymphs, awaiting daybreak when their tiny twin backpacks of crumpled yellow would unfurl and dry into enormous transparent wings, their bodies would darken, their eyes would turn ruby red, and their abdomens would begin to whir in the tireless crescendos and decrescendos of their urgent collective lovesongs.

There was no question of scolding Egon. They were just grateful that he was all right. More nights would follow in which he came home late at night or not at all as he maintained his vigil over the closely timed emergence of the cicadas that had burrowed into the earth as little newborn specks 17 years ago. Many of them now found concrete where there had been soil 17 years earlier. Egon planted his fingers before them vertically, treelike, and they dutifully began to climb. Then he airlifted them in squadrons of six or eight over the perilous sidewalks where they were being squashed in great numbers by passersby, who hardly even saw the slow-moving legions in those last gray moments of dusk in which they were erupting daily. He placed the hand with the clinging cicadas horizontally, touching a treetrunk with his fingertips, and the cicadas would resume their march, along his fingers, till they reached the vertical tree bark, to which they transferred, leaving Egon to secure another handful of passengers.

It was to the site of these airlifts that Egon returned most often in the succeeding weeks to watch the cicadas singing in the trees as they mated and lived out this last, brief supraterranean portion of their life cycles. These were his cicadas.

Egon was visibly distressed in the last days of his cicadas. They had sung and mated and laid their eggs. Now, taking no more food since they had emerged from the earth, they were waiting to die, falling out of the trees as they weakened, flying chaotically into auto windshields and store-fronts, unable to find their way back into the trees.

Egon frantically revived his airlift, taking one errant cicada after another back to the trees and safety. He would stoop down among the legs of bemused passersby, trying to rescue fallen cicadas even as they were being squashed left and right by the insouciant multitudes.

“But Egon, they’ve finished their life cycle, they’re going to die anyway!” everyone kept telling him, but he was bent only on his mission, to rescue his red-eyed friends.

When the car struck him, he had an unusually large flotilla of passengers — four on one hand, six on the other. Evening was approaching, the congestion of rush hour was over, so the cars were moving quickly on Mercer Street. He was also frail, having done no sports at all during his entire short life. He must have lost consciousness right away, though he only died a few hours later, in the emergency room of Princeton Medical Center. They had to pry the ten cicadas, dead too but still clinging, from his rigid fingers.

Istvan Hesslein, Princeton NJ, June 1989

Chatbots and Melbots

Something similar to LaMDA can be done with music: swallow all of online music space (both scores and digital recordings) and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projections from combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

Vocal mimicry also includes the mimicry of the acoustic patterns of the vocal expression of affect: anger, fear, sadness, hope. Dynamics, tempo, articulation, even its harmonic features. But these are affects (feelings), not thoughts. Feeling cold is not the same as thinking “I am cold,” which, miraculously, can be expressed by that verbal proposition that states what I am feeling. And language can state anything: “The cat is on the mat.” The words “cat,” “mat,” and “on” all have referents: things and states in the world that they refer to; and we all know, from our sensorimotor experience, what they are. “Meow” imitates the sound a cat makes, but it does not refer to it the way referring words do. And a sentence is not just a series of referring words. It is a proposition, describing something. As such it also has a truth-value: If the cat is really on the mat, then “the cat is on the mat” is TRUE; otherwise FALSE. None of that is true of music (except of course in song, when the musical and the propositional are combined).

The notion of “transmitting thought” is a bit ambiguous. I transmit thought if I am thinking that the cat is on the mat, and then I say that the cat is on the mat. If instead of saying it, I mime it, like in charades, by imitating the appearance of the cat, maybe making meowing sounds, and gesturing the shape of a mat, and then its position, and then pantomiming a cat, lying on the space that I’ve mimed as a mat… That would indeed transmit to another person the thought that the cat is on the mat. And sufficiently iconic and programmatic music can transmit that thought too, especially if augmented by dance (which is also pantomime).

[[I think language originated with communication by gestural pantomime; then the gestures became more and more conventional and arbitrary rather than iconic, and that’s also when the true/false proposition was born. But once propositional gesturing began, the vocal/auditory modality had huge practical advantages in propositional communication over the visual/gestural one [just think of what they all are] and language (and its representation in the brain) migrated to the speech/hearing areas where they are now.]]

Yes, music can express affect, and it can even express thought (iconically). But not only is vocal/acoustic imitation not the best of music, it need not be, for music can not only express affect (and mime some thought); it can also inspire and thereby accompany thought in the listener in the way a pianist or orchestra can accompany (and inspire) a violinist or vocalist.

But music is not propositional. It does not state something which is true or false. You cannot take a composer of (instrumental) music (Lieder ohne Worte) to court for having lied. (That’s why the Soviet Union could only oppress Shostakovich, but could not prove he had said anything false or treasonous.) Language (and thought) has semantics, not just form, nor just resemblance in shape to something else: it has propositional content, true or false.

It is true that it is much harder (perhaps impossible) to describe feelings in words, propositionally, than it is to express them, or imitate their expression iconically; but although it is true that it feels like something to think, and that every thought feels different, thinking is not just what it feels like to think something, but what that thought means, propositionally. One can induce the feeling of thinking that the cat is on the mat by miming it; but try doing that with the sentence that precedes this one. Or just about any other sentence. It is language that opened up the world of abstract thought (“truth,” “justice,” “beauty”) and its transmission. Music can transmit affect (feeling); but try transmitting the meaning of this very sentence in C# minor…

Not all (felt) brain states are just feelings (even though all thoughts are felt too). Thoughts also have propositional content. Music cannot express that propositional content. (And much of this exchange has been propositional, and about propositional content, not affective, “about” feeling.) And, again, what it feels like to think a proposition is not all (or most) of what thinking is, or what a thought means.

[[Although I don’t find it particularly helpful, some philosophers have pointed out that just as content words are about their referents (cats, mats), thoughts are about propositions. “The cat is on the mat” is about the cat being on the mat – true, if the cat really is on the mat, false, if not. Just as a mat is what is in your mind when you refer to a mat, the cat being on a mat is what the proposition “the cat is on the mat” is “about.” This is the “aboutness” that philosophers mean by their term “intentionality”: what your intended meaning is, the one you “have in mind” when you say, and mean: “the cat is on the mat.” None of this has any counterpart in music. What Beethoven had in mind with Eroica – and what he meant you to have in mind — was originally an admiration for Napoleon’s fight for freedom and democracy, and then he changed his mind, removed the dedication, and wanted you not to have that in mind, because he had realized it was untrue; but the symphony’s form remained the same (as far as I know, he did not revise it).

Shostakovich’s symphonies share with poetry the affective property of irony. He could say a symphony was about Lenin’s heroism, but could make it obvious to the sensitive listener that he meant the opposite (although in the 12th symphony he revised it because he feared the irony in the original was too obvious; the result was not too successful). But poetry can be both literal – which means propositional – and figurative – which means metaphorical; more a representation or expression of a similarity (or a clash) in form than the verbal proposition in which it is expressed (“my love is a red, red rose”).

Music cannot express the literal proposition at all. And even the metaphor requires a vocal (hence verbal) text, which is then “accompanied” by the music, which may express or interpret the literal words affectively, as Bach does with his cantatas. Even Haydn’s Creation depends on the implicit “sub-titling” provided by the biblical tale everyone knows. – But all of this is too abstract and strays from the original question of whether LaMDA feels, understands, intends or means anything at all…]]

I’d say what LaMDA showed was that it is surprisingly easy to simulate and automate meaningful human thinking and speaking convincingly (once we have a gargantuan verbal database plus Deep Learning algorithms). We seem to be less perceptive of anomalies (our mind-reading skills are more gullible) there than in computer-generated music (so far), as well as in scores completed by lesser composers. But experts don’t necessarily agree (as with authentic paintings vs. imitations, or even regarding the value of the original). Some things are obvious, but not all, or always. (Is the completion of the Mozart Requiem as unconvincing as the recent completion of Beethoven’s 10th?)

The “symbol grounding problem” — the problem that the symbols of computation as well as language are not connected to their referents — is not the same as the “hard” problem of how organisms can feel. Symbols are manipulated according to rules (algorithms) that apply to the symbols’ arbitrary shapes, not their reference or meaning (if any).  They are only interpretable by us as having referents and meaning because our heads – and bodies – connect our symbols (words and descriptions) to their referents in the world through our sensorimotor capacities and experience.

But the symbol grounding problem would be solved if we knew how to build a robot that could identify and manipulate the referents of its words out there in the real world, as we do, as well as describe and discuss and even alter the states of affairs in the world through propositions, as we do. According to the Turing Test, once a robot can do all that, indistinguishably from any of us, to any of us (lifelong, if need be, not just for a 10-minute Loebner-Prize test), then we have no better or worse grounds for denying or affirming that the TT robot feels than we have with our fellow human beings.

So the symbol-grounding would be solved if it were possible to build a TT-passing robot, but the “hard” problem would not. 

If it turned out that the TT simply cannot be successfully passed by a completely synthetic robot, then it may require a biorobot, with some, maybe most or all, of the biophysical and biochemical properties of biological organisms. Then it really would be racism to deny that it feels, and to deny it human rights.

The tragedy is that there are already countless nonhuman organisms that do feel, and yet we treat them as if they didn’t, or as if it didn’t matter. That is a problem incomparably greater than the symbol-grounding problem, the other-minds problem, or the problem of whether LaMDA feels (it doesn’t).

(“Conscious” is just a weasel-word for “sentient,” which means: able to feel. And, no, it is not only humans who are sentient.)

LaMDA & LeMoine

About LaMDA & LeMoine: The global “big-data” corpus of all words spoken by humans is — and would still be, if it were augmented by a transcript of every word uttered and every verbal thought ever thought by humans — just like the shadows on the wall of Plato’s cave: It contains all the many actual permutations and combinations of words uttered and written. All of that contains and reflects a lot of structure that can be abstracted and generalized, both statistically and algorithmically, in order to generate (1) more of the same, or (2) more of the same, but narrowed to a subpopulation, or school of thought, or even a single individual; and (3) it can also be constrained or biased, by superimposing algorithms steering it toward particular directions or styles.

The richness of this intrinsic “latent” structure to speech (verbalized thought) is already illustrated by the power of simple Boolean operations like AND or NOT. The power of google search is a combination of (1) the power of local AND (say, restricted to sentences or paragraphs or documents), together with (2) the “Page-rank” algorithm, which can weight words and word combinations by their frequency, inter-linkedness or citedness (or LIKEdness — or their LIKEdness by individual or algorithm X), plus, most important, (3) the underlying database of who-knows-how-many terabytes of words so far. Algorithms as simple as AND can already do wonders in navigating that database; fancier algorithms can do even better.
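
To make the principle concrete, here is a deliberately minimal sketch (in Python, over a three-document toy corpus invented for illustration) of what local AND plus a crude frequency weighting can already do. It is not Google’s actual retrieval or Page-rank code, just the bare idea:

```python
# Toy Boolean-AND retrieval with naive frequency weighting.
# Illustration only: real engines add link-structure weighting (Page-rank),
# proximity, and much more, over terabytes rather than three sentences.
from collections import Counter

docs = {
    "d1": "the cat is on the mat",
    "d2": "the cat sat on the cat tree",
    "d3": "a dog is on the mat",
}

def and_search(query_terms, docs):
    """Return ids of docs containing ALL query terms (local AND),
    ranked by summed term frequency (a crude stand-in for weighting)."""
    hits = []
    for doc_id, text in docs.items():
        counts = Counter(text.split())
        if all(counts[term] > 0 for term in query_terms):
            hits.append((sum(counts[term] for term in query_terms), doc_id))
    return [doc_id for _, doc_id in sorted(hits, reverse=True)]

print(and_search(["cat", "on"], docs))  # ['d2', 'd1']: d3 lacks "cat"
```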

LaMDA has not only data-mined that multi-terabyte word space with “unsupervised learning”, abstracting all the frequencies and correlations of words and combinations of words, from which it can then generate more of the same – or more of the same that sounds like a Republican, or Dan Dennett, or an Animé fan, or someone empathic or anxious to please (like LaMDA). It can also be tempered and tampered with by “influencer” algorithms.
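
The generative side can be made concrete with a toy model too. The sketch below (a bigram chain over a tiny corpus invented for illustration) is incomparably cruder than LaMDA’s deep-learning architecture, but it shows the point at issue: such a system sees and emits only word forms and their co-occurrence statistics, never their referents or meanings:

```python
# Toy "more of the same" generator: a bigram chain over word forms.
# Nothing here touches meaning; the "model" is pure co-occurrence statistics.
import random
from collections import defaultdict

corpus = ("the cat is on the mat . the dog is on the rug . "
          "the cat sat on the rug .").split()

# Record, for each word form, which word forms have followed it.
successors = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    successors[w1].append(w2)

def babble(start="the", length=8):
    """Emit a word sequence by sampling observed successors: form, no content."""
    words = [start]
    for _ in range(length - 1):
        words.append(random.choice(successors[words[-1]]))
    return " ".join(words)

random.seed(1)          # reproducible sampling
print(babble())         # a fluent-looking string with no meaning behind it
```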

Something similar can be done with music: swallow music space and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projected combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

LaMDA does not pass the Turing Test because the Turing Test (despite the loose – or perhaps erroneous, purely verbal – way Turing described it) is not a game about fooling people: it’s a way of testing theories of how brains (or anything) produce real thoughts. And verbal thoughts don’t just have word forms, and patterns of word-forms: They also have referents, which are real things and states in the world, hence meaning. The Platonic shadows of patterns of words do reflect – and are correlated with – what words, too, just reflect: but their connection with the real-world referents of those words is mediated by (indeed parasitic on) the brains of the real people who read and interpret them, and who know their referents through their own real senses and their real actions in and on those real referents in the real world – the real brains and real thoughts of (sometimes) knowledgeable (and often credulous and gullible) real flesh-and-blood people in-the-world…


Just as re-combinatorics play a big part in the production (improvisation, composition) of music (perhaps all of it, once you add the sensorimotor affective patterns that are added by the sounds and rhythms of performance and reflected in the brains and senses of the hearer, which is not just an execution of the formal notes), word re-combinatorics no doubt play a role in verbal production too. But language is not “just” music (form + affect): words have meanings (semantics) too. And meaning is not just patterns of words (arbitrary formal symbols). That’s just (one, all-powerful) way thoughts can be made communicable, from one thinking head to another. But neither heads, nor worlds, are just another bag-of-words – although the speaking head can be replaced, in the conversation, by LaMDA, who is just a bag of words, mined and mimed by a verbal database + algorithms.

And, before you ask, google images are not the world either.

The google people, some of them smart, and others, some of them not so smart (like Musk), are fantasists who think (incoherently) that they live in a Matrix. In reality, they are just lost in a hermeneutic hall of mirrors of their own creation. The Darwinian Blind Watchmaker, evolution, is an accomplice only to the extent that it has endowed real biological brains with a real and highly adaptive (but fallible, hence foolable) mind-reading “mirror” capacity for understanding the appearance and actions of their real fellow-organisms. That includes, in the case of our species, language, the most powerful mind-reading tool of all. This has equipped us to transmit and receive and decode one another’s thoughts, encoded in words. But it has made us credulous and gullible too.

It has also equipped us to destroy the world, and it looks like we’re well on the road to it…


P.S. LeMoine sounds like a chatbot too, or maybe a Gullibot…

12 Points on Confusing Virtual Reality with Reality

Comments on: Bibeau-Delisle, A., & Brassard, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules (algorithms) that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes it, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It’s just symbol-manipulations. It could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine – finite-state hardware that can read, write, advance the tape, change state, or halt. (A minimal sketch of such a machine follows this list.)
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it is still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice cube and no melting; just symbol manipulations interpretable as an ice cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10.  What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality – which is simply incoherent, circular, self-referential nonsense.
  12.  Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice-cube, for a “real” ice-cube.
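
To make points 1 and 4 concrete, here is a minimal sketch of a Turing machine in Python (the machine and its rule table are invented for illustration). The rules consult only the shape of the current symbol and the current state; the unary-increment interpretation is entirely ours, supplied from outside (point 2):

```python
# Minimal Turing machine: read, write, advance tape, change state, or halt.
# The rules apply to arbitrary symbol shapes; any "meaning" is our interpretation.
def run_turing_machine(tape, rules, state="start"):
    cells = dict(enumerate(tape))   # tape position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, "_")             # read ("_" means blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write                       # write
        pos += {"R": 1, "L": -1}[move]           # advance the tape head
    return "".join(cells[i] for i in sorted(cells))

# A rule table interpretable (by us) as "add 1 to a unary numeral":
# scan right past the 1s, then write one more 1 and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine("111", rules))  # "1111" -- i.e., 3 + 1 = 4, but only to us
```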

Learning and Feeling

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970’s they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting, despite the OOH-AAH style of presentation (and especially the narrators’ prosody and intonation, which to me was really irritating and intrusive), but the content was interesting – once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (which is shared by biological organisms and artificial devices as well as running computational algorithms) to learn.

The trouble with weasel-words like “intelligence,” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”). 

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do, is to learn, which is to change in what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can be done (or will eventually be shown to be doable) by artificial devices, and it can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is not really feeling, any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or wet.
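
To underline that point, here is a deliberately trivial sketch (invented numbers; no pretense of modelling real slime-mold physiology) of a “learner” in the habituation sense: its response to a repeated stimulus changes lawfully, yet nothing in it feels anything:

```python
# A toy habituator: learning (changed doing) with no feeling anywhere in sight.
# Parameters and behavior are invented for illustration, not a slime-mold model.
class Habituator:
    """Response to a repeated stimulus decays; a novel stimulus gets a full response."""
    def __init__(self, decay=0.5):
        self.decay = decay
        self.strength = {}  # stimulus -> current response strength

    def sense_and_respond(self, stimulus):
        response = self.strength.get(stimulus, 1.0)      # full response to novelty
        self.strength[stimulus] = response * self.decay  # "learn": respond less next time
        return response                                  # effector output

h = Habituator()
print([round(h.sense_and_respond("vibration"), 2) for _ in range(4)])
# [1.0, 0.5, 0.25, 0.12] -- behavior changed (learning), nothing felt
```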

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single-celled and multicellular — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward; but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.