While Chickens Bleed

sounds rational: BL sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

Chatheads

LaMDA & LeMoine

About LaMDA & LeMoine: The global “big-data” corpus of all words spoken by humans is — and would still be, if it were augmented by a transcript of every word uttered and every verbal thought ever thought by humans — just like the shadows on the wall of Plato’s cave: It contains all the many actual permutations and combinations of words uttered and written. All of that contains and reflects a lot of structure that can be abstracted and generalized, both statistically and algorithmically, in order to generate (1) more of the same, or (2) more of the same, but narrowed to a subpopulation, or school of thought, or even a single individual; and (3) it can also be constrained or biased, by superimposing algorithms steering it toward particular directions or styles.
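The “generate more of the same” idea can be illustrated with a toy bigram model: count which words follow which in a corpus, then sample continuations in proportion to their frequency. (A minimal sketch for illustration only; systems like LaMDA use neural networks trained on terabytes, not lookup tables over two sentences.)

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Abstract the 'latent structure': how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length=6):
    """Produce 'more of the same' by sampling frequent continuations."""
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:  # word never seen with a successor: stop
            break
        words, weights = zip(*nxt.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = ["the cat is on the bed", "the dog is on the mat"]
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Everything the model emits is a recombination of the shadows in its corpus; nothing connects the word forms to their referents.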

The richness of this intrinsic “latent” structure to speech (verbalized thought) is already illustrated by the power of simple Boolean operations like AND or NOT. The power of google search is a combination of (1) the power of local AND (say, restricted to sentences or paragraphs or documents) together with (2) the “PageRank” algorithm, which can weight words and word combinations by their frequency, inter-linkedness or citedness (or LIKEdness — or their LIKEdness by individual or algorithm X), plus, most important, (3) the underlying database of who knows how many terabytes of words so far. Algorithms as simple as AND can already do wonders in navigating that database; fancier algorithms can do even better.

LaMDA has not only data-mined that multi-terabyte word space with “unsupervised learning”, abstracting all the frequencies and correlations of words and combinations of words, from which it can then generate more of the same – or more of the same that sounds like a Republican, or Dan Dennett, or an Animé fan, or someone empathic or anxious to please (like LaMDA)… It can be tempered and tampered with by “influencer” algorithms too.

Something similar can be done with music: swallow music space and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projected combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

LaMDA does not pass the Turing Test because the Turing Test (despite the loose – or perhaps erroneous, purely verbal way Turing described it) is not a game about fooling people: it’s a way of testing theories of how brains (or anything) produce real thoughts. And verbal thoughts don’t just have word forms, and patterns of word-forms: They also have referents, which are real things and states in the world, hence meaning. The Platonic shadows of patterns of words do reflect – and are correlated with – what words, too, just reflect: but their connection with the real-world referents of those words is mediated by (indeed parasitic on) the brains of the real people who read and interpret them, and know their referents through their own real senses and their real actions in and on those real referents in the real world – the real brains and real thoughts of (sometimes) knowledgeable (and often credulous and gullible) real flesh-and-blood people in-the-world…

Just as re-combinatorics play a big part in the production (improvisation, composition) of music (perhaps all of it, once you add the sensorimotor affective patterns that are added by the sounds and rhythms of performance and reflected in the brains and senses of the hearer, which is not just an execution of the formal notes), word re-combinatorics no doubt play a role in verbal production too. But language is not “just” music (form + affect): words have meanings (semantics) too. And meaning is not just patterns of words (arbitrary formal symbols). That’s just (one, all powerful) way thoughts can be made communicable, from one thinking head to another. But neither heads, nor worlds, are just another bag-of-words – although the speaking head can be replaced, in the conversation, by LaMDA, who is just a bag of words, mined and mimed by a verbal database + algorithms.

And, before you ask, google images are not the world either.

The google people, some of them smart, and some of them not so smart (like Musk), are fantasists who think (incoherently) that they live in a Matrix. In reality, they are just lost in a hermeneutic hall of mirrors of their own creation. The Darwinian Blind Watchmaker, evolution, is an accomplice only to the extent that it has endowed real biological brains with a real and highly adaptive (but fallible, hence foolable) mind-reading “mirror” capacity for understanding the appearance and actions of their real fellow-organisms. That includes, in the case of our species, language, the most powerful mind-reading tool of all. This has equipped us to transmit and receive and decode one another’s thoughts, encoded in words. But it has made us credulous and gullible too.

It has also equipped us to destroy the world, and it looks like we’re well on the road to it…

P.S. LeMoine sounds like a chatbot too, or maybe a Gullibot…

Tell One, Tell All

Dogs are undeniably brilliant. They can communicate (some of) what they want and what they know, and they can perceive (some of) what other dogs as well as humans want and know. Stella is especially brilliant.

But neither Stella, nor any other dog, nor any other animal other than human, has language. They cannot communicate linguistically, which means propositionally.

Here is the simple reason why. (This example is just theoretical: I love to have my cat on my bed!):

A “sentence” is not just a string of words.

Pick the simplest of simple sentences: “The cat is on the bed.”

If you have a cat and you have a bed, Stella can learn to “call” them “cat” and “bed” by pressing a button that says “cat” and “bed.”

And Stella is definitely smart enough to learn (from your behavior) if you don’t want your cat to go on your bed. (You speak sternly to your cat when he goes on your bed, and you shoo him off.)

So, knowing that, Stella is definitely smart enough to go get you and bring you to the bed when she has seen that the cat is on the bed. She knows you don’t want him to do that, and maybe she also likes to see you shoo him off the bed.

All that is really there: She really does know all that; and she is really communicating it to you, intentionally.

And no doubt she can also learn to communicate it to you by pressing the buttons “cat” “on” “bed” (in place of herding you to the bed the old way).

All of that is incontestably true.

But Stella cannot say “(The) cat (is) on (the) bed.” — And not because she does not yet have a button for “the” and “is.” You could train those too.

The reason Stella cannot say “The cat is on the bed” is that “The cat is on the bed” is a subject/predicate proposition, with a truth-value (true). And if Stella could say and mean that proposition, then she could say and mean any and every proposition — including this very sentence, which is likewise a subject/predicate proposition, with a truth-value (true).

But she cannot. And if she cannot make any and every proposition, then she cannot make a proposition at all.

You will want to reply that it’s just because she doesn’t yet have all the necessary vocabulary (and, for the more complex sentence, she also does not have the interest).

But being able to say and mean any and every sentence is not just a matter of vocabulary. It is a capacity that comes with the territory, if you can really say and mean any proposition at all.

“(The) cat (is) on (the) bed” is not a string of words that you say whenever the cat is on the bed, any more than “bed” is just a sound you utter whenever you are looking at a bed.

Nouns are not just proper names of individuals. A bed is a kind of thing, a category of which all beds are members. So if the word “bed” refers to anything at all, it refers to a category, not an individual.

But dogs can categorize too. To categorize is to do the right thing with the right kind of thing (lie on beds and not-lie on thumbtacks). Dogs (like all mammals) can categorize — and learn to do the right thing with the right kind of thing.

One of the things we can learn to do with a category is name it: “(a) bed.”

But “bed” is not just a sound we make whenever we see a bed. It is a sound we make whenever we have a bed in mind. Whenever we wish to refer to a member of the category “bed.”

Another thing we can do is to describe a bed: “(a) bed is soft” (not true, but ok for an example). 

But “bed is soft” is not just a string of sounds we make whenever we see or have a soft bed in mind. It is a subject/predicate proposition, stating something to be true about the members of the category “bed”: that they are soft.

Now you may think that Stella can say and mean a proposition too. 

How could that be tested, one way or the other?

We have a clue in a property of human language (the only kind of language there is: the rest is just intentional communication capacities, which Stella certainly has): There is no such thing as a “partial language” — one in which you can make this proposition but not that.

If you have any language at all, you can make any proposition. Vocabulary and prior knowledge are not the problem. You can in principle teach Quantum Mechanics to any member of an isolated Amazon tribe that has had no contact outside his own community and language for thousands of years. You just have to teach the vocabulary and the theory, starting with the words they already have — which means through recombinatory propositions.

This kind of teaching is not training to “name” kinds of thing by their category names; nor is it training to “name” certain states of affairs (involving things and their properties) by certain strings of category names.

Teaching is using propositions (subject/predicate combinations of existing words) to communicate further categories (by defining and describing new categories, which may in turn have their own names). If you know the category to which “bed” refers and you know the category to which “soft” refers, you can communicate that beds are soft by just stating the proposition — if, that is, you have the capacity to express and understand a proposition at all.

There are deep unanswered questions here: Why is our species the only one that can communicate propositionally? Dogs (and apes and elephants and whales and crows) all seem brilliant enough to do it. Why don’t they? What do they lack? Is it specific cognitive capacity? Is it Chomsky’s Universal Grammar? Or is it just a motivational gap?

I don’t know. But I’m pretty sure that no other species has propositionality, otherwise some of them would be discussing it with us by now.
