Chatbots and Melbots

Something similar to LaMDA can be done with music: swallow all of online music space (both scores and digital recordings) and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projections from combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

Vocal mimicry includes also the mimicry of the acoustic patterns of the vocal expression of affect: anger, fear, sadness, hope. Dynamics, tempo, articulation, even its harmonic features. But these are affects (feelings), not thoughts. Feeling cold is not the same as thinking “I am cold,” which, miraculously, can be expressed by that verbal proposition that states what I am feeling. And language can state anything: “The cat is on the mat.” The words “cat,” “mat,” and “on” all have referents, things and states in the world they refer to; and we all know, from our sensorimotor experience, what they are. “Meow” imitates the sound a cat makes, but it does not refer to it the way referring words do. And a sentence is not just a series of referring words. It is a proposition, describing something. As such it also has a truth-value: If the cat is really on the mat, then “the cat is on the mat” is TRUE; otherwise FALSE. None of that is true of music (except of course in song, when the musical and the propositional are combined).

The notion of “transmitting thought” is a bit ambiguous. I transmit thought if I am thinking that the cat is on the mat, and then I say that the cat is on the mat. If instead of saying it, I mime it, like in charades, by imitating the appearance of the cat, maybe making meowing sounds, gesturing the shape of a mat and then its position, and then pantomiming a cat lying on the space that I’ve mimed as a mat, that would indeed transmit to another person the thought that the cat is on the mat. And sufficiently iconic and programmatic music can transmit that thought too, especially if augmented by dance (which is also pantomime).

[[I think language originated with communication by gestural pantomime; then the gestures became more and more conventional and arbitrary rather than iconic, and that’s also when the true/false proposition was born. But once propositional gesturing began, the vocal/auditory modality had huge practical advantages in propositional communication over the visual/gestural one [just think of what they all are] and language (and its representation in the brain) migrated to the speech/hearing areas where they are now.]]

Yes, music can express affect, and it can even express thought (iconically). But not only is vocal/acoustic imitation not the best of music, it need not be, for music can not only express affect (and mime some thought); it can also inspire and thereby accompany thought in the listener in the way a pianist or orchestra can accompany (and inspire) a violinist or vocalist.

But music is not propositional. It does not state something which is true or false. You cannot take a composer of (instrumental) music (Lieder ohne Worte) to court for having lied. (That’s why the Soviet Union could only oppress Shostakovich, but could not prove he had said anything false or treasonous.) Language (and thought) has semantics, not just form, nor just resemblance in shape to something else: it has propositional content, true or false.

It is true that it is much harder (perhaps impossible) to describe feelings in words, propositionally, than it is to express them, or imitate their expression iconically; but although it is true that it feels like something to think, and that every thought feels different, thinking is not just what it feels like to think something, but what that thought means, propositionally. One can induce the feeling of thinking that the cat is on the mat by miming it; but try doing that with the sentence that precedes this one. Or just about any other sentence. It is language that opened up the world of abstract thought (“truth,” “justice,” “beauty”) and its transmission. Music can transmit affect (feeling). But try transmitting the meaning of this very sentence in C# minor.

Not all (felt) brain states are just feelings (even though all thoughts are felt too). Thoughts also have propositional content. Music cannot express that propositional content. (And much of this exchange has been propositional, and about propositional content, not affective, “about” feeling.) And, again, what it feels like to think a proposition is not all (or most) of what thinking is, or what a thought means.

[[Although I don’t find it particularly helpful, some philosophers have pointed out that just as content words are about their referents (cats, mats), thoughts are about propositions. “The cat is on the mat” is about the cat being on the mat – true, if the cat really is on the mat, false, if not. Just as a mat is what is in your mind when you refer to a mat, the cat being on a mat is what the proposition “the cat is on the mat” is “about.” This is the “aboutness” that philosophers mean by their term “intentionality”: what your intended meaning is, the one you “have in mind” when you say, and mean: “the cat is on the mat.” None of this has any counterpart in music. What Beethoven had in mind with Eroica – and what he meant you to have in mind – was originally an admiration for Napoleon’s fight for freedom and democracy, and then he changed his mind, removed the dedication, and wanted you not to have that in mind, because he had realized it was untrue; but the symphony’s form remained the same (as far as I know, he did not revise it).

Shostakovich’s symphonies share with poetry the affective property of irony. He could say a symphony was about Lenin’s heroism, but could make it obvious to the sensitive listener that he meant the opposite (although in the 12th symphony he revised it because he feared the irony in the original was too obvious; the result was not too successful). But poetry can be both literal – which means propositional – and figurative – which means metaphorical; more a representation or expression of a similarity (or a clash) in form than the verbal proposition in which it is expressed (“my love is a red, red rose”).

Music cannot express the literal proposition at all. And even the metaphor requires a vocal (hence verbal) text, which is then “accompanied” by the music, which may express or interpret the literal words affectively, as Bach does with his cantatas. Even Haydn’s Creation depends on the implicit “sub-titling” provided by the biblical tale everyone knows. – But all of this is too abstract and strays from the original question of whether LaMDA feels, understands, intends or means anything at all.]]

I’d say what LaMDA showed was that it is surprisingly easy to simulate and automate meaningful human thinking and speaking convincingly (once we have a gargantuan verbal database plus Deep Learning algorithms). We seem to be less perceptive of anomalies (our mind-reading skills are more gullible) there than in computer-generated music (so far), as well as in scores completed by lesser composers. But experts don’t necessarily agree (as with authentic paintings vs. imitations, or even regarding the value of the original). Some things are obvious, but not all, or always. (Is the completion of the Mozart Requiem as unconvincing as the recent completion of Beethoven’s 10th?)

The “symbol grounding problem” — the problem that the symbols of computation as well as language are not connected to their referents — is not the same as the “hard” problem of how organisms can feel. Symbols are manipulated according to rules (algorithms) that apply to the symbols’ arbitrary shapes, not their reference or meaning (if any).  They are only interpretable by us as having referents and meaning because our heads – and bodies – connect our symbols (words and descriptions) to their referents in the world through our sensorimotor capacities and experience.

But the symbol grounding problem would be solved if we knew how to build a robot that could identify and manipulate the referents of its words out there in the real world, as we do, as well as describe and discuss and even alter the states of affairs in the world through propositions, as we do. According to the Turing Test, once a robot can do all that, indistinguishably from any of us, to any of us (lifelong, if need be, not just for a 10-minute Loebner-Prize test), then we have no better or worse grounds for denying or affirming that the TT robot feels than we have with our fellow human beings.

So the symbol-grounding problem would be solved if it were possible to build a TT-passing robot, but the “hard” problem would not.

If it turned out that the TT simply cannot be successfully passed by a completely synthetic robot, then it may require a biorobot, with some, maybe most or all, of the biophysical and biochemical properties of biological organisms. Then it really would be racism to deny that it feels and to deny it human rights.

The tragedy is that there are already countless nonhuman organisms that do feel, and yet we treat them as if they didn’t, or as if it didn’t matter. That is a problem incomparably greater than the symbol-grounding problem, the other-minds problem, or the problem of whether LaMDA feels (it doesn’t).

(“Conscious” is just a weasel-word for “sentient,” which means: able to feel. And, no, it is not only humans who are sentient.)

LaMDA & LeMoine

About LaMDA & LeMoine: The global “big-data” corpus of all words spoken by humans is — and would still be, if it were augmented by a transcript of every word uttered and every verbal thought ever thought by humans  — just like the shadows on the wall of Plato’s cave: It contains all the many actual permutations and combinations of words uttered and written. All of that contains and reflects a lot of structure that can be abstracted and generalized, both statistically and algorithmically, in order to generate (1) more of the same, or (2) more of the same, but narrowed to a subpopulation, or school of thought, or even a single individual; and (3) it can also be constrained or biased, by superimposing algorithms steering it toward particular directions or styles.

The richness of this intrinsic “latent” structure to speech (verbalized thought) is already illustrated by the power of simple Boolean operations like AND or NOT. The power of google search is a combination of (1) the power of local AND (say, restricted to sentences or paragraphs or documents) together with (2) the “Page-rank” algorithm, which can weight words and word combinations by their frequency, inter-linkedness or citedness (or LIKEdness – or their LIKEdness by individual or algorithm X), plus, most important, (3) the underlying database of who knows how many terabytes of words so far. Algorithms as simple as AND can already do wonders in navigating that database; fancier algorithms can do even better.
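The two ingredients named here, local Boolean AND plus a frequency-based weighting, can be sketched in a few lines of Python. This is only a toy illustration (the documents, the query, and the scoring rule are invented; real search engines use far richer weightings, of which PageRank is one):

```python
# Toy sketch of AND-search plus frequency ranking (invented data).
docs = {
    "d1": "the cat sat on the mat and the cat purred",
    "d2": "dogs chase cats in the park",
    "d3": "the mat was red and the cat was black",
}

def and_search(query_terms, docs):
    """Keep only documents containing ALL query terms (Boolean AND),
    then rank the survivors by total term frequency (a crude stand-in
    for fancier link-based weightings)."""
    hits = []
    for name, text in docs.items():
        words = text.split()
        if all(t in words for t in query_terms):
            score = sum(words.count(t) for t in query_terms)
            hits.append((name, score))
    return sorted(hits, key=lambda h: h[1], reverse=True)

print(and_search(["cat", "mat"], docs))  # [('d1', 3), ('d3', 2)]
```

Even this crude AND-plus-counting already narrows a word space usefully; the point of the paragraph is that the real databases are terabytes, not three sentences.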

LaMDA has not only data-mined that multi-terabyte word space with “unsupervised learning,” abstracting all the frequencies and correlations of words and combinations of words, from which it can then generate more of the same – or more of the same that sounds like a Republican, or Dan Dennett, or an Animé fan, or someone empathic or anxious to please (like LaMDA). It can be tempered and tampered with by “influencer” algorithms too.

Something similar can be done with music: swallow music space and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projected combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

LaMDA does not pass the Turing Test because the Turing Test (despite the loose – or perhaps erroneous, purely verbal – way Turing described it) is not a game about fooling people: it’s a way of testing theories of how brains (or anything) produce real thoughts. And verbal thoughts don’t just have word forms, and patterns of word-forms: They also have referents, which are real things and states in the world, hence meaning. The Platonic shadows of patterns of words do reflect – and are correlated with – what words, too, just reflect: but their connection with the real-world referents of those words is mediated by (indeed parasitic on) the brains of the real people who read and interpret them, and know their referents through their own real senses and their real actions in and on those real referents in the real world – the real brains and real thoughts of (sometimes) knowledgeable (and often credulous and gullible) real flesh-and-blood people in the world.


Just as re-combinatorics play a big part in the production (improvisation, composition) of music (perhaps all of it, once you add the sensorimotor affective patterns that are added by the sounds and rhythms of performance and reflected in the brains and senses of the hearer, which is not just an execution of the formal notes), word re-combinatorics no doubt play a role in verbal production too. But language is not “just” music (form + affect): words have meanings (semantics) too. And meaning is not just patterns of words (arbitrary formal symbols). That’s just one (all-powerful) way thoughts can be made communicable, from one thinking head to another. But neither heads, nor worlds, are just another bag-of-words – although the speaking head can be replaced, in the conversation, by LaMDA, who is just a bag of words, mined and mimed by a verbal database + algorithms.

And, before you ask, google images are not the world either.

The google people, some of them smart and others (like Musk) not so smart, are fantasists who think (incoherently) that they live in a Matrix. In reality, they are just lost in a hermeneutic hall of mirrors of their own creation. The Darwinian Blind Watchmaker, evolution, is an accomplice only to the extent that it has endowed real biological brains with a real and highly adaptive (but fallible, hence foolable) mind-reading “mirror” capacity for understanding the appearance and actions of their real fellow-organisms. That includes, in the case of our species, language, the most powerful mind-reading tool of all. This has equipped us to transmit and receive and decode one another’s thoughts, encoded in words. But it has made us credulous and gullible too.

It has also equipped us to destroy the world, and it looks like we’re well on the road to it.


P.S. LeMoine sounds like a chatbot too, or maybe a Gullibot…

Symbols, Objects and Features

0. It might help if we stop “cognitizing” computation and symbols. 

1. Computation is not a subset of AI. 

2. AI (whether “symbolic” AI or “connectionist” AI) is an application of computation to cogsci.

3. Computation is the manipulation of symbols based on formal rules (algorithms).

4. Symbols are objects or states whose physical “shape” is arbitrary in relation to what they can be used and interpreted as referring to.

5. An algorithm (executable physically as a Turing Machine) manipulates symbols based on their (arbitrary) shapes, not their interpretations (if any).

6. The algorithms of interest in computation are those that have at least one meaningful interpretation.

7. Examples of symbol shapes are numbers (1, 2, 3), words (one, two, three; onyx, tool, threnody), or any object or state that is used as a symbol by a Turing Machine that is executing an algorithm (symbol-manipulation rules).

8. Neither a sensorimotor feature of an object in the world, nor a sensorimotor feature-detector of a robot interacting with the world, is a symbol (except in the trivial sense that any arbitrary shape can be used as a symbol).

9. What sensorimotor features (which, unlike symbols, are not arbitrary in shape) and sensorimotor feature-detectors (whether “symbolic” or “connectionist”) might be good for is connecting symbols inside symbol systems (e.g., robots) to the outside objects that they can be interpreted as referring to.

10. If you are interpreting “symbol” in a wider sense than this formal, literal one, then you are closer to lit-crit than to cogsci.
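Points 3–5 can be illustrated with a minimal sketch: a rule table that rewrites tokens purely by their shapes. Renaming every symbol consistently leaves the computation intact, because no step ever consults what (if anything) the symbols mean. The rules and tapes below are invented for illustration:

```python
# Toy symbol manipulation: rewrite adjacent pairs of tokens by rule,
# consulting only their arbitrary shapes, never any interpretation.
def rewrite(tape, rules):
    """Repeatedly replace the first matching adjacent pair until no rule applies."""
    changed = True
    while changed:
        changed = False
        for i in range(len(tape) - 1):
            pair = (tape[i], tape[i + 1])
            if pair in rules:
                tape = tape[:i] + [rules[pair]] + tape[i + 2:]
                changed = True
                break
    return tape

# One possible interpretation of these rules: XOR-reduction, i.e. parity.
rules = {("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1", ("1", "1"): "0"}
print(rewrite(["1", "0", "1", "1"], rules))  # ['1'] -- interpretable as "odd parity"

# Systematically swap the shapes (0 -> "X", 1 -> "Y"). The machine neither
# knows nor cares: the computation is the same up to renaming.
rename = {"0": "X", "1": "Y"}
rules2 = {(rename[a], rename[b]): rename[c] for (a, b), c in rules.items()}
print(rewrite(["Y", "X", "Y", "Y"], rules2))  # ['Y']
```

The interpretation (“parity”) lives in our heads; the machine only shuffles shapes. That is the sense in which symbols are arbitrary.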

Propositional Placebo

To learn to categorize is to learn to do the correct thing with the correct kind of thing. In cognition much (though not all) of learning is learning to categorize.

We share two ways to learn categories with many other biological species: (1) unsupervised learning (which is learning from mere repeated exposure, without any feedback) and (2) supervised (or reinforcement) learning (learning through trial and error, guided by corrective feedback that signals whether we’ve done the correct or incorrect thing).

In our brains are neural networks that can learn, through trial, error and corrective feedback, to detect and abstract the features that distinguish the members from the nonmembers of a category, so that once our brains have detected and abstracted the distinguishing features, we can do the correct thing with the correct kind of thing.

Unsupervised and supervised learning can be time-consuming and risky, especially if you have to learn to distinguish what is edible from what is toxic, or who is friend from who is foe.
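Supervised category learning of the kind just described can be sketched with a perceptron-style learner whose feature weights are nudged only when it does the incorrect thing. The features, labels, and learning rate below are all invented for illustration (edible = 1, toxic = 0):

```python
# Toy supervised category learning by trial, error and corrective feedback.
# Each experience: (features, label). Features: [is_red, is_small, has_spots]
samples = [
    ([1, 0, 0], 1),  # red, large, no spots  -> edible
    ([1, 1, 0], 1),  # red, small, no spots  -> edible
    ([0, 1, 1], 0),  # not red, small, spots -> toxic
    ([0, 0, 1], 0),  # not red, large, spots -> toxic
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Corrective feedback: nudge the weights only when the guess was wrong.
for _ in range(10):                  # a few passes over the experiences
    for x, label in samples:
        error = label - predict(x)   # 0 if correct, +1 or -1 if incorrect
        for i in range(len(weights)):
            weights[i] += 0.1 * error * x[i]
        bias += 0.1 * error

print([predict(x) for x, _ in samples])  # [1, 1, 0, 0]
```

After a few corrected trials the learner has abstracted the distinguishing feature (here, redness) and does the correct thing with the correct kind of thing. The “deep learning” models mentioned later are elaborations of this same feedback-driven feature detection.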

We are the only species that also has a third way of learning categories: (3) language.

Language probably first evolved around 200,000 years ago from pointing, imitation, miming and other kinds of purposive gestural communication, none of which are language.

Once gesture turned into language — a transformation I will discuss in a moment — it migrated, behaviorally and neurologically, to the much more efficient auditory/vocal medium of speech and hearing.

Gesture is slow, and also ineffective in the dark, or at a distance, or when your hands are occupied doing other things. But before that migration to the vocal medium, language itself first had to begin, and there gesturing had a distinct advantage over vocalizing, an advantage that semioticians call “iconicity”: the visual similarity between the gesture and the object or action that the gesture is imitating.

Language is much more than just naming categories (like “apple”). The shape of words in language is arbitrary; words do not resemble the things they refer to. And language is not isolated words, naming categories. It is strings of words with certain other properties beyond naming categories.

Imitative gestures do resemble the objects and actions that they imitate, but imitation, even with the purpose of communicating something, is not language. The similarity between the gesture and the object or action that the gesture imitates does, however, establish the connection between them. This is the advantage of “iconicity.”

The scope of gestural imitation, which is visual iconicity, is much richer than the scope of acoustic iconicity. Just consider how many more of the objects, actions and events in daily life can be imitated gesturally than can be imitated vocally.

So gesture is a natural place to start. There are gestural theories of the origin of language and vocal theories of the origin of language. I think that gestural origins are far more likely, initially, mainly because of iconicity. But keep in mind the eventual advantages of the vocal medium over the gestural one. Having secured first place because of the rich scope of iconicity in gesture, iconicity can quickly become a burden, slowing and complicating the communication.

Consider gesturing, by pantomime, that you want something to eat. The gesture for eating might be an imitation of putting something into your mouth. But if that gesture becomes a habitual one in your community, used every day, there is no real need for the full-blown icon time after time. It could be abbreviated and simplified, say just pointing to your mouth, or just making an upward pointing movement.

These iconic abbreviations could be shared by all the members of the gesturally communicating community, from extended family to tribe, because it is to everyone’s advantage to economize on the gestures used over and over to communicate. This shared practice, with the iconicity continuously fading and becoming increasingly arbitrary, would keep inheriting the original connection established through full-blown iconicity.

The important thing to note is that this form of communication would still only be pantomime, still just showing, not telling, even when the gestures have become arbitrary. Gestures that have shed their iconicity are still not language. They only become language when the names of categories can be combined into subject/predicate propositions that describe or define a named category’s features. That’s what provides the third way of learning categories, the one that is unique to our species. The names of categories, like “apple,” and their features (which are also named categories, like “red” and “round”) can then be combined and recombined to define and describe further categories, so that someone who already knows the features of a category can tell someone who doesn’t know: “A ‘zebra’ is a horse with stripes.” Mime is showing. Language is telling.
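The zebra example can be sketched computationally: given already-grounded detectors for “horse” and “striped,” the single proposition “a zebra is a horse with stripes” yields a working detector for the new category with no further trial and error. Everything below (the detectors, the animals, the feature names) is invented for illustration:

```python
# Directly grounded feature detectors (stand-ins for categories learned
# the slow way, through sensorimotor trial and error).
def is_horse(animal):
    return animal.get("shape") == "horse-like"

def is_striped(animal):
    return animal.get("coat") == "striped"

# The "proposition": zebra = horse AND striped. A new category acquired
# by hearsay, composed entirely from named, already-grounded categories.
def is_zebra(animal):
    return is_horse(animal) and is_striped(animal)

animals = [
    {"name": "Dobbin", "shape": "horse-like", "coat": "plain"},
    {"name": "Zeb",    "shape": "horse-like", "coat": "striped"},
    {"name": "Tiger",  "shape": "cat-like",   "coat": "striped"},
]
print([a["name"] for a in animals if is_zebra(a)])  # ['Zeb']
```

The hearer never needed a single trial with a zebra; but the definition only works because its feature names were already grounded, which is the precondition discussed below.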

Propositions, unlike imitative gestures, or even arbitrary category-names, have truth values: True or False. This should remind you, though, of the category learning by trial and error we share with other species, under the guidance of positive and negative feedback: Correct or Incorrect.

True and False is related to another essential feature of language, which is negation. A proposition can either affirm something or deny something: “It is true that an apple is red and round” or “It is not true that an apple is red and round.” P or not-P.

The trouble with being able to learn categories only directly, through unsupervised and supervised learning, is that it is time-consuming, risky, and not guaranteed to succeed (in time). It is also impoverished: most of the words in our vocabularies and dictionaries are category names; but other than the concrete categories that can be learned from direct sensorimotor trial-and-error-experience (“apple,” “red,” “give,” “take”), most category names cannot be learned without language at all. (All category names, even proper names, refer to abstractions, because they are based on abstracting the features that distinguish them. But consider how we could have learned the even more abstract category of “democracy” or “objectivity” without the words to define or describe them by their features, through unsupervised and supervised learning alone.)

When categories are learned directly through unsupervised learning (from sensorimotor feature-correlations) or supervised learning (from correlations between sensorimotor features and doing the right or wrong thing) the learning consists of the detection and abstraction of the features that distinguish the members from the non-members of the category. To learn to do the correct thing with the correct kind of thing requires learning – implicitly or explicitly, i.e., unconsciously or consciously – to detect and abstract those distinguishing features.

Like nonhuman species, we can and do learn a lot of categories that way; and there are computational models for the mechanism that can accomplish the unsupervised and supervised learning, “deep learning” models. But, in general, nonhuman animals do not name the things they can categorize. Or, if you like, the “names” of those categories are not arbitrary words but the things they learn to do with the members and not to do with the members of other categories. Only humans bother to name their categories. Why?

What is a name? It is a symbol (whether vocal, gestural, or written) whose shape is arbitrary (i.e., it does not resemble the thing it names). Its use is based on a shared convention among speakers: English-speakers all agree to call cats “cats” and dogs “dogs.” Names of categories are “content words”: words that have referents: nouns, adjectives, verbs, adverbs. Almost all words are content words. There exist also a much smaller number of “function words,” which are only syntactic or logical, such as “the,” “if,” “and,” “when,” “who”: They don’t have referents; they just have “uses” – usually defined or definable by a syntactic rule.

(Unlike nouns and adjectives, verbs do double duty, both in (1) naming a referent category (just as nouns and adjectives do, for example, “cry”) and in (2) marking predication, which is the essential function of propositions, distinguishing them from just compound content words: “The baby is crying” separates the content word, which has a referent – “crying” – from the predicative function of the copula: “is”. “The crying baby” is not a proposition; it is just a noun phrase, which is like a single word, and has a referent, just as “the baby” does. But the proposition “The baby is crying” does not have a referent: it has a sense – that the baby is crying – and a truth value (True or False).)

It is with content words that the gestural origin of language is important: Because before a category name can become a shared, arbitrary convention of a community of users, it has to get connected to its referent. Iconic communication (mime) is based on the natural connection conveyed by similarity, like the connection between an object and its photo. 

(In contrast, pointing – “ostension” – is based on shared directed attention from two viewers. Pointing alone cannot become category-naming, as it is dependent on a shared line of gaze, and on what is immediately present at the time (“context”); it survives in language only with “deictic” words like “here,” “this,” “now,” “me,” which have no referent unless you are “there” too, to see what’s being pointed at!)

A proposition can be true or false, but pointing and miming cannot, because they are not proposing or predicating anything; just “look!”. Whatever is being pointed at is what is pointed at, and whatever a gesture resembles, it resembles. Resemblance can be more or less exact, but it cannot be true or false; it cannot lie.

(Flattering portraiture is still far away in these prelinguistic times, but it is an open question whether iconography began before or after language (and speech); so fantasy, too, may have preceded falsity. Copying and depicting is certainly compatible with miming; both static and dynamic media are iconic.)

It is not that pointing and miming, when used for intentional communication, cannot mislead or deceive. There are some examples in nonhuman primate communication of actions done to intentionally deceive (such as de Waal’s famous case of a female chimpanzee who positions her body behind a barrier so the alpha male can only see her upper body while she invites a preferred lower-ranking male to mate with her below the alpha’s line of sight, knowing that the alpha male would attack them both if he saw that they were copulating).

But, in general, in the iconic world of gesture and pointing, seeing is believing and deliberate deception does not seem to have taken root within species. The broken-wing dance of birds, to lure predators away from their young, is deliberate deception, but it is deception between species, and the disposition also has a genetic component.

Unlike snatch-and-run pilfering, which does occur within species, deliberate within-species deceptive communication (not to be confused with unconscious, involuntary deception, such as concealed ovulation or traits generated by “cheater genes”) is rare. Perhaps this is because there is little opportunity or need for deceptive communication within species that are social enough to have extensive social communication at all. (Cheaters are detectable and punishable in tribal settings — and human prelinguistic settings were surely tribal.) Perhaps social communication itself is more likely to develop in a cooperative rather than a competitive or agonistic context. Moreover, the least likely setting for deceptive communication is perhaps also the most likely setting for the emergence of language: within the family or extended family, where cooperative and collaborative interests prevail, including both food-sharing and knowledge-sharing.

(Infants and juveniles of all species learn by observing and imitating their parents and other adults; there seems to be little danger that adults are deliberately trying to fool them into imitating something that is wrong or maladaptive. What would be the advantage to adults in doing that?)

But in the case of linguistic communication – which means propositional communication – it is hard to imagine how it could have gotten off the ground at all unless propositions were initially true, and assumed and intended to be true, by default. 

It is not that our species did not soon discover the rewards to be gained by lying! But that only became possible after the capacity and motivation for propositional communication had emerged, prevailed and been encoded in the human brain as the strong and unique predisposition it is in our species. Until then there was only pointing and miming, which, being non-propositional, cannot be true or false, even though it can in principle be used to deceive.

So I think the default hypothesis of truth was encoded in the brains of human speakers and hearers as an essential feature of the enormous adaptive advantage (indeed the nuclear power) of language in transmitting categories without the need for unsupervised or supervised learning, instantly, via “hearsay.” The only preconditions are that (1) the speaker must already know the features (and their names) that distinguish the members from the nonmembers of the new category, so they can be conveyed to the hearer in a proposition defining or describing the new category; and (2) the hearer must already know the features (and their names) used to define the new category. (This is the origin of the “symbol grounding problem” and its solution.)

The reason it is much more likely that propositional language emerged first in the gestural modality rather than the vocal one is that gestures’ iconicity (i.e., their similarity to the objects they are imitating) first connected them to their objects (which would eventually become their referents) and thereafter the gestures were free to become less and less iconic as the gestural community – jointly and gradually – simplified them to make communication faster and easier.

How do the speakers or hearers already know the features (all of which are, of course, categories too)? Well, either directly, from having learned them, the old, time-consuming, risky, impoverished way (through supervised and unsupervised learning from experience) or indirectly, from having learned them by hearsay, through propositions from one who already knows the category to one who does not. Needless to say, the human brain, with its genetically encoded propositional capacity, has a default predilection for learning categories by hearsay (and a laziness about testing them out through direct experience).

The consequence is a powerful default tendency to believe what we are told — to assume hearsay to be true. The trait can take the form of credulousness, gullibility, susceptibility to cult indoctrination, or even hypnotic susceptibility. Some of its roots are already there in unsupervised and supervised learning, in the form of Pavlovian conditioning as well as operant expectancies based on prior experience. 

Specific expectations and associations can of course be extinguished by subsequent contrary experience: A diabetic’s hypoglycemic attack can be suppressed by merely tasting sugar, well before it could raise systemic glucose level (or even by just tasting saccharine, which can never raise blood sugar at all). But repeatedly “fooling the system” that way, without following up with enough sugar to restore homeostatic levels, will extinguish this anticipatory reaction. 

And, by the same token, we can and do learn to detect and disbelieve chronic liars. But the generic default assumption, expectation, and anticipatory physiological responses to verbal propositions remain strong with people and propositions in general – and in extreme cases they can even induce “hysterical” physiological responses, including hypnotic analgesia sufficient to allow surgical intervention without medication. And it can induce placebo effects as surely as it can induce Trumpian conspiracy theories.