Chatbots and Melbots

Something similar to LaMDA can be done with music: swallow all of online music space (both scores and digital recordings) and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projections from combinatorics have more scope with music, which, unlike language, really just is acoustic patterns based on recombination, plus some correlations with the acoustic patterns of human vocal affective expression, whereas words have not just forms but meanings.

Vocal mimicry also includes the mimicry of the acoustic patterns of the vocal expression of affect: anger, fear, sadness, hope. Dynamics, tempo, articulation, even harmonic features. But these are affects (feelings), not thoughts. Feeling cold is not the same as thinking “I am cold,” which, miraculously, can be expressed by that verbal proposition that states what I am feeling. And language can state anything: “The cat is on the mat.” The words “cat,” “mat,” and “on” all have referents, things and states in the world that they refer to; and we all know, from our sensorimotor experience, what they are. “Meow” imitates the sound a cat makes, but it does not refer to it the way referring words do. And a sentence is not just a series of referring words. It is a proposition, describing something. As such it also has a truth-value: if the cat is really on the mat, then “the cat is on the mat” is TRUE; otherwise FALSE. None of that is true of music (except of course in song, where the musical and the propositional are combined).

The notion of “transmitting thought” is a bit ambiguous. I transmit thought if I am thinking that the cat is on the mat, and then I say that the cat is on the mat. If, instead of saying it, I mime it, as in charades – imitating the appearance of the cat, maybe making meowing sounds, gesturing the shape of a mat and then its position, and then pantomiming the cat lying on the space that I’ve mimed as the mat… That would indeed transmit to another person the thought that the cat is on the mat. And sufficiently iconic and programmatic music can transmit that thought too, especially if augmented by dance (which is also pantomime).

[[I think language originated with communication by gestural pantomime; then the gestures became more and more conventional and arbitrary rather than iconic, and that is also when the true/false proposition was born. But once propositional gesturing began, the vocal/auditory modality had huge practical advantages over the visual/gestural one for propositional communication [just think of what they all are], and language (and its representation in the brain) migrated to the speech/hearing areas, where it is now.]]

Yes, music can express affect, and it can even express thought (iconically). But not only is vocal/acoustic imitation not the best of music; it need not be, for music can not only express affect (and mime some thought) but also inspire, and thereby accompany, thought in the listener, the way a pianist or orchestra can accompany (and inspire) a violinist or vocalist.

But music is not propositional. It does not state something that is true or false. You cannot take a composer of (instrumental) music (Lieder ohne Worte, “songs without words”) to court for having lied. (That’s why the Soviet Union could only oppress Shostakovich, but could not prove he had said anything false or treasonous.) Language (and thought) has semantics, not just form, nor just resemblance in shape to something else: it has propositional content, true or false.

It is true that it is much harder (perhaps impossible) to describe feelings in words, propositionally, than it is to express them, or to imitate their expression iconically. But although it is true that it feels like something to think, and that every thought feels different, thinking is not just what it feels like to think something; it is also what that thought means, propositionally. One can induce the feeling of thinking that the cat is on the mat by miming it; but try doing that with the sentence that precedes this one. Or just about any other sentence. It is language that opened up the world of abstract thought (“truth,” “justice,” “beauty”) and its transmission. Music can transmit affect (feeling); but try transmitting the meaning of this very sentence in C# minor…

Not all (felt) brain states are just feelings (even though all thoughts are felt too). Thoughts also have propositional content, and music cannot express that propositional content. (Much of this exchange has been propositional, and about propositional content: not affective, not “about” feeling. And, again, what it feels like to think a proposition is not all (or most) of what thinking is, or of what a thought means.)

[[Although I don’t find it particularly helpful, some philosophers have pointed out that just as content words are about their referents (cats, mats), thoughts are about propositions. “The cat is on the mat” is about the cat being on the mat – true if the cat really is on the mat, false if not. Just as a mat is what is in your mind when you refer to a mat, the cat being on a mat is what the proposition “the cat is on the mat” is “about.” This is the “aboutness” that philosophers mean by their term “intentionality”: what your intended meaning is, the one you “have in mind” when you say, and mean, “the cat is on the mat.” None of this has any counterpart in music. What Beethoven had in mind with the Eroica – and what he meant you to have in mind – was originally an admiration for Napoleon’s fight for freedom and democracy; then he changed his mind, removed the dedication, and wanted you not to have that in mind, because he had realized it was untrue. But the symphony’s form remained the same (as far as I know, he did not revise it).

Shostakovich’s symphonies share with poetry the affective property of irony. He could say a symphony was about Lenin’s heroism, yet make it obvious to the sensitive listener that he meant the opposite (although he revised the 12th Symphony because he feared the irony in the original was too obvious; the result was not too successful…). But poetry can be both literal – which means propositional – and figurative – which means metaphorical: more a representation or expression of a similarity (or a clash) in form than the verbal proposition in which it is expressed (“my love is a red, red rose”).

Music cannot express the literal proposition at all. And even the metaphor requires a vocal (hence verbal) text, which is then “accompanied” by the music, which may express or interpret the literal words affectively, as Bach does in his cantatas. Even Haydn’s Creation depends on the implicit “sub-titling” provided by the biblical tale everyone knows. – But all of this is too abstract and strays from the original question of whether LaMDA feels, understands, intends or means anything at all…]]

I’d say what LaMDA showed was that it is surprisingly easy to simulate and automate meaningful human thinking and speaking convincingly (once we have a gargantuan verbal database plus Deep Learning algorithms). We seem to be less perceptive of anomalies in generated language (our mind-reading skills are more gullible there) than in computer-generated music (so far), or in scores completed by lesser composers. But experts don’t necessarily agree (as with authentic paintings vs. imitations, or even regarding the value of the original). Some things are obvious, but not all, or always. (Is the completion of the Mozart Requiem as unconvincing as the recent completion of Beethoven’s 10th?)

The “symbol grounding problem” — the problem that the symbols of computation, like those of language, are not connected to their referents — is not the same as the “hard” problem of how organisms can feel. Symbols are manipulated according to rules (algorithms) that apply to the symbols’ arbitrary shapes, not to their reference or meaning (if any). They are interpretable by us as having referents and meaning only because our heads – and bodies – connect our symbols (words and descriptions) to their referents in the world through our sensorimotor capacities and experience.
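(To make that point concrete, here is a minimal sketch in Python – a toy of my own, not anything from the exchange above, with the rule, the tokens, and the function name all hypothetical. The program rewrites the tokens “cat,” “on,” and “mat” purely by matching their arbitrary shapes; nothing in it touches what the tokens refer to. Any meaning in the input or output is supplied entirely by us, the interpreters.)

```python
# A toy "symbol system": a rewrite rule that applies to token shapes only.
# The program has no access to what (if anything) the tokens refer to;
# "cat", "on", and "mat" could be replaced by any other arbitrary shapes
# and the computation would proceed exactly the same way.

RULES = {
    ("cat", "on", "mat"): ("mat", "under", "cat"),  # a purely formal rewrite
}

def rewrite(tokens):
    """Apply a shape-matching rule; no referents or meanings involved."""
    return RULES.get(tuple(tokens), tuple(tokens))

print(rewrite(["cat", "on", "mat"]))  # -> ('mat', 'under', 'cat')
```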

But the symbol grounding problem would be solved if we knew how to build a robot that could identify and manipulate the referents of its words out there in the real world, as we do, as well as describe and discuss and even alter states of affairs in the world through propositions, as we do. According to the Turing Test, once a robot can do all that, indistinguishably from any of us, to any of us (lifelong, if need be, not just for a 10-minute Loebner-Prize test), then we have no better or worse grounds for denying or affirming that the TT robot feels than we have with our fellow human beings.

So the symbol-grounding problem would be solved if it were possible to build a TT-passing robot, but the “hard” problem would not.

If it turned out that the TT simply cannot be passed by a completely synthetic robot, then passing it may require a biorobot, with some, maybe most or all, of the biophysical and biochemical properties of biological organisms. Then it really would be racism to deny that it feels, and to deny it human rights.

The tragedy is that there are already countless nonhuman organisms that do feel, and yet we treat them as if they didn’t, or as if it didn’t matter. That is a problem incomparably greater than the symbol-grounding problem, the other-minds problem, or the problem of whether LaMDA feels (it doesn’t).

(“Conscious” is just a weasel-word for “sentient,” which means: able to feel. And, no, it is not only humans who are sentient.)
