Understanding Understanding

HARNAD: Chatting with GPT is really turning out to be an exhilarating experience – especially for a skywriting addict like me!

From the very beginning I had noticed that skywriting can be fruitful even when you are "jousting with pygmies" (as in the mid-1980s on comp.ai, where it first gave birth to the idea of "symbol grounding").

Who would have thought that chatting with software that has swallowed a huge chunk of 2021-vintage text, and that has the capacity to process and digest it coherently and interactively without being able to understand a word of it, could nevertheless – for users who do understand the words, because the words are grounded in their heads on the basis of learned sensorimotor features and language – provide infinitely richer (vegan) food for thought than anything ever before could, including other people (both pygmies and giants) and books?

Have a look at this: Language Evolution and Direct vs Indirect Symbol Grounding. (In the next installment I will move on to Noam Chomsky's hunch about UG and the constraints on thinkable thought.)

Anon: But doesn't it worry you that GPT-4 can solve reasoning puzzles of a type it has never seen before without understanding a word?

HARNAD: I agree with all the security, social and political worries. But the purely intellectual capacities of GPT-4 (if they are separable from these other risks), and especially the kind of capacity you mention here, inspire not worry but wonder.

GPT is a smart super-book that contains (potentially) the entire scholarly and scientific literature, with which real human thinkers can now interact dynamically – to learn from, and to build upon.

I hope the real risks don't overpower the riches. (I'll be exploiting those while they last… Have a look at the chat I linked if you want to see what I mean.)

Anon: No. It is not like a book. It converts symbolic information into interactions between features that it invents. From those interactions it can reconstruct what it has read, as well as generate new stuff. That is also what people do. It understands in just the same way that you or I do.

HARNAD: That's all true (although "understands" is not quite the right word for what GPT is actually doing!).

I'm talking only about what a real understander/thinker like you or me can use GPT for, if that user's sole interest and motivation is in developing scientific and scholarly (and maybe even literary and artistic) ideas.

For such users GPT is just a richly (but imperfectly) informed talking book to bounce ideas off – with full knowledge that it does not understand a thing – but one that has access to a lot of current knowledge (as well as current misunderstandings, disinformation, and nonsense) at its fingertips.

Within a single session, GPT is informing me, and I am "informing" GPT, the way its database is informing it.

Anon: You are doing a lot of hallucinating. You do not understand how it works, so you have made up a story.

HARNAD: I'm actually not quite sure why you would say I'm hallucinating. I'm describing what I actually use GPT for, and how. That doesn't require any hypotheses from me as to how it's doing what it's doing.

I do know it's based on unsupervised and supervised training on an enormous text base (plus some additional direct tweaking with reinforcement learning from human feedback). I know it creates a huge "parameter" space derived from word frequencies and co-occurrence frequencies. In particular, for every consecutive pair of words within a context of words (or a sample) it weights the probability of the next word somehow, changing the parameters in its parameter space – which, I gather, includes updating the text it is primed to generate. I may have it garbled, but it boils down to what Emily Bender called a "stochastic parrot," except that parrots are just echolalic, saying back, by rote, what they heard, whereas GPT generates and tests new texts under the constraint of not only "reductive paraphrasing and summary" but also inferencing.
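(For readers who want a concrete picture of what "weighting the probability of the next word" means, here is a deliberately toy sketch in Python. It is only a bigram frequency counter, a hypothetical illustration of estimating next-word probabilities from co-occurrence counts; GPT's actual "parameter space" is a trained neural network over vast corpora, not a lookup table like this.)

```python
# Toy illustration only: estimate next-word probabilities from
# bigram (consecutive word-pair) counts in a tiny corpus. Real LLMs
# learn such probabilities with neural networks; nothing here
# reflects GPT's actual architecture.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(prev):
    """Relative frequencies of the words observed to follow `prev`."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.666..., 'mat': 0.333...}
```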

No matter how much of this I have got technically wrong, nothing I said about how I'm using GPT-4, what I'm doing with it, and what I'm getting out of it, depends on the technical details that produce its performance. And it certainly doesn't depend in any way on my assuming, or guessing, or inferring that GPT understands or thinks.

What I'm interested in is what is present in, and derivable from, huge bodies of human-generated text that makes it possible for GPT not only to perform as well as it does in extracting correct, usable information, but also to keep interacting with me, on a topic I know much better than GPT does, taking in more data (from me) that allows it to come back (to me) with supplementary information that allows me to continue developing my ideas. Books and web searches would not only have taken far too long for me to do that; it could not have been done by anyone without their fingers on everything that GPT has its (imperfect) fingers on.

And I don't think the answer is just GPT's data and analytic powers (which are definitely not thinking or understanding: I'm the only one doing all the understanding and thinking in our chats). It has something to do with what the structure of language itself preserves in these huge texts, ungrounded as it all is, on the static page as well as in GPT.

Maybe my hunch is wrong, but I'm being guided by it so far with eyes wide open, no illusions whatsoever about GPT, and no need to know the technical details of how GPT does what it does any better than I do.

(Alas, in the current version, GPT-4 forgets what it has learned during a session once that session is closed, so it returns to its prior informational state in a new session, with its huge chunk of preprocessed text data, vintage 2021. But I expect that will soon be improved, allowing it to store user-specific data (with user permission) and to make all interactions with a user one endless session; it will also have a continuously updated scholarly/scientific text base, parametrized, and open web access.)

But that's just the perspective of the disinterested intellectual inquirer. I am sure you are right to worry about the (perhaps inevitable) potential for malevolent use for commercial, political, martial, criminal, cult, and just plain idiosyncratic or sadistic purposes.

Anon: What I find so irritating is your confidence that it is not understanding despite your lack of understanding of how it works. Why are you so sure it doesn't understand?

HARNAD: Because (I have reasons to believe) understanding is a sentient state: "nonsentient" understanding is an empty descriptor. And I certainly don't believe LLMs are sentient.

Understanding, being a sentient state, is unobservable, but it does have observable correlates. With GPT (as in Searle's Chinese Room) the only correlate is interpretability – interpretability by real, thinking, understanding, sentient people. But that's not enough. It's just a projection onto GPT by sentient people (biologically designed to make that projection with one another: "mind-reading").

There is an important point about this in Turing (1950) and the "Turing Test." Turing proposed the test on the basis of the following two premises. (Bear in mind that I am an experimental and computational psychobiologist, not a philosopher, and I have no ambition to be one. Moreover, Turing was not a philosopher either.)

(1) Observation. The only thing we have, with one another, by way of evidence that we are sentient, is what we can observe. 

(2) Indistinguishability. If ever we build (or reverse-engineer) a device that is totally indistinguishable in anything and everything it can do, observably, from any other real, thinking, understanding, sentient human being, then we have no empirical (or rational) basis for affirming or denying of the device what we cannot confirm or deny of one another.

The example Turing used was the purely verbal Turing Test. But a purely verbal test cannot establish that a device is totally indistinguishable from us in all the things we can do. In fact, the most elementary and fundamental thing we can all do is completely absent from the purely verbal Turing Test (which I call "T2"): T2 does not test whether the words and sentences spoken are connected to the referents in the world that they are allegedly about (e.g., "apple" and apples). To test that, indistinguishable verbal capacity (T2) is not enough. It requires "T3": verbal and sensorimotor (i.e., robotic) capacity to recognize and interact with the referents of the words and sentences of T2, indistinguishably from the way any of us can do it.

So, for me (and, I think, Turing), without that T3 robotic capacity, any understanding in the device is just projection on our part.

On the other hand, if there were a GPT with robotic capacities that could pass T3 autonomously in the world (without cheating), I would fully accept (worry-free, with full "confidence") Turing's dictum that I have no grounds for denying of the GPT-T3 anything that I have no grounds for denying of any other thinking, understanding, sentient person.

You seem to have the confidence to believe a lot more on the basis of a lot less evidence. That doesn't irritate me! It's perfectly understandable, because our Darwinian heritage never prepared us for encountering thinking, talking, disembodied heads. Evolution is lazy, and not prescient, so it endowed us with mirror neurons for mind-reading comprehensible exchanges of speech; and those mirror neurons are quite gullible. (I feel the tug too, in my daily chats with GPT.)

[An example of cheating, by the way, would be to use telemetry: transducers and effectors from remote sensors that transform all sensory input from real external "referents" into verbal descriptions for the LLM (where is it located, by the way?), and that transform the verbal responses from the LLM into motor actions on its remote "referent."]

2 Replies to “Understanding Understanding”

  1. Stevan, this sounds like Geoff Hinton.

    I must agree, whoever it is, since there are far too many "stochastic parrots" babbling about what they think they know about GPTs. They do not. No one right now understands how these phenomena end up having fluid, logical conversations that are clearly "human-like" if not human. So claiming they don't do something is really the height and breadth of arrogance: we just don't know what they are doing, and consequently any such claims are pure speculation, and mostly arbitrary. Stephen

    1. They're not statistical parrots, but they're not understanding either. What they can do is remarkable, however; no one expected it, and I agree that no one has explained how they manage to do what they are able to do. (That's not enough to warrant a deep dive into credulity, though, just because it sounds as if it understands. That's just our mirror neurons misfiring.)
