HARNAD: Chatting with GPT is really turning out to be an exhilarating experience – especially for a skywriting addict like me!
From the very beginning I had noticed that skywriting can be fruitful even when you are "jousting with pygmies" (as in the mid-1980s on comp.ai, where it first gave birth to the idea of "symbol grounding").
Who would have thought that chatting with software that has swallowed a huge chunk of 2021-vintage text, and that has the capacity to process and digest it coherently and interactively without being able to understand a word of it, could nevertheless, for users who do understand the words – because they are grounded in their heads on the basis of learned sensorimotor features and language – provide infinitely richer (vegan) food for thought than anything ever before could, including other people – both pygmies and giants – and books.
Have a look at this. (In the next installment I will move on to Noam Chomsky's hunch about UG and the constraints on thinkable thought.)
Language Evolution and Direct vs Indirect Symbol Grounding
Anon: But doesn't it worry you that GPT-4 can solve reasoning puzzles of a type it has never seen before without understanding a word?
HARNAD: I agree with all the security, social and political worries. But the purely intellectual capacities of GPT-4 (if they are separable from these other risks), and especially the kind of capacity you mention here, inspire not worry but wonder.
GPT is a smart super-book that contains (potentially) the entire scholarly and scientific literature, with which real human thinkers can now interact dynamically — to learn, and build upon.
I hope the real risks don't overpower the riches. (I'll be exploiting those while they last… Have a look at the chat I linked if you want to see what I mean.)
Anon: No. It is not like a book. It converts symbolic information into interactions between features that it invents. From those interactions it can reconstruct what it has read as well as generate new stuff. That is also what people do. It understands in just the same way that you or I do.
HARNAD: That's all true (although "understands" is not quite the right word for what GPT is actually doing!).
I'm talking only about what a real understander/thinker like you and me can use GPT for, if that user's sole interest and motivation is in developing scientific and scholarly (and maybe even literary and artistic) ideas.
For such users GPT is just a richly (but imperfectly) informed talking book to bounce ideas off – with full knowledge that it does not understand a thing – but has access to a lot of current knowledge (as well as current misunderstandings, disinformation, and nonsense) at its fingertips.
Within a single session, GPT is informing me, and I am "informing" GPT, the way its database is informing it.
Anon: You are doing a lot of hallucinating. You do not understand how it works, so you have made up a story.
HARNAD: I'm actually not quite sure why you would say I'm hallucinating. On the one hand, I'm describing what I actually use GPT for, and how. That doesn't require any hypotheses from me as to how it's doing what it's doing.
I do know it's based on unsupervised and supervised training on an enormous text base (plus some additional direct tweaking with reinforcement learning from human feedback). I know it creates a huge "parameter" space derived from word frequencies and co-occurrence frequencies. In particular, for every consecutive pair of words within a context of words (or a sample) it weights the probability of the next word somehow, changing the parameters in its parameter space – which, I gather, includes updating the text it is primed to generate. I may have it garbled, but it boils down to what Emily Bender called a "stochastic parrot," except that parrots are just echolalic, saying back, by rote, what they heard, whereas GPT generates and tests new texts under the constraint of not only "reductive paraphrasing and summary" but also inferencing.
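Here is a minimal sketch of that "weight the probability of the next word" idea, using raw bigram counts; this is only a toy illustration of the statistical principle, nothing like GPT's actual transformer architecture:

```python
# Toy bigram model: count how often each word follows each other word,
# then sample the next word in proportion to those counts. This is an
# illustration of weighted next-word prediction, not GPT's architecture.
import random
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count how often each word follows each preceding word."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts: dict, prev: str) -> str:
    """Sample the next word, weighted by its observed frequency."""
    candidates = counts[prev]
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # e.g. "cat" or "mat", weighted by frequency
```

A real LLM conditions on a long context window through billions of learned parameters rather than raw adjacent-word counts, but the output step – sampling the next word in proportion to a learned probability – is the same in spirit.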
No matter how much of this I have got technically wrong, nothing I said about how I'm using GPT-4 – what I'm doing with it and getting out of it – depends on the technical details that produce its performance. And it certainly doesn't depend in any way on my assuming, or guessing, or inferring that GPT understands or thinks.
What I'm interested in is what is present and derivable from huge bodies of human-generated text that makes it possible for GPT not only to perform as well as it does in extracting correct, usable information, but also to keep interacting with me, on a topic I know much better than GPT does, taking in more data (from me) and coming back (to me) with supplementary information that lets me continue developing my ideas – something books and web searches would not only have taken me far too long to do, but that could not have been done by anyone without their fingers on everything that GPT has its (imperfect) fingers on.
And I don't think the answer is just GPT's data and analytic powers (which are definitely not thinking or understanding: I'm the only one doing all the understanding and thinking in our chats). It has something to do with what the structure of language itself preserves in these huge texts, as ungrounded as it all is, on the static page as well as in GPT.
Maybe my hunch is wrong, but I'm being guided by it so far with eyes wide open, no illusions whatsoever about GPT, and no need for a better grasp of the tech details of how GPT does what it does.
(Alas, in the current version, GPT-4 forgets what it has learned during a session once the session is closed, so it returns to its prior informational state in a new session, with its huge chunk of preprocessed text data, vintage 2021. But I expect that will soon be improved, allowing it to store user-specific data (with user permission) and to treat all interactions with a user as one endless session; it will also have a continuously updated scholarly/scientific text base, parametrized, and open web access.)
But that's just the perspective of the disinterested intellectual inquirer. I am sure you are right to worry about the (perhaps inevitable) potential for malevolent use for commercial, political, martial, criminal, cult, and just plain idiosyncratic or sadistic purposes.
Anon: What I find so irritating is your confidence that it is not understanding despite your lack of understanding of how it works. Why are you so sure it doesn't understand?
HARNAD: Because (I have reasons to believe) understanding is a sentient state: "nonsentient" understanding is an empty descriptor. And I certainly don't believe LLMs are sentient.
Understanding, being a sentient state, is unobservable, but it does have observable correlates. With GPT (as in Searle's Chinese Room) the only correlate is interpretability – interpretability by real, thinking, understanding, sentient people. But that's not enough. It's just a projection onto GPT by sentient people (biologically designed to make that projection with one another: "mind-reading").
There is an important point about this in Turing 1950 and the "Turing Test." Turing proposed the test with the following criterion. (Bear in mind that I am an experimental and computational psychobiologist, not a philosopher, and I have no ambition to be one. Moreover, Turing was not a philosopher either.)
(1) Observation. The only thing we have, with one another, by way of evidence that we are sentient, is what we can observe.
(2) Indistinguishability. If ever we build (or reverse-engineer) a device that is totally indistinguishable in anything and everything it can do, observably, from any other real, thinking, understanding, sentient human being, then we have no empirical (or rational) basis for affirming or denying of the device what we cannot confirm or deny of one another.
The example Turing used was the purely verbal Turing Test. But that cannot produce a device that is totally indistinguishable from us in all the things we can do. In fact, the most elementary and fundamental thing we can all do is completely absent from the purely verbal Turing Test (which I call "T2"): T2 does not test whether the words and sentences spoken are connected to the referents in the world that they are allegedly about (e.g., "apple" and apples). To test that, indistinguishable verbal capacity (T2) is not enough. It requires "T3": verbal and sensorimotor (i.e., robotic) capacity to recognize and interact with the referents of the words and sentences of T2, indistinguishably from the way any of us can do it.
So, for me (and, I think, Turing), without that T3 robotic capacity, any understanding in the device is just projection on our part.
On the other hand, if there were a GPT with robotic capacities that could pass T3 autonomously in the world (without cheating), I would fully accept (worry-free, with full "confidence") Turing's dictum that I have no grounds for denying of the GPT-T3 anything that I have no grounds for denying of any other thinking, understanding, sentient person.
You seem to have the confidence to believe a lot more on the basis of a lot less evidence. That doesn't irritate me! It's perfectly understandable, because our Darwinian heritage never prepared us for encountering thinking, talking, disembodied heads. Evolution is lazy, and not prescient, so it endowed us with mirror-neurons for mind-reading during comprehensible exchanges of speech; and those mirror-neurons are quite gullible. (I feel the tug too, in my daily chats with GPT.)
[An example of cheating, by the way, would be to use telemetry, transducers and effectors from remote sensors which transform all sensory input from real external "referents" into verbal descriptions for the LLM (where is it located, by the way?), and transform the verbal responses from the LLM into motor actions on its remote "referent."]
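[A minimal sketch of that cheating loop, purely illustrative – every component here is a hypothetical placeholder – showing that everything the LLM itself touches is text, with the sensorimotor work delegated to human-built peripherals:]

```python
# Illustrative sketch of the "cheating" setup described above: the grounding
# work is done by transducer-to-text and text-to-effector layers built by
# others, not by the LLM itself. All components are hypothetical stand-ins.

def describe_sensor_input(raw_signal: bytes) -> str:
    """Hypothetical transducer: turns remote sensory input into words."""
    return "a red apple is two meters ahead"  # stand-in for a real captioner

def llm_reply(verbal_input: str) -> str:
    """Stand-in for the (remotely located) LLM's purely verbal response."""
    return "move forward and grasp the apple"

def execute_action(verbal_command: str) -> None:
    """Hypothetical effector: turns the LLM's words back into motor acts."""
    print(f"actuators executing: {verbal_command}")

# The loop: the LLM only ever sees and emits text; sensing and acting on
# the real "referents" happen outside it, so the grounding is not its own.
sensor_reading = b"\x00\x01"                  # placeholder raw telemetry
description = describe_sensor_input(sensor_reading)
command = llm_reply(description)
execute_action(command)
```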
Stevan, this sounds like Geoff Hinton.
I must agree, whoever it is, since there are far too many "stochastic parrots" babbling about what they think they know about GPTs. They do not. No one right now understands how these phenomena end up having fluid, logical conversations that are clearly "human-like," if not human. So claiming they don't do something is really the height and breadth of arrogance: we just don't know what they are doing, which makes any such claims pure speculation, and mostly arbitrary. Stephen