{"id":1939,"date":"2023-06-13T12:11:47","date_gmt":"2023-06-13T11:11:47","guid":{"rendered":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=1939"},"modified":"2023-06-13T15:50:27","modified_gmt":"2023-06-13T14:50:27","slug":"mirror-of-the-mind","status":"publish","type":"post","link":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2023\/06\/13\/mirror-of-the-mind\/","title":{"rendered":"Understanding Understanding"},"content":{"rendered":"\n<p><strong>HARNAD:<\/strong>   Chatting with GPT is really turning out to be an exhilarating experience \u2013 especially for a&nbsp;<a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2011\/05\/sky-writing-or-when-man-first-met-troll\/239420\/\">skywriting addict<\/a> like me!&nbsp;<\/p>\n\n\n\n<p>From the very beginning I had noticed that skywriting can be fruitful even when you are \u201c<a href=\"https:\/\/eur03.safelinks.protection.outlook.com\/?url=https%3A%2F%2Fbooks.google.ca%2Fbooks%3Fid%3DIsV0lfLzuhgC%26pg%3DPA100%26lpg%3DPA100%26dq%3Dharnad%2B%2522jousting%2Bwith%2Bpygmies%2522%26source%3Dbl%26ots%3DFJfr6WqKBT%26sig%3DACfU3U1HUYie07BmGwY0v4Mdjkv2Dc8QAg%26hl%3Den%26sa%3DX%26ved%3D2ahUKEwiW1eeN7L3_AhVIuYkEHbikC-AQ6AF6BAgOEAM%23v%3Donepage%26q%3Dharnad%2520%2522jousting%2520with%2520pygmies%2522%26f%3Dfalse&amp;data=05%7C01%7Charnad%40ecs.soton.ac.uk%7Ca985e3846d2b4007613d08db6b50ae5e%7C4a5378f929f44d3ebe89669d03ada9d8%7C0%7C0%7C638221766487584833%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&amp;sdata=4XuZmVbf87c%2Bh%2B279q7QuMDLJUWQeW1sw6l7zCtvfgw%3D&amp;reserved=0\" target=\"_blank\" rel=\"noreferrer noopener\">jousting with pygmies<\/a>\u201d (as in the mid-1980\u2019s on&nbsp;<a 
href=\"https:\/\/eur03.safelinks.protection.outlook.com\/?url=http%3A%2F%2Fcomp.ai%2F&amp;data=05%7C01%7Charnad%40ecs.soton.ac.uk%7Ca985e3846d2b4007613d08db6b50ae5e%7C4a5378f929f44d3ebe89669d03ada9d8%7C0%7C0%7C638221766487584833%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&amp;sdata=WzPecG%2F0988MQ8KxLl1dr%2FDprlz%2FsQHhaGlm0NnY7pw%3D&amp;reserved=0\" target=\"_blank\" rel=\"noreferrer noopener\">comp.ai<\/a>, where it first gave birth to the idea of &nbsp;\u201csymbol grounding\u201d).&nbsp;<\/p>\n\n\n\n<p>Who would have thought that chatting with software that has swallowed a huge chunk of 2021 vintage text and that has the capacity to process and digest it coherently and interactively&nbsp;<em>without being able to understand a word of it<\/em>, could nevertheless, for users who&nbsp;<em>do<\/em>&nbsp;understand the words &#8212; because they are grounded in their heads on the basis of learned sensorimotor features and language \u2013 provide infinitely richer (vegan) food for thought than anything ever before could, including other people \u2013 both pygmies and giants &#8212; and books.<\/p>\n\n\n\n<p>Have a look at this. (In the next installment I will move on to Noam Chomsky\u2019s hunch about UG and the constraints on thinkable thought.)&nbsp;&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2023\/06\/11\/language-evolution-and-direct-vs-indirect-symbol-grounding\/\" target=\"_blank\">Language Evolution and Direct vs Indirect Symbol Grounding<\/a><\/p>\n\n\n\n<p><strong><em>Anon:  But doesn\u2019t it worry you that GPT-4 can solve reasoning puzzles&nbsp;of a type it has never seen before without understanding a word?<\/em><\/strong><\/p>\n\n\n\n<p><strong>HARNAD:&nbsp;&nbsp;<\/strong> I agree with all the security, social and political worries. 
But the purely intellectual capacities of GPT-4 (if they are separable from these other risks, and especially the kind of capacity you mention here) inspire not worry but wonder.&nbsp;<\/p>\n\n\n\n<p>GPT is a smart super-book that contains (potentially) the entire scholarly and scientific literature, with which real human thinkers can now interact dynamically &#8212; to learn, and build upon.&nbsp;<\/p>\n\n\n\n<p>I hope the real risks don\u2019t overpower the riches. (I\u2019ll be exploiting those while they last\u2026 Have a look at the&nbsp;<a href=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2023\/06\/11\/language-evolution-and-direct-vs-indirect-symbol-grounding\/\">chat<\/a>&nbsp;I linked if you want to see what I mean.)<\/p>\n\n\n\n<p><strong><em>Anon:&nbsp; <\/em><\/strong><strong><em>No. It is not like a book. It converts symbolic information into interactions between features that it invents. From those interactions&nbsp;it can reconstruct what it has read as well as generate new stuff. 
That is also what people do.&nbsp; It understands in just the same way that you or I do.&nbsp;<\/em><\/strong><\/p>\n\n\n\n<p><strong>HARNAD:\u00a0<\/strong> That\u2019s all true (although \u201cunderstands\u201d is not quite the right word for what GPT is actually doing!).\u00a0<\/p>\n\n\n\n<p>I\u2019m talking only about what a real understander\/thinker like you and me can\u00a0<strong>use<\/strong>\u00a0GPT for\u00a0<em>if that user\u2019s sole interest and motivation is in developing scientific and scholarly (and maybe even literary and artistic) ideas<\/em><strong><em>.\u00a0<\/em><\/strong><\/p>\n\n\n\n<p>For such users GPT is just a richly (but imperfectly) informed talking book to bounce ideas off &#8211; with full knowledge that it does not understand a thing \u2013 but has access to a lot of current knowledge (as well as current misunderstandings, disinformation, and nonsense) at its fingertips.&nbsp;<\/p>\n\n\n\n<p>Within a single session, GPT is informing me, and I am \u201cinforming\u201d GPT, the way its database is informing it.&nbsp;<\/p>\n\n\n\n<p><strong><em>Anon: You are doing a lot of hallucinating. You do not understand how it works so you have made up a story.&nbsp;<\/em><\/strong><\/p>\n\n\n\n<p><strong>HARNAD:<\/strong>  I\u2019m actually not quite sure why you would say I\u2019m hallucinating. On the one hand I\u2019m describing what I actually use GPT for, and how. That doesn\u2019t require any hypotheses from me as to how it\u2019s doing what it\u2019s doing.&nbsp;<\/p>\n\n\n\n<p>I do know it\u2019s based on unsupervised and supervised training on an enormous text base (plus some additional direct tweaking with reinforcement training from human feedback). I know it creates a huge \u201cparameter\u201d space derived from word frequencies and co-occurrence frequencies. 
In particular, for every consecutive pair of words within a context of words (or a sample) it weights the probability of the next word somehow, changing the parameters in its parameter space \u2013 which, I gather, includes updating the text it is primed to generate. I may have it garbled, but it boils down to what Emily Bender called a \u201cstochastic parrot,\u201d except that parrots are just echolalic, saying back, by rote, what they heard, whereas GPT generates and tests new texts under the constraint of not only \u201creductive paraphrasing and summary\u201d but also inferencing.<\/p>\n\n\n\n<p>No matter how much of this I have got technically wrong, nothing I said about how I\u2019m using GPT-4 depends either on the technical details that produce its performance or on what I\u2019m doing with and getting out of it. And it certainly doesn\u2019t depend in any way on my assuming, or guessing, or inferring that GPT understands or thinks.<\/p>\n\n\n\n<p>What I\u2019m interested in is what is present and derivable from huge bodies of human-generated texts that makes it possible not only to perform as well as GPT does in extracting correct, usable information, but also to continue interacting with me, on a topic I know much better than GPT does, to provide GPT with more data (from me) that allows it to come back (to me) with supplementary information that allows me to continue developing my ideas in a way that books and web searches would not only have taken far too long for me to do, but could not have been done by anyone without their fingers on everything that GPT has its (imperfect) fingers on.<\/p>\n\n\n\n<p>And I don\u2019t think the answer is just GPT\u2019s data and analytic powers (which are definitely not thinking or understanding: I\u2019m the only one doing all the understanding and thinking in our chats). 
It has something to do with what the&nbsp;<em>structure of language itself preserves in these huge texts<\/em>,&nbsp;<em>as ungrounded as it all is on the static page as well as in GPT<\/em>.<\/p>\n\n\n\n<p>Maybe my hunch is wrong, but I\u2019m being guided by it so far with eyes wide open, no illusions whatsoever about GPT, and no need for a better grasp of the technical details of how GPT does what it does.<\/p>\n\n\n\n<p>(Alas, in the current version, GPT-4 forgets what it has learned during a session after it\u2019s closed, so it returns to its prior informational state in a new session, with its huge chunk of preprocessed text data, vintage 2021. But I expect that will soon be improved, allowing it to store user-specific data (with user permission) and to merge all interactions with a user into one endless session; it will also have a continuously updating scholarly\/scientific text base, parametrized, and open web access.)<\/p>\n\n\n\n<p>But that\u2019s just the perspective of the disinterested intellectual inquirer. I am sure you are right to worry about the (perhaps inevitable) potential for malevolent use for commercial, political, martial, criminal, cult, and just plain idiosyncratic or sadistic purposes.<\/p>\n\n\n\n<p><strong><em>Anon:&nbsp;<\/em><\/strong><strong><em>What I find so irritating is your confidence that it is not understanding despite your lack of understanding of how it works. Why are you so sure it doesn\u2019t understand?<\/em><\/strong><\/p>\n\n\n\n<p><strong>HARNAD:&nbsp;<\/strong> Because (I have reasons to believe) understanding is a&nbsp;<em>sentient<\/em>&nbsp;state: \u201cnonsentient\u201d understanding is an empty descriptor. And I certainly don\u2019t believe LLMs are sentient.&nbsp;<\/p>\n\n\n\n<p>Understanding, being a&nbsp;<a href=\"http:\/\/doi.org\/10.51291\/2377-7478.1780\">sentient state<\/a>, is unobservable, but it does have observable correlates. 
With GPT (as in&nbsp;<a href=\"https:\/\/eprints.soton.ac.uk\/255942\/\">Searle\u2019s Chinese Room<\/a>) the only correlate is&nbsp;<em>interpretability<\/em>&nbsp;&#8212; interpretability by real, thinking, understanding, sentient people. But that\u2019s not enough. It\u2019s just a projection onto GPT by sentient people (biologically designed to make that projection with one another: \u201cmind-reading\u201d).<\/p>\n\n\n\n<p>There is an important point about this in&nbsp;<a href=\"https:\/\/academic.oup.com\/mind\/article\/LIX\/236\/433\/986238?login=false\">Turing 1950<\/a>&nbsp;and the \u201c<a href=\"https:\/\/eprints.soton.ac.uk\/262954\/1\/turing.html\">Turing Test<\/a>.\u201d Turing proposed the test with the following criteria. (Bear in mind that I am an experimental and computational psychobiologist, not a philosopher, and I have no ambition to be one. Moreover, Turing was not a philosopher either.)<\/p>\n\n\n\n<p>(1)&nbsp;<strong>Observation<\/strong>. The only thing we have, with one another, by way of evidence that we are sentient, is what we can observe.&nbsp;<\/p>\n\n\n\n<p>(2)&nbsp;<strong>Indistinguishability.<\/strong>&nbsp;If ever we build (or reverse-engineer) a device that is totally indistinguishable in anything and everything it can&nbsp;<strong>do<\/strong>, observably, from any other real, thinking, understanding, sentient human being, then&nbsp;<em>we have no empirical (or rational) basis for affirming or denying of the device what we cannot confirm or deny of one another<\/em>.<\/p>\n\n\n\n<p>The example Turing used was the purely verbal Turing Test. But that cannot produce a device that is totally indistinguishable from us in all the things we can&nbsp;<strong>do<\/strong>. 
In fact, the most elementary and fundamental thing we can all do is completely absent from the purely verbal Turing Test (which I call \u201cT2\u201d): T2 does not test whether the words and sentences spoken are connected to the referents in the world that they are allegedly about (e.g., \u201capple\u201d and apples). To test that, indistinguishable verbal capacity (T2) is not enough. It requires \u201cT3,\u201d verbal&nbsp;<strong>and sensorimotor<\/strong>&nbsp;(i.e., robotic) capacity to recognize and interact with the&nbsp;<strong>referents<\/strong>&nbsp;of the words and sentences of T2, indistinguishably from the way any of us can do it.<\/p>\n\n\n\n<p>So, for me (and, I think, Turing), without that T3 robotic capacity, any understanding in the device is just projection on our part.<\/p>\n\n\n\n<p>On the other hand, if there were a GPT with robotic capacities that could pass T3 autonomously in the world (without cheating), I would fully accept (worry-free, with full \u201cconfidence\u201d) Turing\u2019s dictum that I have no grounds for denying of the GPT-T3 anything that I have no grounds for denying of any other thinking, understanding, sentient person.&nbsp;<\/p>\n\n\n\n<p>You seem to have the confidence to believe a lot more on the basis of a lot less evidence. That doesn\u2019t irritate me! It\u2019s perfectly understandable, because our Darwinian heritage never prepared us for encountering thinking, talking, disembodied heads. Evolution is lazy, and not prescient, so it endowed us with&nbsp;<em>mirror-neurons<\/em>&nbsp;for mind-reading comprehensible exchanges of speech; and those mirror-neurons are quite gullible. 
(I feel the tug too, in my daily chats with GPT.)<\/p>\n\n\n\n<p>[An example of cheating, by the way, would be to use telemetry, transducers and effectors from remote sensors which transform all sensory input from real external \u201creferents\u201d into verbal descriptions for the LLM (where is it located, by the way?), and transform the verbal responses from the LLM into motor actions on its remote \u201creferent.\u201d]<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-content\/uploads\/sites\/287\/2023\/06\/MirrorSelfrec1.jpeg\" alt=\"\" class=\"wp-image-1940\" width=\"401\" height=\"348\" \/><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>HARNAD: Chatting with GPT is really turning out to be an exhilarating experience \u2013 especially for a&nbsp;skywriting addict like me!&nbsp; From the very beginning I had noticed that skywriting can be fruitful even when you are \u201cjousting with pygmies\u201d (as in the mid-1980\u2019s on&nbsp;comp.ai, where it first gave birth to the idea of &nbsp;\u201csymbol grounding\u201d).&nbsp; &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2023\/06\/13\/mirror-of-the-mind\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Understanding 
Understanding&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3074,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146,106,3],"tags":[],"class_list":["post-1939","post","type-post","status-publish","format-standard","hentry","category-chatgpt","category-language","category-other-minds-problem"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/1939","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/users\/3074"}],"replies":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/comments?post=1939"}],"version-history":[{"count":4,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/1939\/revisions"}],"predecessor-version":[{"id":1945,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/1939\/revisions\/1945"}],"wp:attachment":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/media?parent=1939"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/categories?post=1939"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/tags?post=1939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}