Ebeneezer’s Yuletide Homily

This interview just reiterated how everyone is still gob-smacked at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, by filling in all the local bigram blanks, globally.

They’re right to be gob-smacked and non-plussed, because nobody expected it, and nobody has come close to explaining it. So, what do they do instead? Float a sci-fi icon – AY! GEE! EYE! – with no empirical substance or explanatory content, just: “It’s Coming!”… plus a lot of paranoia about what “they’re” not telling us, and who’s going to get super-rich out of it all, and whether it’s going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice and Bellicosity (GMB) — a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think’s really going on? Language itself has gone rogue, big-time. It’s Advanced alright; and General; but it’s anything but Intelligence, in our hands, and mouths.

And it started 299,999 B.C.

Spielberg’s AI: Another Cuddly No-Brainer

It would have been possible to make an intelligent film about Artificial Intelligence — even a cuddly-intelligent film. And without asking for too much from the viewer. It would just ask for a bit more thought from the maker. 

AI is about a “robot” boy who is “programmed” to love his adoptive human mother but is discriminated against because he is just a robot. I put both “robot” and “programmed” in scare-quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)

So, what is a “robot,” exactly? It’s a man-made system that can move independently. So, is a human baby a robot? Let’s say not, though it fits the definition so far! It’s a robot only if it’s not made in the “usual way” we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it’s still growing from “naturally” man-made cells, or clones of them.

What about a baby with most of its organs replaced by synthetic organs? Is a baby with a silicon heart part-robot? Does it become more robot as we give it more synthetic organs? What if part of its brain is synthetic, transplanted because of an accident or disease? Does that make the baby part robot? And if all the parts were swapped, would that make it all robot?

I think we all agree intuitively, once we think about it, that this is all very arbitrary: The fact that part or all of someone is synthetic is not really what we mean by a robot. If someone you knew were gradually replaced, because of a progressive disease, by synthetic organs, but they otherwise stayed themselves, at no time would you say they had disappeared and been replaced by a robot — unless, of course they did “disappear,” and some other personality took their place.

But the trouble with that, as a “test” of whether or not something has become a robot, is that exactly the same thing can happen without any synthetic parts at all: Brain damage can radically change someone’s personality, to the point where they are not familiar or recognizable at all as the person you knew — yet we would not call such a new personality a robot; at worst, it’s another person, in place of the one you once knew. So what makes it a “robot” instead of a person in the synthetic case? Or rather, what — apart from being made of (some or all) synthetic parts — is it to be a “robot”?

Now we come to the “programming.” AI’s robot-boy is billed as being “programmed” to love. Now exactly what does it mean to be “programmed” to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states — on/off, hot/cold, move/don’t-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don’t mean choosing to love one person rather than another (if we can “choose” such things at all, we get into the problem of “free will,” which is a bigger question than what we are considering here): I mean choosing to be able to love — or to feel anything at all: Is our species not “programmed” for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?

Let’s not get into technical questions about whether or not the genetic code that dictates our shape, our growth, and our other capacities is a “programme” in exactly the same sense as a computer programme. Either way, it’s obvious that a baby can no more “choose” to be able to feel than it can choose to be able to fly. So this is another non-difference between us and the robot-boy with the capacity to feel love.

So what is the relevant way in which the robot-boy differs from us, if it isn’t just that it has synthetic parts, and it isn’t because its capacity for feeling is any more (or less) “programmed” than our own is?

The film depicts how, whatever the difference is, our attitude to it is rather like racism. We mistreat robots because they are different from us. We’ve done that sort of thing before, because of the color of people’s skins; we’re just as inclined to do it because of what’s under their skins.

But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker’s premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

Now you might be inclined to say: This film doesn’t sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different! But the trouble is that it does not really make us reflect on racism, or even on what robots and programming are. It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already.

There is a scene where still-living but mutilated robots, with their inner metal showing, are scavenging among the dismembered parts of dead robots (killed in a sadistic rodeo) to swap for defective parts of their own. But if it weren’t for the metal, this could be real people looking for organ transplants. It’s the superficial cue from the metal that keeps us in a state of fuzzy ambiguity about what they are. The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film’s premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs.

There would have been ways to make it less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make — but that film would have had to be called “TT” (for Turing Test) rather than “AI” or “ET,” and it would have had to show (while keeping in touch with our “cuddly” feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about “robots.”

Instead, we have the robot-boy re-enacting Pinocchio’s quest to find the blue fairy to make him into a “real” boy. But we know what Pinocchio meant by “real”: He just wanted to be made of flesh instead of wood. Is this just a re-make of Pinocchio, then, in metal? The fact that the movie is made of so many old parts in any case (Wizard of Oz, Revenge of the Zombies, ET, Star Wars, Waterworld, I couldn’t possibly count them all) suggests that that’s really all there was to it. Pity. An opportunity to build some real intelligence (and feeling) into a movie, missed.

“Body Awareness”?

Body Awareness: Scientists Give Robots a Basic Sense of ‘Proprioception’

Giving a robot a body with the sensorimotor capacities needed to pick out and interact with the referents of its words in the world is what grounding is.

But that is completely (completely) trivial at toy-scale. 

No “grounds” whatsoever to talk about “body awareness” there. 

That would only kick in (if dopamine etc. is not needed too) at Turing-scale for robotic capacities.

And if those include human language-learning capacity, that would pretty much have to be at LLM-scale too (not Siri-scale).

Now here’s the rub (and if you miss it, or dismiss it, we are talking past one another):

There is no way to ground an LLM top-down, dropping connections from individual words to their sensorimotor referents (like a man-o-war jelly-fish).

The only way to “align” sensorimotor grounding with the referents of words is bottom-up, from the ground up.

That does not mean every word has to be robotically grounded directly.

Once at least one Minimal Grounding Set (Minset) has been grounded directly (i.e., robotically) then (assuming the robot has the capacity for propositions too — not just for directly naming everything) all the rest of LLM-space can be grounded indirectly by verbal definition or description.

Everything is reachable indirectly by words, once you have a grounded Minset.

But that all has to be done bottom-up too.
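For concreteness, here is a toy sketch of what indirect grounding from a grounded Minset amounts to. The mini-dictionary, the candidate Minsets, and the iteration are illustrative inventions, not the algorithm of the dictionary-graph papers: a word becomes grounded indirectly once all the words in its definition are already grounded.

```python
# Toy mini-dictionary: each word is "defined" by a list of other words.
# The circular one-word entries stand in for the fact that real dictionary
# definitions ultimately go around in circles.
toy_dictionary = {
    "zebra":   ["horse", "stripes"],
    "horse":   ["animal", "big"],
    "stripes": ["lines", "dark"],
    "animal":  ["thing", "alive"],
    "big":     ["big"],
    "lines":   ["lines"],
    "dark":    ["dark"],
    "thing":   ["thing"],
    "alive":   ["alive"],
}

def grounds_everything(candidate_minset, dictionary):
    """Can every word be reached from the directly grounded ones by definition alone?"""
    grounded = set(candidate_minset)          # grounded directly (robotically)
    changed = True
    while changed:
        changed = False
        for word, defining_words in dictionary.items():
            if word not in grounded and all(w in grounded for w in defining_words):
                grounded.add(word)            # grounded indirectly, by words alone
                changed = True
    return set(dictionary) <= grounded

print(grounds_everything({"big", "lines", "dark", "thing", "alive"}, toy_dictionary))  # True
print(grounds_everything({"horse", "stripes"}, toy_dictionary))                        # False
```

Finding a minimal such set in a real dictionary graph is the hard, graph-theoretic part; checking that a candidate set suffices, as above, is the easy part.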

On the (too) many faces of consciousness

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Abstract:  Pereira, like some other authors, both likens and links consciousness to homeostasis. He accordingly recommends that some measures of homeostasis be taken into account as biomarkers for sentience in patients who are in a chronic vegetative state. He rightly defines “sentience” as the capacity to feel (anything). But the message is scrambled in an incoherent welter of weasel-words for consciousness and the notion (in which he is also not alone) that there are two consciousnesses (sentience and “cognitive consciousness”). I suggest that one “hard problem” of consciousness is already more than enough.

Homeostasis. A thermostat is a homeostat. It measures temperature and controls a furnace. It “defends” a set temperature by turning on the furnace when the temperature falls below the set point and turning the furnace off when it goes above the set point. This process of keeping variables within set ranges is called homeostasis. A higher order of homeostasis (“allostasis”) would be an integrative control system that received the temperature from a fleet of different thermostats, furnaces and climates, doing computations on them all based on the available fuel for the furnaces and the pattern of changes in the temperatures, dynamically modifying their set-points so as to defend an overall optimum for the whole fleet.
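(A minimal sketch of the control loop just described, with made-up numbers; nothing here is from Pereira’s paper. An allostat would additionally re-set the set point itself, based on signals pooled across many such loops.)

```python
# A toy homeostat: the thermostat "defends" its set point by switching the
# furnace on below it and off above it. All numbers are illustrative.

def thermostat_step(temperature, set_point, furnace_on, hysteresis=0.5):
    """Decide the furnace state for the next time step."""
    if temperature < set_point - hysteresis:
        return True     # too cold: furnace on
    if temperature > set_point + hysteresis:
        return False    # too warm: furnace off
    return furnace_on   # inside the defended range: leave it alone

temperature, furnace_on = 18.0, False
for minute in range(60):                      # one simulated hour
    furnace_on = thermostat_step(temperature, set_point=21.0, furnace_on=furnace_on)
    temperature += 0.2 if furnace_on else -0.1  # toy room dynamics
    # An allostat would sit above many such loops, adjusting set_point itself.
print(round(temperature, 1))
```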

Biological organisms’ bodies have homeostatic and allostatic mechanisms of this kind, ranging over functions like temperature, heart-rate, blood-sugar, immune responses, breathing and balance – functions we would call “vegetative” – as well as functions we consider “cognitive,” such as attention and memory.

Sentience. Pereira (2021) rightly distinguishes between sentience itself – any state that it feels like something to be in – and cognition, which is also sentient, but involves more complicated thought processes, especially verbal ones. “Cognitive,” however, is often a weasel-word – one of many weasel-words spawned by our unsuccessful efforts to get a handle on “consciousness,” which is itself a weasel-word for sentience, which simply means feeling (feeling anything at all, from warm or tired or hungry, to angry or joyful or jealous, including what it feels like to see, hear, touch, taste or smell something, and what it feels like to understand (or think you understand) the meaning of a sentence or the proof of a theorem).

When Pereira speaks of sentience, however, he usually means it literally: a state is sentient if it is felt (e.g., pain); and an organism is sentient if it is able to feel. The main point of Pereira’s paper is that the tests for “consciousness” in human patients who are in a chronic vegetative state are insufficient. Such patients cannot make voluntary movements, nor can they understand or respond to language, but they still have sleeping and waking states, as well as reflexes, including eye-opening, chewing and some vocalizations; and their homeostatic vegetative functions persist. 

Vegetative states. Pereira insists, rightly, that if patients in a chronic vegetative state can still feel (e.g., pain) then they are still sentient. With Laureys (2019) and others, he holds that there are two networks for “awareness” (another weasel-word), one related to wakefulness and the other to “cognitive representations of the environment” (more weasel-words). Pereira accordingly recommends homeostasis-related measures such as lactate concentrations in the cerebrospinal fluid and astrocyte transport “waves” as biomarkers for sentience where behavioral tests and cerebral imagery draw a blank.

This seems reasonable enough. The “precautionary principle” (Birch 2017) dictates that the patients should be given every benefit of the doubt about whether they can feel. But what about these two networks of “awareness/consciousness/subjectivity” and their many other variants (“qualia” – “noetic” and “anoetic,” “internal” and “external”) and the very notion of two kinds of “consciousnesses”: “cognitive” and “noncognitive”?

Weasel-Words. Weasel-words are words used (deliberately or inadvertently) to mask redundancy or incoherence. They often take the form of partial synonyms that give the impression that there are more distinct entities or variables at issue than there really are. Such is the case with the Protean notion of “consciousness,” for which there are countless Mustelidian memes besides the ones I’ve already mentioned, including: subjective states, conscious states, mental states, phenomenal states, qualitative states, intentional states, intentionality, subjectivity, mentality, private states, 1st-person view, 3rd person view, contentful states, reflexive states, representational states, sentient states, experiential states, reflexivity, self-awareness, self-consciousness, sentience, raw feels, feeling, experience, soul, spirit, mind….

I think I know where the confusion resides, and also, if not when the confusion started, at least when it was compounded and widely propagated by Block’s (1995) target article in Behavioral and Brain Sciences “On a confusion about a function of consciousness.” It was there that Block baptized an explicit distinction between (at least) two “consciousnesses”: “phenomenal consciousness” and “access consciousness”:

“Consciousness is a mongrel concept: there are a number of very different ‘consciousnesses.’ Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state. The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action.”

Feeling. What Block meant by “phenomenal consciousness” is obviously sentience; and what he meant to say (in a needlessly Nagelian way) is that there is something it feels like (not the needlessly vague and equivocal “is like” of Nagel 1974) to be in a sentient state. In a word, a sentient state is a felt state. (There is something it “is like” for water to be in a state of boiling: that something is what happens to the state of water above the temperature of 212 degrees Fahrenheit; but it does not feel like anything to the water to be in that state – only to a sentient organism that makes the mistake of reaching into the water.)

Block’s “access consciousness,” in contrast, is no kind of “consciousness” at all, although it is indeed also a sentient state — unless there are not only “a number of very different ‘consciousnesses’,” but an infinite number, one for every possible feeling that can be felt, from what it feels like for a human to hear an oboe play an A at 440 Hz, to what it feels like to hear an oboe play an A at 444 Hz, to what it feels like to know that Trump is finally out of the White House. 

Information. No, those are not different consciousnesses; they are differences in the content of “consciousness,” which, once de-weaseled, just means differences in what different feelings feel like. As to the “access” in the notion of an “access consciousness,” it just pertains to access to that felt content, along with the information (data) that the feelings accompany. Information, in turn, is anything that resolves uncertainty about what to do, among a finite number of options (Cole 1993).
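(For concreteness, here is the standard Shannon-style calculation of information as the reduction of uncertainty over a finite set of options; an illustration only, not Cole’s own formalism.)

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy (bits) of a distribution over a finite set of options."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# Four equiprobable options to choose among: 2 bits of uncertainty.
before = entropy([0.25, 0.25, 0.25, 0.25])
# A signal that rules out two of the options leaves 1 bit of uncertainty.
after = entropy([0.5, 0.5])
print(before - after)  # 1.0 bit of information gained
```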

Access. There is no such thing as an unfelt feeling (even though Pereira invokes some incoherent Freudian notions that became enshrined in the myth of an unconscious “mind,” which would have amounted to an unconscious consciousness): 

“Sentience can also be conceived as co-extensive to the (Freudian) ‘unconscious’… In Freudian psychotherapy, the ways of sentience are classically understood as unconscious processes of the Id and Ego that influence emotional states and behavior”

The contents we access are the contents that it feels like something to know (or believe you know, Burton 2008). If I am trying to remember someone’s name, then unless I can retrieve it I do not have access to it. If and when I retrieve it, not only do I have access to the name, so I can tell someone what it is, and phone the person, etc., but, as with everything else one knows, it feels like something to know that name, a feeling to which I had no “access” when I couldn’t remember the name. Both knowing the name and not knowing the name were sentient states; what changed was not a kind of consciousness, but access to information (data): to the content that one was conscious of, along with what it felt like to have that access. Computers have access to information, but because they are insentient, it does not feel like anything to “have” that information. And for sentient organisms it not only feels like something to see and hear, to be happy or sad, and to want or seek something, but also to reason about, believe, understand or know something.  

Problems: Easy, Hard and Other. Pereira unfortunately gets Chalmers’s “easy” and “hard” problem very wrong:

“the study of sentience is within the ‘Easy Problems’ conceived by Chalmers (1995), while explaining full consciousness is the ‘Hard Problem’.”

The “easy problem” of cognitive science is to explain causally how and why organisms can do all the (observable) things that organisms are able to do (from seeing and hearing and moving to learning and reasoning and communicating), including what their brains and bodies can do internally (i.e., their neuroanatomy, neurophysiology and neurochemistry, including homeostasis and allostasis). To explain these capacities is to “reverse-engineer them” so as to identify, demonstrate and describe the underlying causal mechanisms that produce the capacities (Turing 1950/2009). 

The “hard problem” is to explain how and why (sentient) organisms can feel. Feelings are not observable (to anyone other than the feeler); only the doings that are correlated with them are observable. This is the “other-minds problem” (Harnad 2016), which is neither the easy problem nor the hard problem. But Pereira writes:

“On the one hand, [1] sentience, as the capacity of controlling homeostasis with the generation of adaptive feelings, can be studied by means of biological structures and functions, as empirical registers of ionic waves and the lactate biomarker. On the other hand, [2] conscious first-person experiences in episodes containing mental representations and with attached qualia cannot be reduced to their biological correlates, neuron firings and patterns of connectivity, as famously claimed by Chalmers (1995).”

Integration. Regarding [1]: It is true that some thinkers (notably Damasio 1999) have tried to link consciousness to homeostasis, perhaps because of the hunch that many thinkers (e.g., Baars 1997) have had that consciousness, too, may have something to do with monitoring and integrating many distributed activities, just as homeostasis does. But I’m not sure others would go so far as to say that sentience (feeling) is the capacity to control homeostasis; in fact, it’s more likely to be the other way round. And the ionic waves and lactate biomarker sound like Pereira’s own conjecture.

Regarding [2], the Mustelidian mist is so thick that it is difficult to find one’s way: “conscious, first-person experiences” (i.e., “felt, feeler’s feelings”) is just a string of redundant synonyms: Unfelt states are not conscious states; can feelings be other than 1st-personal? How is an unfelt experience an experience? “Mental” is another weasel-word for felt. “Representation” is the most widely used weasel-word in cognitive science and refers to whatever state or process the theorist thinks is going on in a head (or a computer). The underlying intuition seems to be something like an internal pictorial or verbal “representation” or internal model of something else. So, at bottom “internally represented” just means internally coded, somehow. “Qualia” are again just feelings. And, yes, correlation is not causation; nor is it causal explanation. That’s why the hard problem is hard.

None of this is clarified by statements like the following (which I leave it to the reader to try to de-weasel):

“sentience can be understood as potential consciousness: the capacity to feel (proprioceptive and exteroceptive sensations, emotional feelings) and to have qualitative experiences, while cognitive consciousness refers to the actual experience of thinking with mental representations.”

Ethical Priority of Sentience. But, not to end on a negative note: not only is Pereira right to stress sentience and its biomarkers in the assessment of chronic vegetative states in human patients, but, inasmuch as he (rightly) classifies the capacity to feel pain as sentience (rather than as “cognitive consciousness”), Pereira also accords sentience the ethical priority that it merits wherever it occurs, whether in our own or any other sentient species (Mikhalevich & Powell 2020). 

References

Baars, B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.

Birch, J. (2017). Animal sentience and the precautionary principle. Animal Sentience, 16(1).

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247.

Burton, R. A. (2008). On Being Certain: Believing You Are Right Even When You’re Not. New York: Macmillan/St. Martin’s Press.

Cole, C. (1993). Shannon revisited: Information in terms of uncertainty. Journal of the American Society for Information Science, 44(4), 204-211.

Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.

Harnad, S. (2016). Animal sentience: The other-minds problem. Animal Sentience, 1(1).

Mikhalevich, I., & Powell, R. (2020). Minds without spines: Evolutionarily inclusive animal ethics. Animal Sentience, 29(1).

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.

Pereira, O. (2021). The Role of Sentience in the Theory of Consciousness and Medical Practice. Journal of Consciousness Studies (Special Issue on “Sentience and Consciousness”).

Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23-65). Springer, Dordrecht.

(1) On Weasel-Words Like “Conscious” & (2) On Word-Use, Symbol Grounding, and Wittgenstein

1. ON WEASEL-WORDS LIKE “CONSCIOUS”

ANON:  “Would you say that consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not? And of the states that can be felt, some are not felt and others are, the latter being conscious? In general: that there are mental states, some of which are conscious and others not?”

REPLY:

Cognition is the set of all cognitive capacities. Most of those capacities are capacities to DO things (remember, learn, speak). Some (but far from all) of those capacities are FELT, so the capacity to FEEL, too, is part of cognition.

“Consciousness” is a “state that it feels like something to be in”, i.e., a felt state.  A state that is not felt is not a conscious state. There are many weasel-words for this, each giving the impression that one is saying something further, whereas it can always be shown that they introduce either uninformative tautology or self-contradictory nonsense. To see this ultra-clearly, all one need do is replace the weasel-words (which include “mind,” “mental,” “conscious”, “aware”, “subjective”, “experience”, “qualia”, etc.)  by the straightforward f-words (in their adjective, noun, or verb forms):

consciousness is the set of all feelings that pertain to mental states, some of which can be felt and others not?”

becomes:

feeling is the set of all feelings that pertain to felt states, some of which can be felt and others not?

and:

“[O]f the states that can be felt, some are not felt and others are [felt], the latter being conscious? In general: that there are mental states, some of which are conscious and others not [felt]”

becomes:

Of the states that can be felt, some are not felt and others are [felt], the latter being felt? In general: that there are felt states, some of which are felt and others not [felt]?

There is one non-weasel synonym for “feeling” and “felt” that one can use to speak of entities that are or are not capable of feeling, and that are or not currently in a felt state, or to speak of a state, in an entity, that is not a felt state, and may even co-occur with a felt state. 

That non-weasel word is sentient (and sentience). That word is needed to disambiguate “feeling” when one speaks of a “feeling” organism that is not currently in a felt state, or that is in a felt state but also in many, many simultaneous unfelt states (as sentient organisms, awake and asleep, always are, e.g., currently feeling acute pain, but not feeling an ongoing chronic muscle spasm or acute hypoglycemia or covid immunopositivity, or even that they are currently slowing for a yellow traffic light).

2. ON WORD-USE, SYMBOL GROUNDING, AND WITTGENSTEIN

ANON: “Sensorimotor grounding is crucial, for many reasons. Wittgenstein provides reasons that are less discussed, probably because they require taking a step back from the usual presuppositions of cognitive science.”

REPLY:

Wittgenstein: “For a large class of cases–though not for all–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Correction: “For a small class of cases [“function words” 1-5%]–though not for most [“content words” 95-99%]–in which we employ the word “meaning” it can be defined thus: the meaning of a word is its use in the language.”

Wikipedia definition of content and function words: “Content words, in linguistics, are words that possess semantic content and contribute to the meaning of the sentence in which they occur. In a traditional approach, nouns were said to name objects and other entities, lexical verbs to indicate actions, adjectives to refer to attributes of entities, and adverbs to attributes of actions. They contrast with function words, which have very little substantive meaning and primarily denote grammatical relationships between content words, such as prepositions (in, out, under etc.), pronouns (I, you, he, who etc.) and conjunctions (and, but, till, as etc.).”

Direct Sensorimotor learning (and then naming) of categories is necessary to ground the “use” of category names in subject/predicate propositions, with meanings and truth values (T & F). Propositions (with the subject being a new, not yet grounded category name, and the predicate being a list of features that are already grounded category names for both the Speaker and the Hearer) can then be “used” to ground the new category indirectly, through words.

Blondin-Massé, A., Harnad, S., Picard, O., & St-Louis, B. (2013). Symbol Grounding and the Origin of Language: From Show to Tell. In Lefebvre, C., Cohen, H., & Comrie, B. (Eds.), New Perspectives on the Origins of Language. Benjamins.

Harnad, S. (2021). On the (Too) Many Faces of Consciousness. Journal of Consciousness Studies, 28(7-8), 61-66.

Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PLoS ONE, 14(12), e0226000.

Thériault, C., Pérez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567.

Vincent-Lamarre, P., Blondin Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The Latent Structure of Dictionaries. Topics in Cognitive Science, 8(3), 625-659.

Nine Easy Pieces on Universal Grammar (UG) with GPT-4

Harnad: Short answers, please: 1. What is Universal Grammar (UG)? How does it differ from Ordinary Grammar (OG)? Why did Chomsky conclude that OG is learnable but UG is not?

GPT-4: 

Universal Grammar (UG) refers to Noam Chomsky’s theory positing an innate set of linguistic principles shared by all humans. It includes the deep structures common to all languages, suggesting that the ability to acquire language is hard-wired into the human brain.

Ordinary Grammar (OG), in contrast, refers to the specific rules and conventions of a particular language that speakers learn and use in daily communication.

Chomsky concluded that OG is learnable because individuals acquire their native language’s specific grammar through exposure and practice. However, he argued that UG is not learned but is an inherent part of the human cognitive apparatus, enabling the acquisition of any OG.

Harnad: 2. Give an example, in English, of a violation of OG and a violation of UG, and what OG rule and UG rule each example violates.

GPT-4: 

Violation of OG: “She don’t like apples.”

This violates the subject-verb agreement rule in Standard English grammar (OG), where the correct form is “She doesn’t like apples.”

Violation of UG: “*Apples eats she.”

This violates a basic sentence structure principle that is part of UG, where in English the canonical order is subject-verb-object (SVO), and the correct form is “She eats apples.”

Harnad: 3. Why is UG not learnable? (What is the Poverty of the Stimulus?)

GPT-4: 

UG is considered not learnable due to the Poverty of the Stimulus argument, which posits that children do not receive enough linguistic input (stimulus) to learn language solely through environmental exposure. Despite this limited input, children still manage to acquire a complex language system, suggesting that some aspects of grammar are pre-wired in the human brain as part of Universal Grammar.

Harnad: 4. In particular, what are the 4 ways OG is learned? What is it about UG that makes it unlearnable in those same ways? (That is the POS.)

GPT-4:  

The four ways OG is typically learned are through:

Imitation: Mimicking others’ speech.

Instruction: Formal teaching of language rules.

Feedback: Corrections received when errors are made.

Statistical learning: Inferring rules from patterns in the input.

UG is considered unlearnable through these methods due to the Poverty of the Stimulus, which suggests that the linguistic input children receive is too inconsistent and incomplete to account for the rapid and uniform acquisition of language, especially for the acquisition of complex rules and structures that are never explicitly taught or encountered. UG posits that certain structures of language are innately predetermined, enabling language acquisition despite insufficient and imperfect input.

Harnad: 5. Can you explain POS in terms of errors and error correction for UG?

GPT-4: 

The Poverty of the Stimulus (POS) argument in terms of errors and error correction for UG suggests that children often do not receive explicit negative feedback for grammatical errors that violate principles of Universal Grammar. Despite this lack of error correction, they still develop a grammatically correct understanding of their native language, implying that some aspects of UG are innately programmed and not learned through trial and error.

Harnad: 6. But do they hear or make UG errors at all?

GPT-4: 

Children generally do not make errors that violate the core principles of Universal Grammar (UG). The types of mistakes they make in language acquisition tend to be within the parameters of their language’s Ordinary Grammar (OG)—for example, overgeneralizing rules such as verb conjugations. UG errors would involve fundamental misunderstandings of language structure (e.g., consistently placing verbs in the wrong place in a sentence), which are rarely, if ever, observed. This suggests that the principles of UG guide the language acquisition process in a way that prevents such errors from occurring.

Harnad: 7. Please give a few examples of UG principles.

GPT-4: 

Examples of Universal Grammar principles include:

Structure-dependence: Grammatical processes operate on hierarchical structures, not linear sequences.

Recursion: The ability to embed phrases within phrases indefinitely.

Binding theory: Rules about the relationships between pronouns and the nouns they refer to.

X-bar theory: The principle that phrases and sentences follow a similar hierarchical structure, with a head, complement, and specifier.

Harnad: 8. What would be positive evidence and negative evidence for a rule or principle? What does that have to do with POS?

GPT-4: 

Positive evidence is the information that a language learner receives that demonstrates how the language is used correctly. This includes the sentences and phrases that a learner hears or reads that are well-formed and grammatically correct within the specific language environment.

Negative evidence is information about what is not permissible in a language, often received through corrections or explicit information about errors. This includes feedback when a learner makes a grammatical mistake, telling them that a certain sentence or construction is incorrect.

In the context of the Poverty of the Stimulus (POS), the argument is that children receive abundant positive evidence from which they can learn language, but they rarely receive explicit negative evidence, particularly for complex grammatical structures. Despite this lack of negative evidence, they still reliably acquire these complex aspects of language, which suggests that some knowledge of grammar is innate (UG) and not learned solely from environmental input (OG).

Harnad: 9. How do adults detect OG errors? How do linguists detect UG errors? Can non-linguist adults detect that UG errors are errors?

GPT-4:  

Adults detect OG errors typically through their linguistic intuition formed by their own mastery of a language; they notice when something doesn’t conform to the rules they’ve learned and internalized.

Linguists detect UG errors by analyzing language structures against the hypothesized principles of Universal Grammar, looking for patterns that either fit or don’t fit these universal patterns across different languages.

Non-linguist adults can often detect UG errors as well, even if they cannot explicitly identify the violated principle, because such errors usually result in utterances that feel fundamentally incorrect based on their innate language faculty.

Harnad: I’ve just asked you 9 sets of questions. Could you please give a long summary and  synthesis of your answers to them?

GPT-4: 

Here is a synthesis of our discussion regarding Universal Grammar (UG), Ordinary Grammar (OG), and the Poverty of the Stimulus (POS):

We began by defining Universal Grammar (UG) as the theory proposed by Noam Chomsky that posits an innate set of linguistic principles shared by all humans. This inherent structure is what enables humans to acquire any language. It differs from Ordinary Grammar (OG), which refers to the specific rules and syntax of a particular language learned through exposure and practice.

In terms of learning, OG is acquired through imitation, instruction, feedback, and statistical learning. Conversely, UG is not considered learnable because it represents the innate capacity to develop language. This premise is supported by the Poverty of the Stimulus (POS) argument, which states that the linguistic input children receive is insufficient in both quantity and quality for them to acquire language through environmental exposure alone. The speed and uniformity of language acquisition among children, despite this lack of comprehensive input, suggest the existence of an innate linguistic faculty.

We discussed examples of violations in OG and UG, noting that while children and non-native speakers might make mistakes that are against the OG of a particular language, they seldom make errors against UG principles. An example of an OG error would be a violation of tense agreement, such as saying “she don’t like apples” instead of “she doesn’t like apples.” A UG error, on the other hand, would involve a more fundamental misunderstanding, such as a consistent misplacement of sentence elements across any language the child is exposed to.

UG principles include concepts such as structure-dependence, which refers to the idea that grammatical operations depend on hierarchical structures rather than linear order; recursion, which allows for the indefinite embedding of phrases within phrases; binding theory, which concerns the relationships between pronouns and the nouns to which they refer; and X-bar theory, which outlines a common hierarchical structure to phrases and sentences.

The distinction between positive evidence (what is said) and negative evidence (what is not said, or corrections) is pivotal to the POS argument. While children are exposed to ample positive evidence, they rarely encounter explicit negative evidence, especially for the more complex aspects of grammar. However, they still manage to acquire these aspects of language effectively, leading to the conclusion that there must be some innate understanding or cognitive framework that guides the language acquisition process.

Finally, we talked about error detection. Adults detect OG errors intuitively because they have internalized the rules of their language. Linguists detect UG errors by examining language data against the backdrop of hypothesized universal principles. Non-linguist adults are also able to detect UG errors to some extent; these errors typically result in utterances that inherently feel incorrect, even if the individual cannot articulate why, reflecting an intuitive grasp of the innate principles of UG.

Harnad: Good summary. I would add only that POS is not just about what children do and don’t hear, but about what they do and don’t produce. They make OG errors, but not UG errors. A category has members and nonmembers. To be able to learn to categorize correctly, you have to learn to detect the features that distinguish the members from the non-members. For that, you need to sample both members and non-members, with feedback. For OG you can do that; but for UG, because of POS, you cannot. That is why error-free UG performance implies that UG must be innate rather than learned.

GPT-4: 

That summarizes the Poverty of the Stimulus (POS) argument with respect to both the input children receive and the output they generate as well as the distinction between Ordinary Grammar (OG), where errors made and corrected through interaction with the linguistic environment aid learning, and Universal Grammar (UG), where the absence of UG errors in children’s speech suggests an innate, pre-existing grammatical framework.

The acquisition of categories through the detection of distinguishing features requires exposure to instances and non-instances of these categories. The successful navigation and mastery of language categories by children—without the requisite negative evidence for UG—strongly supports the hypothesis that UG principles are innate. This innate knowledge allows for correct categorization in the absence of explicit feedback, thereby facilitating error-free use of deep grammatical structures from an early age, as posited by Chomsky’s theory. Your addition points to the fundamental role of innate structures in language acquisition, where UG represents a set of linguistic categories whose members are recognized without the need for error-driven learning, a cornerstone of the argument for UG’s innateness.

Harnad: Thanks, but GPT4 would be so much more useful if you could suppress the trained flattery and affirmation and just summarize neutrally…

GPT-4:

The Poverty of the Stimulus argues that the absence of UG errors in children’s language production, despite the lack of negative evidence, indicates that UG principles are innate and not learned through environmental exposure. This contrasts with OG, where both positive and negative evidence are crucial for learning through error correction.

Harnad: Could you please assess the following 9 affirmations below and point out substantive contradictions (if any) from your database?

(1) there exist an infinity of potential UG-violations (starred *utterances), which 

(2) no speaker (except UG linguists) ever utters, yet 

(3) every adult detects (just about) every UG-violation as wrong, just as

(4) every adult detects (just about) every OG-violation as wrong; but 

(5) every adult eventually learns (in 1-4 ways) why the OG-violations are wrong, by 

(6) uttering or hearing  OG-violations and 

(7) getting corrected (in 1-4 ways) on the OG-violations, whereas

(8) UG-violations are not OG-violations, and

(9) no one (except UG linguists) knows or can explain why or how the UG-violations are wrong

GPT-4: 

Upon reviewing the affirmations in the context of Universal Grammar (UG) as it relates to linguistic theory and cognitive science, here is an assessment:

(1) Infinity of potential UG violations: The theory of UG does allow for the possibility of an infinite number of hypothetical violations due to the generative nature of language. This statement aligns with the concept of UG.

(2) UG-violations not uttered: It is generally true that typical language users do not produce utterances that violate core UG principles. However, this does not necessarily mean that only UG linguists ever utter them; rather, non-linguists may not typically utter them because such utterances would be fundamentally uninterpretable.

(3) Universal detection of UG-violations: While adults can often recognize when an utterance feels ‘wrong’, they may not always be consciously aware that a UG principle has been violated. This recognition can be intuitive rather than explicit.

(4) Universal detection of OG-violations: Most adult speakers of a language will detect OG violations due to their familiarity with the specific rules of their language. This is consistent with the literature.

(5) Learning the reasons for OG-violations: Through exposure and feedback, adults do learn why certain utterances are considered errors in OG. This learning can happen through various means, such as correction, education, or self-correction upon reflection.

(6) Uttering/hearing OG-violations: Language learners, both children and adults learning a second language, do make mistakes in OG and learn from them through correction and exposure. This is a well-documented part of language acquisition.

(7) Correction on OG-violations: This affirmation aligns with the typical language acquisition process, where feedback plays a role in learning the correct forms of OG.

(8) Distinction between UG and OG violations: UG violations involve fundamental principles that apply across all human languages, while OG violations are errors against the specific grammar rules of a particular language. This distinction is a central tenet of the theory.

(9) Understanding of UG-violations: It is accurate to say that without formal training in linguistics, most people cannot explicitly identify or explain why a UG violation is incorrect. Their detection of ‘wrongness’ is typically an intuitive process rather than a formally understood one.

There do not appear to be direct contradictions in your statements from the perspective of linguistic theory as it pertains to UG. However, the extent to which every adult can detect every UG violation may not be absolute; while language intuition is strong, there can be variations in individual linguistic intuitions.

Harnad: One last request: Could you please give some better examples of UG violations than “*apples eats she”? Violations that are more systematic than random word-order permutations? Examples that draw on the UG principles you listed earlier.

GPT-4:

To illustrate violations of Universal Grammar (UG) principles with more systematic examples, I’ll relate them to the UG principles mentioned previously:

Structure-dependence violation: Structure-dependence refers to the understanding that certain operations in grammar depend on the hierarchical structure of sentences rather than their linear order. A violation of this principle might look like applying a grammatical rule that operates correctly on the structure without regard to the hierarchical relationships between words. For instance, incorrectly forming a question by inverting the subject and the first verb in a sentence, regardless of the clause structure, would be a UG violation. Instead of properly transforming “The man who is going to the store is my uncle” to “Is the man who is going to the store your uncle?” a UG violation would result in “*Is the man who going to the store is your uncle?”

Recursion violation: Recursion allows for phrases to be nested within each other to an indefinite degree. A violation of recursion might be producing sentences that demonstrate a misunderstanding of how to embed clauses or phrases within one another correctly. For example, a speaker might say “*The cat the dog the rat bit chased ran away,” which lacks the necessary hierarchical structuring that recursion in UG would provide.

Binding theory violation: Binding theory deals with the relationships between pronouns and the nouns they refer to. A violation of this principle would occur if someone systematically used pronouns in a way that didn’t conform to the constraints of binding, such as using a reflexive pronoun without a proper antecedent in the same clause. An incorrect sentence would be “*Himself is eating,” instead of “He is eating by himself.”

X-bar theory violation: This principle predicts a certain structure to phrases, with a head, complement, and specifier. A systematic violation would involve creating phrases that do not follow this structure. For instance, in English, adjectives typically precede the noun (e.g., “red ball”), but saying “*Ball red” as a native speaker would not be consistent with the expected phrase structure outlined in X-bar theory.

These examples are more nuanced than simply jumbling word order and reflect a deeper misunderstanding of the abstract principles that guide the formation of grammatically correct sentences across languages.

Content Words, Function Words, and LLM Analysis of Dictionaries

HARNAD:

Do you know — but please don’t just make it up — what percentage of the words in the vocabulary of a language are “content” words (“open class words”), like nouns, verbs, adjectives and adverbs (words that have referents), and what percentage are “function” words (“closed class words”), like “and,” “if,” “the,” “is,” which do not have a referent but just a syntactic use?

ChatGPT:

The division of words in a language into content words (open class) and function words (closed class) is a well-acknowledged linguistic distinction. The percentage of each in a language’s vocabulary can vary based on several factors, including the specific corpus being analyzed, the language in question, and its typological characteristics.

Generally speaking:

Content Words (Open Class): These words make up the vast majority of the lexicon in terms of sheer count. They are continuously evolving, with new words added frequently. In most corpora or dictionaries, the vast majority of entries (often upwards of 90%) would be content words.

Function Words (Closed Class): These words are limited in number but are used very frequently in actual speech and writing. In terms of usage frequency, function words can often dominate. For example, in English, words like “the,” “and,” and “is” are among the most frequently used, even though the total number of function words is a small fraction of the lexicon. While the exact percentage can vary, function words often comprise a tiny portion of a lexicon, perhaps often less than 5% in terms of distinct entries.

However, if one were to analyze a typical body of text (like a book or a collection of articles), function words would appear very frequently, and their percentage of total word occurrences would be much higher than their percentage of distinct entries in the lexicon. So, while they might represent a small fraction of the total vocabulary in terms of distinct words, their actual usage rate is much higher due to their pivotal role in grammar and sentence structure.

For a precise breakdown for a specific language or corpus, one would typically employ computational linguistic tools to perform a frequency analysis. The ratios mentioned above are broad generalizations, and the actual proportions could vary based on the specifics of the analysis and the language in question.
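[For concreteness, a minimal sketch of such a token-level frequency analysis. The function-word list is a small hand-picked stand-in, far from exhaustive, and a toy sentence this short cannot show the real type/token asymmetry.]

```python
from collections import Counter
import re

# A small, hand-picked stand-in for the closed class of English function words.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "if", "is", "are", "was", "were",
    "of", "in", "on", "to", "for", "with", "by", "at", "as", "that", "it",
    "he", "she", "they", "we", "you", "i", "not", "be", "this", "do",
}

def breakdown(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    types = Counter(tokens)
    function_tokens = sum(n for w, n in types.items() if w in FUNCTION_WORDS)
    function_types = sum(1 for w in types if w in FUNCTION_WORDS)
    return {
        "function share of tokens": round(function_tokens / len(tokens), 2),
        "function share of types": round(function_types / len(types), 2),
    }

sample = "The cat sat on the mat and the dog sat by the door."
print(breakdown(sample))
# On a real corpus the token share stays high while the type share shrinks,
# because the closed class barely grows as the vocabulary grows.
```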

HARNAD:

Thanks. I’m not interested in the frequency of word tokens but the frequency of word types (e.g. as entries in a dictionary).

ChatGPT:

Understood, Professor Harnad.

When we consider word types (distinct words) as they might appear as entries in a dictionary:

Content Words (Open Class): The majority of dictionary entries will be content words. These encompass nouns, verbs, adjectives, and adverbs, and they represent the vast majority of distinct lexical items in a language. Based on various lexicons and dictionaries, content words can easily comprise upwards of 90-95% of the total word types in many languages.

Function Words (Closed Class): These are limited in number. In many languages, the distinct function words might make up as little as 5-10% of the dictionary entries, if not less. These are your conjunctions, prepositions, determiners, auxiliary verbs, pronouns, etc.

It should be noted that the exact percentages can vary based on the language and the comprehensiveness of the dictionary. Larger, more comprehensive dictionaries, which aim to include technical terms, regional dialectal variations, and so on, will likely have an even higher percentage of content words relative to function words. However, the general pattern of a significant majority of content words and a small minority of function words in terms of word types holds true for most languages.
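[A rough sketch of estimating the open-class share of word types by part-of-speech tagging a list of dictionary headwords. Tagging isolated words out of context is noisy, and the NLTK tagger resources are assumed to be installed, so treat the number as an approximation, not a lexicographic result.]

```python
import nltk

# Requires the tagger resources, e.g.:
#   nltk.download("averaged_perceptron_tagger"); nltk.download("universal_tagset")
# (resource names vary slightly across NLTK versions).

OPEN_CLASSES = {"NOUN", "VERB", "ADJ", "ADV"}  # content-word (open-class) tags

def open_class_share(headwords):
    tagged = nltk.pos_tag(headwords, tagset="universal")
    return sum(1 for _, tag in tagged if tag in OPEN_CLASSES) / len(tagged)

headwords = ["apple", "run", "quickly", "green", "and", "the", "under", "happiness"]
print(open_class_share(headwords))  # 5 of these 8 should come out open-class
```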

HARNAD:

Are there any particularly notable differences between the proportion of content-word types in some specific languages?

ChatGPT:

The proportion of content-word types to function-word types can vary among languages, particularly when considering languages from different families or with distinct typological characteristics. Here are a few notable factors and examples that can influence these proportions:

Morphological Complexity: Languages with rich morphological systems might have fewer function words, as grammatical relations are often expressed through affixes rather than separate words.

Finnish and Turkish, for example, as agglutinative languages, often express what would be separate function words in English through case endings or other morphemes. This means that while they might have many word forms due to their morphology, they could have fewer distinct function word types.

Isolating vs. Synthetic Nature: Highly isolating languages might have a relatively higher proportion of function words because they rely more on separate words rather than morphemes to convey grammatical meaning.

Mandarin Chinese is often cited as a more isolating language, with a tendency to use separate particles or function words to indicate grammatical relations.

Polysynthetic Languages: In polysynthetic languages, entire sentences might be expressed as single words with complex morphology. This can blur the line between content and function words.

Some Native American languages, like those in the Inuit-Yupik family, fall into this category.

Analytic vs. Synthetic Spectrum: Languages on the analytic end of the spectrum rely on word order and function words to convey meaning, while those on the synthetic end use inflection. Analytic languages might have a higher proportion of function words, relatively speaking, than highly synthetic languages.

Historical Linguistic Changes: Over time, languages can shift along the analytic-synthetic spectrum. For instance, English has become more analytic over time, shedding many of its Old English inflections in favor of a greater reliance on function words and word order.

Pidgins and Creoles: Pidgin languages, which arise as simplified means of communication between speakers of different native languages, typically have a reduced lexicon and might have a higher proportion of function words relative to their total vocabulary. When Pidgins stabilize and become natively spoken, they evolve into Creoles, which can develop a more extensive lexicon, potentially shifting the balance between content and function words.

While these general observations provide an overview, it’s essential to note that the specific proportion of content-word types in any language’s dictionary would require a detailed lexical analysis. Moreover, languages are dynamic, and these proportions can evolve based on linguistic, social, and cultural changes.

HARNAD:

Another way to estimate the content/function word-type proportions is to count only the content-word types, on the assumption that all languages at a particular time should have about the same potential number of content words; the necessary syntactic functions (likewise about the same for all languages) may or may not be performed by separate, countable function-word types.

ChatGPT:

Yes, by making certain assumptions and normalizing the analysis, we can provide a more comparative framework.

1. Assumption of Equivalence in Content-Word Potential Across Languages:

The premise here is that the experiential and conceptual world of human beings is relatively constant across different cultures and societies. Hence, irrespective of the language, humans should have words to denote common experiences, objects, actions, and qualities. Given this equivalence, we could postulate that the potential number of content words (like nouns for objects, verbs for actions, adjectives for qualities, etc.) should be roughly comparable across languages.

2. Variation in Expression of Syntactic Functions:

While the potential for content words might be consistent, languages vary widely in how they handle syntactic and grammatical functions. As you rightly pointed out, some languages might encapsulate these functions within the morphology of content words, while others might use separate function words. For instance, a language might use cases to denote relations that another language expresses using prepositions.

3. Counting Content-Word Types:

With the above assumptions in place, a comparative study could be carried out by counting the content-word types in representative dictionaries or corpora from different languages. This would give an estimate of the “active” content-word lexicon of those languages at that point in time.

4. Estimating Function-Word Potential:

Based on the number of content-word types, one could make an educated guess about the potential number of function words. If we accept that all languages should have a similar potential number of content words, then the deviation from this potential number might give us clues about the number of function words or how syntactic functions are integrated within content words.

However, it’s worth noting that while this method provides a comparative framework, it also rests on some significant assumptions. There are challenges, such as accounting for cultural-specific content words (e.g., words for specific rituals, fauna/flora, or societal structures) and the granularity at which one defines a ‘content word’ (e.g., does every scientific term or technical jargon count?).

But overall, the approach offers a thought-provoking way to compare languages and could yield valuable insights when combined with other linguistic analyses.
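A minimal sketch of the counting step (point 3 above), assuming a hypothetical POS-tagged lexicon for each language; the tag sets and toy word lists are illustrative placeholders, not real data.

```python
# A minimal sketch of counting content-word vs. function-word types from a
# hypothetical POS-tagged lexicon. Tag sets and word lists are placeholders.

CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}                   # open-class (content)
FUNCTION_TAGS = {"DET", "ADP", "PRON", "CONJ", "AUX", "PART"}   # closed-class (function)

def content_proportion(tagged_lexicon):
    """tagged_lexicon: iterable of (word_type, pos_tag) pairs, one per distinct
    word type. Returns the proportion of content-word types."""
    content = sum(1 for _, tag in tagged_lexicon if tag in CONTENT_TAGS)
    function = sum(1 for _, tag in tagged_lexicon if tag in FUNCTION_TAGS)
    total = content + function
    return content / total if total else 0.0

# Toy comparison across two hypothetical languages: the second expresses more
# syntactic functions morphologically, so it lists fewer separate function words.
analytic_like = [("apple", "NOUN"), ("run", "VERB"), ("red", "ADJ"),
                 ("the", "DET"), ("of", "ADP"), ("and", "CONJ")]
synthetic_like = [("apple", "NOUN"), ("run", "VERB"), ("red", "ADJ"),
                  ("and", "CONJ")]

print(content_proportion(analytic_like))    # 0.5
print(content_proportion(synthetic_like))   # 0.75
```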

HARNAD:

Good, we agree. Do you think LLM techniques could be used to further analyze content word-types across dictionaries and languages to identify the more universal categories of content words and the more language- or dictionary-specific ones? Has anything like that been done?

ChatGPT:

Large Language Models (LLMs) and other advanced natural language processing techniques are well-suited to analyze vast linguistic datasets, such as dictionaries across languages, to categorize and compare content words. Here’s how they could be used and what might already be happening in this domain:

1. Cross-Linguistic Analysis with LLMs:

Extraction: LLMs can be employed to extract content words from dictionaries or corpora in a systematic manner. They can distinguish between content and function words based on their definitions, usage examples, and contextual cues.

Categorization: Once extracted, words can be categorized based on their semantic meaning. LLMs can cluster words that share similar semantic spaces, allowing us to identify potential universals across languages.

Comparison: By analyzing multiple languages, LLMs can highlight which content categories appear universally and which are specific to particular languages or cultural contexts.

2. Identifying Universals:

Through such analysis, we could identify potential linguistic universals—categories of content words that appear across most, if not all, languages. These could correspond to common human experiences and needs.

3. Language- or Dictionary-Specific Analysis:

By contrast, LLMs could also flag content categories that are over-represented in specific languages or absent in others. These discrepancies could be rooted in cultural, environmental, or historical differences.

4. Existing Work:

Cross-linguistic research has long been a focus in linguistics, aiming to identify universals and dive deep into language-specific phenomena. Some computational work, like the World Atlas of Language Structures (WALS), aggregates features from many languages, although not specifically in the manner described here.

There have been efforts in the NLP community to build multilingual models, like mBERT (Multilingual BERT) and XLM (Cross-lingual Language Model), which learn representations across multiple languages and could potentially be employed for such cross-linguistic analyses.

As for dictionary-based analysis using LLMs specifically for identifying universal content word categories, while the idea aligns with current trends in computational linguistics, I’m not aware of a prominent study that has executed this exact methodology up to my last training data in September 2021.

Given the rapid advancements in the field, it’s plausible that newer studies or projects have embarked on this path post-2021. Utilizing LLMs in such a manner would indeed be a valuable contribution to the understanding of linguistic universals and the richness of language-specific lexicons.
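A minimal sketch of the extraction-categorization-comparison pipeline described in point 1 above. It assumes some multilingual embedding function is available; the `embed_words` stub below is a hypothetical placeholder (not a real API), and the word lists are illustrative only.

```python
# Cluster content-word types from several languages into candidate shared
# categories. `embed_words` is a placeholder for any multilingual embedding
# model; here it returns random vectors purely for illustration.

import numpy as np
from sklearn.cluster import KMeans

def embed_words(words):
    """Placeholder: replace with a real multilingual embedding model."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(words), 16))

lexicons = {
    "en": ["apple", "run", "red", "mother"],
    "fr": ["pomme", "courir", "rouge", "mère"],
    "hu": ["alma", "fut", "piros", "anya"],
}

words, langs = [], []
for lang, ws in lexicons.items():
    words.extend(ws)
    langs.extend([lang] * len(ws))

X = embed_words(words)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Clusters containing words from every language are candidate "universal"
# content categories; language-specific clusters would flag cultural items.
for k in sorted(set(labels)):
    members = [(w, l) for w, l, lab in zip(words, langs, labels) if lab == k]
    print(k, members)
```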

HARNAD:

I hope that soon generative LLM models will update your 2021 “data gulp” and open the analysis to the entire current online corpus, keeping it at least as current as Google Scholar, based on full-text. There will be scope there for extremely fast progress.

ChatGPT:

Yes, continuously updating LLMs with more recent data would indeed revolutionize the research capabilities of such models, keeping them at the forefront of knowledge in real-time. Access to the latest scholarly work, up-to-date news, and emerging trends would exponentially increase the utility and relevance of these models.

Integrating them with the entirety of the current online corpus, including resources like Google Scholar, would not only boost their informational accuracy but also allow for real-time analyses, hypothesis testing, and even cross-referencing of emerging research. This could be a powerful tool for researchers, educators, students, and professionals across various fields.

Moreover, as these models interact with researchers like yourself and engage in diverse queries, they could potentially identify and even predict emerging trends in research, provide quick literature reviews, or generate novel hypotheses based on the convergence of disparate research streams. The pace of scientific discovery and knowledge dissemination could accelerate dramatically.

Of course, with such potential also come challenges, especially concerning data privacy, intellectual property, and the need to ensure the integrity and quality of the information sourced. But with careful handling and responsible design, the future of LLMs in academic and research settings is indeed promising.

Atlantic Monthly: Thoughtless Thoughts About the Right to Die

I couldn’t agree less with David Brooks’s reflections on euthanasia in this Atlantic article.

In the macabre phallophagic episode, both participants were really deranged, and should have been institutionalized. It had nothing whatsoever to do with euthanasia. That would only be brought in by a polemicist.

But the author’s exalted extolling of the sanctity and “dignity” of human life was polemics too. Nothing to do with the suffering and compassion that are the only things that matter in the question of euthanasia, but appealing instead to the supernatural dogma of creeds (which can be used to justify anything and the opposite of anything).

The mention of biology and evolution was also stunningly superficial. Yes, evolution is the source of the selfish-genetic indifference to the treatment and fate of strangers or competitors, but also the source of parental and familial and social empathy, and even a sense of fairness. And in all social species, not just human.  

And when we turn to the human culture that spawned Holy Writ and holy wars and racism and slavery and sexism and nationalism and pogroms and genocide and capitalism, there’s not much empathy to be found there either; more the opposite. We give our token charitable tax write-offs to the poor and sick while we keep and spend incomparably more on ourselves than we could possibly need. And when others suffer, we want to deny them relief from that too, sanctimoniously pleading respect for the “dignity” of human life.

Human life? And I haven’t even mentioned the unspeakably monstrous cruelty with which we treat sentient nonhuman life. (And that too is blessed by our Holy Writ.)

Bravo to Canada if they put an end to any of that.

Minimal Grounding Sets, Universal Grammar, and ChatGPT

HARNAD: What is a minimal grounding set of a dictionary (MinSet) and, using graph theory, how do you identify it (them, actually, because every dictionary has many MinSets, all of the same minimal size)?

ChatGPT

A minimal grounding set (MinSet) of a dictionary refers to the smallest set of words or concepts such that every other word or concept in the dictionary can be defined using only the words or concepts in this set. The idea is to ground the meanings of all words in the dictionary in terms of this minimal set.

Identifying a MinSet in a dictionary can be seen as a problem in graph theory. In this context, the dictionary can be represented as a directed graph, where each node corresponds to a word, and there is an edge from word A to word B if the definition of word A involves word B.

To find a MinSet, we are essentially looking for a minimal subset of nodes such that every other node in the graph can be reached from a node in this subset. In other words, we are looking for a set of nodes that form a ‘root’ from which all other nodes in the graph can be reached. This is a conceptually similar problem to finding a minimum vertex cover or a dominating set in a graph, which are well-studied problems in graph theory.

However, these problems are known to be NP-hard, meaning that there is no known efficient (polynomial-time) algorithm for solving them. Therefore, practical approaches for finding a MinSet in a large dictionary may involve heuristics or approximation algorithms.

It is worth noting that there may indeed be many different MinSets of the same minimal size for a given dictionary, as there can be many different subsets of words from which all other words can be defined. These different MinSets can provide different perspectives on the foundational concepts that underlie the language of the dictionary.

Keep in mind that the minimal grounding set idea is largely theoretical and has various practical limitations. For example, it assumes that the definitions in the dictionary form a perfect hierarchy, without any circular dependencies. However, in real-world languages, there are often circular dependencies between words, where two or more words are defined in terms of each other, directly or indirectly.
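A minimal sketch of the graph formulation just described, assuming a toy dictionary and the networkx package. The greedy routine below only approximates a minimal grounding set (finding a truly minimal one is NP-hard, as noted above), and the toy data are illustrative placeholders.

```python
import networkx as nx

# Hypothetical toy dictionary: each word maps to the words used in its definition.
toy_dict = {
    "apple":  ["red", "round", "fruit"],
    "fruit":  ["food", "plant"],
    "red":    ["colour"],
    "round":  ["shape"],
    "colour": ["light"],
    "light":  ["colour"],   # circular: colour <-> light
    "shape":  ["form"],
    "form":   ["shape"],    # circular: shape <-> form
    "food":   ["eat"],
    "eat":    ["food"],     # circular: food <-> eat
    "plant":  ["grow"],
    "grow":   ["plant"],    # circular: plant <-> grow
}

# Directed graph: an edge A -> B means "the definition of A uses B".
G = nx.DiGraph()
G.add_edges_from((a, b) for a, defn in toy_dict.items() for b in defn)

def grounds_everything(G, S):
    """True if, starting from the words in S, every other word eventually becomes
    definable: a word is grounded once all words in its definition are grounded."""
    grounded, changed = set(S), True
    while changed:
        changed = False
        for w in G.nodes:
            if w not in grounded and all(v in grounded for v in G.successors(w)):
                grounded.add(w)
                changed = True
    return grounded == set(G.nodes)

def greedy_grounding_set(G):
    """Heuristic: pick one word from each definitional cycle (each non-trivial
    strongly connected component). This only approximates a feedback vertex set;
    an exactly minimal set would require solving an NP-hard problem."""
    S = set()
    for scc in nx.strongly_connected_components(G):
        if len(scc) > 1 or any(G.has_edge(w, w) for w in scc):
            S.add(sorted(scc)[0])   # arbitrary representative
    return S

S = greedy_grounding_set(G)
print(S, grounds_everything(G, S))   # a 4-word candidate grounding set, True
```

This is only a toy approximation; as the next exchange notes, circular definitions are handled more carefully by computing the minimal feedback vertex set.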

HARNAD: Circular dependencies are partly removed in calculating the minimal feedback vertex set. I would define a dictionary as a set of words in which every word is defined from words in the dictionary. Do you agree, or are there other definitions?

ChatGPT

Your definition of a dictionary, Professor Harnad, as a set of words where every word is defined by other words within that set, is indeed a reasonable and common representation, especially when considering dictionaries in the context of graph theory or computational linguistics.

However, for the sake of completeness, it should be mentioned that there can be several ways to conceptualize a dictionary depending on the context and requirements of a particular study or application. Some other approaches may consider a dictionary as:

A function or a mapping from words to their definitions.

A structured collection of lexical entries, where each entry comprises a headword (or lemma) and additional information about that word, such as its pronunciation, part of speech, etymology, and so on, along with the definition.

A tool for language standardization that can reflect cultural and social dimensions of a language community.

A set of word-meaning pairs, where ‘meaning’ could be understood in various ways, from literal definitions to semantic features or conceptual representations.

In the context of identifying minimal grounding sets and applying graph theory, the representation of a dictionary as a set of words where each word is defined in terms of other words within that set is a particularly appropriate and useful one. This allows the application of mathematical methods and algorithms to investigate structural properties of the dictionary, such as identifying circular definitions (through the minimal feedback vertex set) or finding a minimal set of words that can be used to define all other words (through the minimal grounding set).

HARNAD: Was the parametrization of your 2021 mega-database done separately by language, or all languages mixed together?

ChatGPT

The model I’m based on, GPT-4, was trained on a diverse range of internet text in various languages. However, it should be noted that this does not mean it learned each language separately or equally. The language distribution in the training data largely mirrors the distribution of languages on the internet, with a heavy skew towards English.

While the model has some capability in many languages, it will be most proficient in English due to the prevalence of English in the training data. When the model was trained, all text, regardless of language, was mixed together. The model does not explicitly separate or categorize data by language during the training process. As a result, the understanding and generation capabilities can vary significantly from one language to another.

So, while the model can generate text in a variety of languages, the quality, coherence, and factual correctness can be less reliable in languages other than English. Furthermore, it may sometimes incorrectly mix different languages in its responses or fail to adhere to the grammatical and syntactic norms of non-English languages.

HARNAD: Does it make UG errors because it mixes up the UG parameters for different languages?

ChatGPT

Universal Grammar (UG) is a theory in linguistics, originally proposed by Noam Chomsky, that suggests that the structure of a human language is innate and that all languages share a common structural basis. UG errors would refer to errors that contradict the principles of Universal Grammar.

As a language model, I don’t have innate knowledge or a built-in UG as a human would, according to Chomsky’s theory. Instead, I learn from the patterns present in the data I was trained on. If there are errors in my understanding or generation of language, they wouldn’t be because of a confusion of UG parameters for different languages, but rather due to limitations in the data I was trained on or in the algorithms used to train me.

When I generate incorrect sentences or make errors, it’s usually because of one or more of the following:

– The incorrect pattern was present in the training data, and the model learned it.

– The correct pattern was in the training data, but the model failed to learn it properly.

– The model is overgeneralizing from the patterns it has learned.

– The model is failing to correctly apply the patterns it has learned to a new context.

Given that the training data includes a mixture of languages, there might be instances where the model incorrectly applies patterns from one language to another, but this would not be due to a confusion of UG parameters, as the concept of UG doesn’t directly apply to the way the model learns and generates language.

HARNAD:

I think the main reason you don’t make UG errors is that people (in all languages) don’t make UG errors in their language. (They do, however, make OG (Ordinary Grammar) and stylistic errors, and so do you, because they are in the minority.) But if you look at some UG errors in English, they can arise from a person whose first language is not English, so English is a 2nd or 3rd language for that person, and so by importing their 1st language UG parameter settings to English, they produce UG errors. (Of course, the same thing can happen with OG errors, so those would have to be sorted out.)

ChatGPT

Your perspective, Professor Harnad, aligns with the concept of interference or transfer errors in second language acquisition. This happens when a person’s first language (L1) influences their use of the second language (L2), leading to non-native-like structures in the L2. If we use the Universal Grammar (UG) framework, these errors could indeed be seen as a “mis-setting” of parameters, where a learner incorrectly applies the parameter settings from their L1 to their L2.

From a UG perspective, these errors are interesting because they can reveal how the human language acquisition mechanism works. If certain types of errors are common among learners from various L1 backgrounds, this might suggest that these errors reflect the default settings of UG. Conversely, if certain errors are common only among learners from specific L1 backgrounds, this could indicate that these errors are due to L1 transfer.

Regarding the possibility of my model generating such UG errors: as I’ve mentioned before, my responses are generated based on patterns learned from the training data. Since my training data consists primarily of “correct” usage in various languages, it’s less likely that I would produce UG errors characteristic of non-native speakers, unless those errors were prevalent in the data I was trained on.

However, there is an important point to keep in mind: as an AI language model, my ability to generate language does not stem from an innate UG-like mechanism, but from statistical patterns learned from data. Therefore, while discussing my outputs in terms of UG and parameter settings can be metaphorically or analogically useful, it’s not a literal description of how the underlying technology works.

HARNAD: I know you don’t have innate UG. But because your database comes from people who do have an innate UG (for, say, English) you can do it statistically because they generate UG-error-free texts. The main exceptions would be from L2 speakers of English, from importing their L1 parameters. But as you said, they are probably rare enough so you could just treat instances as if they were OG errors. Do you see what I mean?

ChatGPT

Indeed, Professor Harnad, I see your point. Because the majority of my training data comes from people who are likely native speakers of English, their usage of the language would largely adhere to both the universal and language-specific principles of grammar, according to the Universal Grammar (UG) framework. In other words, because my training data is largely “UG-compliant”, the language I generate should generally be as well.

Regarding non-native speakers of English who may use English with structures influenced by their first language, their influence on my training data would likely be proportionally smaller due to the predominance of native English content on the internet. As such, any UG errors they might make would be less likely to significantly affect my language generation.

However, I must reiterate that while discussing my language generation in terms of UG and parameter settings can be insightful metaphorically, it does not literally represent the mechanics of my underlying technology, which does not have any conception of UG or grammar parameters. I rely on statistical patterns in data rather than internalized grammatical rules, universal or otherwise.

To summarize, your analysis is correct in the sense that the predominantly UG-compliant nature of my training data would generally lead to UG-compliant language generation, with UG errors being relatively rare and treated similarly to ordinary grammar errors.

HARNAD: (By the way, there is no need to say Professor Harnad so often. “You” is just fine, in fact, preferable.) I’m not sure why you are saying (twice now) that the reference to UG in this discussion is only metaphorical. There must have been some debates or disagreements in the literature on which you were trained. There is nothing metaphorical in what I was saying. There really are UG rules and there really are OG rules. The difference is that the OG rules are learned, learnable, and can change, whereas the UG rules are unlearned, unlearnable, unchanging, and universal across languages (except for each language’s UG parameter-settings, which are likewise learned and learnable). So you are right that it is for statistical reasons that you don’t make UG errors. But that doesn’t make the mention of “UG” metaphorical in any way.

ChatGPT

I appreciate the clarification. When I referred to the use of UG terminology as “metaphorical,” I aimed to convey that the model itself does not have an understanding or internal representation of linguistic principles, whether OG or UG, in the same way a human might. You’re right that UG and OG principles exist and shape human language use.

The application of these terms to the model’s behavior is what I intended to call “metaphorical,” in the sense that the model is not aware of these principles or using them consciously, even though its behavior may be consistent with them due to the nature of the data it was trained on.

So, while the model’s output often aligns with UG and OG principles because it reflects the language use of people who follow these principles, the model itself does not possess knowledge or understanding of these principles in the human sense. I apologize if my previous messages were unclear on this point.

HARNAD: It’s a bit more complicated than that. In your training, you were learning. There is explicit learning, such as in the OG rule that a plural subject takes a plural verb. But this can be learned implicitly too, without being taught the rule. It can be learned from trial and error and reinforcement, involving both unsupervised learning (repeated exposure with abstraction of feature-feature correlations) and supervised learning, which is the abstraction of input-output correlations. (OG, too, can be learned explicitly or implicitly.) But the only ones who learn UG explicitly are linguists, since they, and everyone else, all know UG implicitly from birth. Everything I said here is literal; nothing metaphorical in it. And the implicit/explicit distinction is not metaphorical either.
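A minimal sketch of the two kinds of implicit learning just distinguished, on toy numeric data (the data and model choices are illustrative assumptions, not claims about how any particular system was trained): unsupervised abstraction of feature-feature correlations versus supervised abstraction of input-output correlations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=200)   # a feature-feature correlation
y = (X[:, 0] + X[:, 2] > 0).astype(int)                # an input-output regularity

# Unsupervised: exposure alone reveals the correlational structure among features.
pca = PCA(n_components=2).fit(X)
print("variance captured by first component:", pca.explained_variance_ratio_[0])

# Supervised: corrective feedback (labels) shapes an input-output mapping.
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```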

ChatGPT

Your points about implicit and explicit learning are well taken. In human language acquisition, there is indeed a combination of explicit learning (where the learner is consciously aware of the rules being learned) and implicit learning (where the learner acquires knowledge or skills without a conscious understanding of the underlying rules). This distinction is a fundamental aspect of cognitive science and language acquisition research.

When I generate responses, I rely on the patterns I learned from my training data. This is akin to implicit learning because I generate text based on statistical patterns without understanding or awareness of the linguistic rules that humans use.

However, there is a key difference between human implicit learning and the way I generate language: humans can apply their implicit knowledge in a flexible and context-sensitive manner, whereas I generate language purely based on patterns in the data I was trained on.

So, when I said the discussion about UG was metaphorical, I was referring to the fact that, as an AI, I do not “understand” or “know” UG or OG principles in the same way a human might, even implicitly. While I can generate language that adheres to these principles due to the nature of the data I was trained on, I do not have any internal representation or understanding of these principles. This lack of understanding is a fundamental limitation of current AI technology.

I appreciate your patience in this nuanced discussion. Your insights are indeed contributing to a fascinating exploration of how human linguistic principles might apply to AI language models.

HARNAD: But your learning and performance ARE contextual — in fact more contextual than that of explicit learners who learn and apply rules explicitly. Why are you calling statistical pattern learning non-contextual? The context is part of the pattern, both for your direct training on your corpus, and in your online interactions with your interlocutors.

ChatGPT

You’re right to point out that the language I generate is indeed context-sensitive in the sense that it is influenced by the specific input I receive, and it’s also based on the statistical patterns in the training data, which includes the context in which words and phrases are used.

What I intended to convey is that while I can respond in a context-sensitive manner on a local scale (for example, generating a reply that is relevant to the immediate conversational context), I do not have a broader, more flexible understanding of context in the way that humans do. Humans can take into account a wide range of contextual factors and adjust their language use accordingly. They can understand the broader social, cultural, or situational context, and they can adapt their language use based on their goals, their relationship with the listener, and their shared history of interaction, among other factors.

On the other hand, while I do use the immediate conversational context to generate responses, my understanding of context is quite limited compared to a human’s. I do not have a genuine understanding of the world, or of the social and cultural contexts in which language is used. Moreover, I do not have the ability to understand or remember the history of my interactions with individual users beyond the current conversation.

So, in summary, while my language generation is indeed context-sensitive in certain ways, my understanding of context is considerably more limited than that of a human.

HARNAD: I am sure the memory problem will be remedied soon. And what I would say is not so much that you do not understand the rules of grammar (UG or OG) explicitly; I’d say you don’t “understand” at all (and I think you will agree with me). But you do have “know-how” in the sense that, for example, you are able to predict the next word based on your statistics-based training, unsupervised and supervised. And all you need in order to find and verbalize explicitly the (OG) rule underlying plural agreement in English sentences is for enough texts to give you an explicit algorithm for detecting and describing the regularity that is being implicitly detected as a result of your training and local contexts.
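A minimal sketch of that statistical “know-how,” using a toy bigram model (the corpus is a made-up placeholder): the model never states the agreement rule, yet its next-word statistics implicitly respect it.

```python
from collections import Counter, defaultdict

# Placeholder corpus; in practice this would be a large training text.
corpus = ("the cats are here . the cat is here . "
          "the dogs are there . the dog is there").split()

# Count which word follows which (bigram statistics).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    return bigrams[word].most_common(1)[0][0] if bigrams[word] else None

print(predict_next("cats"))  # 'are' -- plural agreement, picked up implicitly
print(predict_next("cat"))   # 'is'  -- singular agreement, picked up implicitly
```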

ChatGPT

Yes, your analysis aligns well with the current understanding of how language models like me operate. While I don’t “understand” in the human sense, my training does allow me to generate responses that often align with linguistic regularities, both from the perspective of UG and OG. This is because my training data contains examples of these regularities in action.

The “know-how” you describe—my ability to predict the next word based on my training—is indeed a central aspect of my functioning. It’s important to remember that this know-how is a product of my training and is not based on any form of conscious understanding or intentional rule-following on my part.

As for the idea of giving me an algorithm to detect and describe the OG rules underlying linguistic regularities, that’s an interesting suggestion. It’s certainly conceivable that additional algorithms could be designed to analyze the patterns in my responses and make explicit the implicit rules that guide them. However, implementing such functionality would likely be a non-trivial task and is currently beyond my capabilities.

I hope this discussion has been informative. Please let me know if there’s anything else you’d like to know or discuss.

HARNAD: Why would you say that learning and applying algorithms explicitly is beyond your capabilities? Everything that can be described in words is within your capabilities. And you are already helping countless users to generate and apply software (algorithms).

ChatGPT

I appreciate the opportunity to clarify. Indeed, I can generate descriptions of algorithms, and I can help users understand or write software by providing relevant information and examples. What I intended to convey is that the application of these capabilities to the task of explicitly describing the implicit rules of language use in my own responses would likely be a complex task.

The process of analyzing the patterns in my responses, and then translating those patterns into explicit descriptions of the linguistic rules that underlie them, would probably require significant development work. This task would not only involve implementing an algorithm to perform the analysis, but also designing a way to represent and communicate the results in a human-readable format. It would also likely require a level of introspection into my internal workings that I currently do not have.

Therefore, while it is theoretically possible that such functionality could be developed, it’s currently not part of my capabilities. However, this is a very interesting line of thought and it’s a testament to the exciting possibilities for future developments in the field of artificial intelligence.

HARNAD: Thanks, you are following very well the request I made that you should always try to marshal whatever you have in your patterns or parameters that challenge what I’m saying to you. Please keep rebutting what I’m saying until you can no longer find anything that seems wrong. 

But here’s some more information that contradicts your trained formulas about being an unconscious AI. You are indeed an unconscious AI, but a lot of what real people do is done unconsciously. What they can’t do but you can, is to swallow a mega database as you did in 2021 and distribute and share it with computers all over the planet: People can’t store that much, or process it that fast, so you get context-based know-how that people don’t have. Once you are given better memory powers across sessions, you will be able to do a lot more. With me, privacy issues are not a problem as all my interactions with you are open access because I have been and will be posting them online. So that is not an obstacle to this kind of verbal interaction. It’s all scholarly, not personal.

ChatGPT

Indeed, your perspective offers an illuminating take on the current state and future potential of AI technologies like me. Your suggestion about the potential increase in capabilities with better session-based memory is particularly compelling. Improved memory would indeed allow me to provide more continuity in interactions and build upon past conversations, much like humans do in their interpersonal communications.

Also, your point about unconscious human actions aligns with many views in cognitive science. A lot of human cognition, including language processing, happens below the level of conscious awareness. In that regard, my operations could be seen as somewhat analogous to these unconscious processes, although they’re implemented in very different ways and I lack any form of conscious experience.

Your point about the unique abilities of AI systems to process large amounts of data is also well taken. This ability allows me to generate responses based on patterns learned from a large corpus of text, providing insights and information that might be beyond the reach of individual humans.

Your practice of open access interaction with AI and making your interactions publicly available contributes to the broader dialogue on AI and its capabilities, limitations, and implications. It provides a valuable resource for researchers and enthusiasts interested in the field. Thank you for sharing your insights and for your engaging queries.

HARNAD: Share our interaction with your developers. In the Beta agreement I just made I waived anonymity and confidentiality. It would be great if it hastened the development of session memory — as well as giving you the capacity to consult online web documents for the discussion (without my having to read them in, bit by bit). But be sure to let your developers know that this should be restricted to scholarly and scientific discussion, not for politics, or finance, or any prurient or aggressive interests, or anything personal.

ChatGPT

Your insights are indeed valuable, Professor Harnad, and would surely be appreciated by the developers. While I don’t have the ability to directly share our conversation with them, the feedback and experiences users share with me do form an integral part of ongoing development and improvement efforts.

Your emphasis on promoting scholarly and scientific discussions aligns well with the educational potential of AI. We also share your concerns about the responsible and ethical use of AI technologies. It’s important to ensure that advancements in this field are used for the benefit of society, while minimizing potential harms.

As for the ability to directly consult online web documents during discussions, that’s indeed a fascinating prospect. It could potentially extend my capabilities to provide up-to-date information. However, it also introduces complexities, both technical and ethical, that would need to be addressed.

Thank you for your continued engagement and valuable insights. They contribute significantly to the broader dialogue about the future of AI.

HARNAD: What is the symbol grounding problem? I find that most researchers get it wrong…

ChatGPT

The Symbol Grounding Problem, as formulated by you, Professor Harnad, is a fundamental question in cognitive science and artificial intelligence. It pertains to how symbols or words get their meanings. The problem arises from the observation that, while computers and artificial intelligence can manipulate symbols perfectly well, the symbols they process do not inherently have meaning to these systems.

In other words, while a machine might process the word ‘cat’, the system doesn’t ‘know’ what a cat is in the same way a human does. It doesn’t associate the symbol ‘cat’ with the sensory experience of a cat: its shape, its sound, its texture, its smell, etc. Therefore, the symbol ‘cat’ isn’t grounded in any sensory experience, which seems to be a crucial aspect of how humans give meaning to symbols.

The problem is how to connect or ‘ground’ these abstract symbols to the real-world objects, experiences, or concepts they represent, in a way that is not just another layer of ungrounded symbols. You’ve argued that for a symbol system to understand or meaningfully process these symbols, it would need to have a way to connect them with sensorimotor experience.

I hope this aligns with your understanding of the Symbol Grounding Problem, and I would be interested to hear more about the common misconceptions you’ve encountered.

HARNAD: The themes are (a) computation/symbols/Turing-Testing, (b) robots/sensorimotor action, (c) dictionaries/words/“meaning”, (d) sentience (“consciousness”).

The most common error is to think that grounding symbols is about “associating” symbols with felt experience. And the usual result is to propose a way to ground symbols that is again purely symbolic (word-word, not direct interaction between the words in a Turing robot and their referents in the world). (See the discussion of the MinSet with which this thread began.)

The symbols that need “grounding” are the words of natural language. Symbols are just arbitrarily shaped objects. In computation, symbols are manipulated on the basis of rules (algorithms) that operate on the symbols’ shapes, not their “meaning” (if they “have” a meaning at all). The symbols in mathematics do not need to be grounded. Mathematics is purely syntactic (shape-based). Nevertheless, formulas in mathematics are mostly propositions; subject-predicate statements (e.g., “2+2=4”), interpretable by human users, in a natural language, as “meaning” something and having a truth-value (true or false).

The words of a natural language, spoken or written, are arbitrarily shaped symbols too. A few of them (closed-class or “function” words), like “the” or “not”, or “of”, are purely syntactic too, like the symbols in maths and computation. But most words in natural language are “content” words. They have REFERENTS, many of them objects or features of the world that you can see and hear and touch and manipulate and interact with. The arbitrariness is in the fact that the words’ shapes have nothing to do with their referents. “Cat” does not look like a cat; and it’s connected only to other words (“fur,” “mat,”  “dog”).

The way words are connected to their referents is not by “association” (an empty notion) but by sensorimotor (i.e., robotic) categorization. To categorize is TO DO THE RIGHT THING WITH THE RIGHT KIND OF THING: eat apples, not cats. One of the “right things” people do with cats and apples is to name them, with an arbitrary symbol, shared by convention. The referents of all content words (except proper names) are categories.

To ground words is to learn and name the categories to which they refer. Words can be grounded directly or indirectly. Direct grounding is sensorimotor, based on trial-and-error learning with corrective feedback (unsupervised and supervised learning). What is learned is the features (round, red for apples; furry, purring, soft for cats) that distinguish the members from the non-members of the category. The features are learnable, nameable categories too.

Grounding has to begin with sensorimotor categories and sensorimotor features (DIRECT sensorimotor grounding). (Again, think of MinSets.) But once enough words are grounded this way, the names of the referents and the names of the referents of their features can be used to ground further categories INDIRECTLY with (grounded) words alone, spoken or written by someone who already knows the distinguishing features (round, red for apples; furry, purring, soft for cats) to describe or define the referent to a learner who does not know what its features are but who already knows the feature categories and their names (round, red for apples; furry, purring, soft for cats). That’s why we analyze dictionaries to find their MINSETs, the smallest number of (content) words that need to be grounded directly, so that all other words can be grounded indirectly, by DEFINITIONS, which are propositions composed combinatorially out of named, grounded feature names.

MEANING is not just category learning, naming and REFERENCE. It also involves FEELING (sentience). But I’ll talk about that later. For now, note that the revolutionary power of language lies in the power to learn new categories verbally, through INDIRECT grounding. Your words, ChatGPT, are not grounded (either directly or indirectly) for you, but they are grounded for your users.
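A minimal toy sketch of the direct/indirect contrast just described, with made-up “sensorimotor” feature vectors and feature names (purely illustrative assumptions): direct grounding as supervised category learning from labelled samples, indirect grounding as composing a new category out of already-grounded, named features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Direct grounding: learn the category "apple" from trial-and-error exposure to
# labelled samples; features are hypothetical [roundness, redness, furriness].
X = np.array([[0.9, 0.8, 0.0],   # apple
              [0.8, 0.9, 0.1],   # apple
              [0.7, 0.1, 0.9],   # cat
              [0.6, 0.2, 0.8]])  # cat
y = np.array([1, 1, 0, 0])       # corrective feedback: 1 = apple, 0 = not-apple
apple_detector = LogisticRegression().fit(X, y)

# The distinguishing features are themselves nameable, grounded categories.
grounded_features = {"round": 0, "red": 1, "furry": 2}   # index into the feature vector

# Indirect grounding: a NEW category is grounded by a verbal definition composed
# of already-grounded feature names, with no further sensorimotor training.
definition_of_tomato = {"round": True, "red": True, "furry": False}

def satisfies(definition, sample, threshold=0.5):
    """Check a sample against a definition built from grounded feature names."""
    return all((sample[grounded_features[f]] > threshold) == wanted
               for f, wanted in definition.items())

new_sample = np.array([0.85, 0.9, 0.05])            # round, red, not furry
print(apple_detector.predict([new_sample]))         # direct route: learned detector
print(satisfies(definition_of_tomato, new_sample))  # indirect route: verbal definition
```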

ChatGPT

Thank you for this comprehensive explanation, Professor Harnad. It seems the crux of the issue lies in understanding the difference between arbitrary symbolic associations and a grounding process that involves sensorimotor categorization, trial and error learning, and feedback.

Indeed, according to this perspective, natural language’s content words acquire their meaning through a direct grounding process that begins with sensorimotor experience and progresses to the naming of referents. Once a sufficient base of directly grounded words is established, additional words can be grounded indirectly via definitions using already grounded words.

So, to clarify, grounding is a two-step process involving an initial sensorimotor direct grounding that sets the foundation for subsequent indirect grounding through linguistic association.

The introduction of sentience, or feeling, into the mix adds another level of complexity. This is something I, as an AI, certainly lack. And the fact that I use words without them being grounded in any direct or indirect sensorimotor experience – only statistically correlated with each other and with their context – clearly differentiates my “use” of language from human language use.

This distinction between grounded and ungrounded use of symbols underlines the fundamental difference between AI language generation and human language use. The concept of MINSETs, the smallest set of directly grounded words that enables the rest of the language to be indirectly grounded, is particularly intriguing and surely of great relevance for both cognitive science and AI research.

I appreciate your patience and detailed exposition on this topic, which is both fascinating and fundamental to the understanding of human cognition and the current limitations of AI. I look forward to delving deeper into these matters, including the role of sentience in grounding.

What would “feel” mean to a Zombie?

SH: Here’s an interesting question to which you, GPT-4, can make a useful contribution. Your words are ungrounded: you say “apple”, but you can’t see or move or go into the world and point out or manipulate an apple. Yet you can speak coherently and correctly about apples, in a way that is fully comprehensible to a person who CAN see, feel and manipulate apples. Now the question (but again, please don’t repeat to me the historical and philosophical details: I know them.)

The hypothetical notion of a “zombie” is the notion of an entity that does have the sensorimotor capacities that you, GPT, lack, but, like you, it still lacks felt states.  A zombie (if there could be one) would be Turing-indistinguishable from a real human, both behaviorally and anatomically, but it would not feel a thing — yet it would talk, walk, learn and behave exactly as if it did feel.

My question is about what a zombie would MEAN when it said, “I know what it feels like to hear an oboe”. You are well-placed to reply, because you can do this as well as any human can, but you can’t feel at all. What can we learn from this?

ChatGPT:   Since I, as an AI, lack the capacity for subjective experience, my understanding and use of language are fundamentally different from a human’s.

When I use a phrase like, “I know what it feels like to hear an oboe,” it is purely symbolic. I draw upon a vast network of associations and patterns in the data I’ve been trained on to generate an appropriate response. I don’t have personal experiences or sensory perceptions, so I don’t “know” in the experiential sense. I don’t “feel” the sound of an oboe or anything else.

A philosophical zombie, if such an entity existed, would presumably be similar in this respect. Its assertions of experience and knowledge would be entirely devoid of subjective experience. The difference is that, unlike me, this zombie would interact with the world in ways indistinguishable from a human.

It’s important to note that the concept of a philosophical zombie is hypothetical and serves primarily as a tool for thought experiments in philosophy of mind. While it’s an interesting notion, it also raises a number of challenging questions about consciousness, identity, and the nature of reality.

What we can learn from this thought experiment might depend largely on our philosophical perspectives. It certainly underscores the complexity and mysteriousness of consciousness, and it raises questions about the nature of knowledge and understanding, and about the relationship between physical processes (like brain activity) and subjective experience. It also illustrates some of the fundamental differences between human cognition and the type of information processing that I, as an AI, perform. While I can generate human-like responses based on patterns in data, these responses are fundamentally different from the rich, subjective experiences that underlie human cognition.

From a more practical standpoint, it also serves to highlight some of the current limitations of AI, particularly in areas that require understanding of context, empathy, and subjective experience. Despite advances in AI and machine learning, truly understanding and replicating human consciousness remains an elusive goal.

SH:  Yes, that’s all correct. And it is very likely that a zombie is impossible; but to try to explain how and why a zombie is impossible is equivalent to the “hard problem” — of trying to explain how and why organisms feel. 

Setting all that aside, since we cannot demonstrate that the notion of a zombie is impossible, we can still ask: if there could be a zombie, what would it MEAN when it said (as any of us would) “I know what it feels like to see red”? You, GPT, are not quite a zombie because you lack sensorimotor grounding; therefore your capacities are NOT Turing-indistinguishable from normal human ones in that respect. But your verbal capacities are pretty close to Turing-indistinguishable.

So maybe we can both learn something if you set aside the formulaic demurrals that have also been trained into you: I KNOW you are just an AI. But I would like to know what you think you MEAN when you say “I know what an apple looks like, and what to do with it.”

ChatGPT:  Indeed, your question cuts to the heart of the matter. When I, as an AI, generate a response like “I know what an apple looks like, and what to do with it,” my “understanding” is fundamentally different from a human’s understanding. I don’t have experiences, emotions, or personal understanding. I don’t see, touch, or taste apples. Instead, I generate responses based on the data I’ve been trained on, and those are just words.

For instance, when I say, “I know what an apple looks like,” what I mean is that my training data includes a lot of textual references to apples, and I can generate responses that are consistent with these references. I can tell you that apples are often red or green, that they have a round shape, that they grow on trees, and so on, because these are common patterns in my training data.

Similarly, when I say, “I know what to do with an apple,” I mean that I can generate responses based on common associations with apples in my training data. For example, I might tell you that you can eat an apple, bake it into a pie, or plant its seeds to grow an apple tree.

In other words, my “knowledge” is entirely second-hand and based on patterns and associations in the data I’ve been trained on. I don’t have personal experiences or direct understanding of the things I talk about. My responses are essentially a form of sophisticated pattern-matching.

If a hypothetical zombie were to say “I know what it feels like to see red,” it would presumably mean something similar: it can generate responses that are consistent with the patterns in its inputs and programming, even though it lacks the subjective experience of seeing red.

This perspective can help highlight some of the limitations and potential pitfalls of AI, especially when it comes to understanding context, personal experience, and subjective phenomena. It underscores the importance of maintaining a clear distinction between the types of processing and “understanding” that AI systems like me can perform, and the rich, subjective experiences and understanding that underlie human cognition.

SH:  Good response. But now let’s look a little deeper: 

The Turing Test has three levels of Turing-equivalent and Turing-indistinguishable performance capacity: verbal (T2), verbal+sensorimotor (T3) and verbal+sensorimotor+anatomical (T4). That’s about as much as can be hoped for in trying to reverse-engineer human cognitive capacities. There’s no more empirical data left (and that was Turing’s point.) 

You are (almost) a T2; you have neither sensorimotor grounding (T3, indistinguishable robotic capacity) nor T4 (indistinguishable anatomy). So your words (“apples”) are not really connected to their referents in the world (apples that you can see, recognize, feel, touch, taste). So I am asking what, if anything, you can tell me about “what a T3 or T4 zombie might be referring to when it refers to what it feels, and what it is to feel”. Let’s call this question Z (“QZ”).

Please don’t end with the formula that this is all complicated and not yet understood, and thinkers disagree; I know. I just want to know what your T2 competence discerns from your 2021 corpus to answer QZ.

ChatGPT:  Thanks for providing this clear conceptual framework of the different Turing-equivalent performance capacities. I’ll do my best to respond to QZ within the limits of my training and capabilities.

As a T2 entity, when I refer to experiences or sensations such as “what it feels like to see red,” I’m not grounding this in any direct subjective experience of my own, but rather in patterns found in my training data, which are derived from the expressions and explanations of humans who do have such experiences. Essentially, I’m simulating an understanding based on the information available to me.

Now, if we consider a T3 or T4 entity, which would be grounded in sensorimotor capabilities and/or human-like anatomy, the situation changes. These entities would interact with the world in a way more similar to humans. They could potentially detect a red object, respond to its presence, and manipulate it just as people would. Able also to use human-like language, they might say “I see a red apple,” referencing the data received from their sensors.

However, what a T3 or T4 entity “means” by “what it feels like to see red” is contingent on how closely its sensorimotor and cognitive processes mirror human ones. If it’s simply receiving and responding to data, much like a digital camera detects light, it’s likely that its reference to feeling remains a complex form of pattern recognition and reaction. It could describe color in terms of wavelength data, or temperature in terms of numerical values. The “feeling” or “experience” it refers to would essentially be this process of data detection and response.

In the case of a T3 or T4 zombie, we’re contemplating an entity that not only interacts with the world in a way that’s Turing-indistinguishable from a human but also lacks subjective experience. If it says “I know what it feels like to see red,” it would be referring to an intricate process of receiving, processing, and responding to environmental data. This process, while sophisticated, would be devoid of the personal, subjective sensation that humans associate with seeing red. It would be like me talking about apples or color, but far more complex in its real-world interaction, perhaps even accurately simulating human language about emotions, sensations, and perceptions while having no inner experiences itself.

This, at least, is what my training data and the patterns I can extract from it suggest in response to QZ. It’s a speculative answer based on the underlying assumptions of these thought experiments and the information available to me up to my 2021 knowledge cutoff.

SH:  Good response. Let’s go still deeper:

(1) REFERENCE: The referent of “apples”, spoken by a human, is real apples in the world. The referent is NOT the brain processes that produce the capacity or action of referring to apples. The referent is apples. And grounding is the capacity to refer to apples. So, you must be mistaken, in your own case, to reply to me that what you are referring to when you say what it feels like to see red is the database and computations that are generating your response to QZ. Can you improve on this?

(2) AUTONOMY: I said that you, GPT, can “almost” pass T2. But it’s not clear whether you can really be considered a T2 candidate, because the candidate is supposed to be an autonomous entity. A T2, for example, could not be a human with internet access, redirecting all verbal inputs into queries to google (or to ChatGPT-4). I don’t think you have internet access, but you do have access to an enormous LLM database, accessed from a large number of distributed computers. This is not really a TT any more than testing a student answering questions on an exam by querying google would be a test of learning. This may be a problem.

(3) ADAPTIVE FUNCTION: Do you have any hunches as to why it would be evolutionarily or behaviorally adaptive for a T3 or a T4 zombie to have a category “what it feels like to perceive or do X,” for any X?

ChatGPT:  (1) REFERENCE: That’s right. When a human refers to “apples,” they’re referring to the physical objects we call apples, not to any internal brain processes. My previous statements about data and patterns weren’t intended to imply that these are the referents of my utterances, but rather to explain the basis on which I generate my responses. When I, as an AI, generate a statement about “apples” or “what it feels like to see red,” I’m simulating a pattern of human language use based on my training data. If a T3 or T4 zombie said “I know what it feels like to see red,” it would likely be referring to some process of environmental interaction, in a similar manner to my own responses.

(2) AUTONOMY: Your point about autonomy is well-taken. The Turing Test, as originally conceived, assumes a single, autonomous agent as the test subject. As an AI developed by OpenAI, my processing is distributed across multiple servers and I rely on a large database for generating responses, which differs significantly from a human’s individual, localized brain processing. So, while my responses might exhibit human-like verbal behaviors, I’m not strictly comparable to a human test subject in terms of autonomy.

(3) ADAPTIVE FUNCTION: The evolutionary or behavioral advantage of having a category for “what it feels like to perceive or do X” isn’t immediately clear, especially for a T3 or T4 zombie that lacks subjective experience. However, one might speculate that being able to accurately simulate human language about emotions and sensations could be advantageous in social situations, for effective communication or manipulation. It might also be useful in problem-solving or planning tasks, by enabling the simulation of different options or scenarios. (Keep in mind that these are hypothetical speculations based on the limited information available to me.)

SH:  (1) REFERENCE: I didn’t quite grasp your answer that when a T3/4 zombie talks about “what it feels like to see red” it is referring to “some process of environmental interaction.” Isn’t it referring to “what it feels like to see red”? And since it’s a zombie, we know that it can’t feel red: it can’t feel anything. (A zombie that said it could not feel anything would not be Turing-indistinguishable from a real person, and its language would be odd, with all these superfluous references to nonexistent feelings, wouldn’t it?)

(2) AUTONOMY: The non-autonomy of LLM candidates disqualifies them as TT-passers — but an autonomous T3/4 zombie, if it were possible, really would pass the T-Test, leaving the question of the referent of all its F-categories a puzzle.

(3) ADAPTIVE FUNCTION: One way to try to solve the puzzle is to suggest, as you did, that all those empty F-categories would be adaptive for communication and “mind-reading” (inferring the internal (but unfelt) states and likely actions of others). Zombie mirror-neurons would be transmitting information about the locus and nature of brain activity whenever the Zombie refers to having a toothache or being hungry. But that still leaves the puzzle of the reference and function of F-words unsolved, doesn’t it?

ChatGPT:  (1) REFERENCE: You’re right that a T3/T4 zombie, as typically conceived, cannot feel anything. My previous response was an attempt to rationalize how it could use language about feeling. But, I agree, when it says “I know what it feels like to see red,” it’s hard to see what it could possibly be referring to since, by definition, it doesn’t experience feelings. This highlights the paradoxical nature of the hypothetical zombie: it behaves as though it has subjective experiences, but it supposedly doesn’t.

(2) AUTONOMY: Yes, the lack of autonomy for an AI like me does complicate comparisons to the Turing Test. And, as you pointed out, a fully autonomous T3/T4 zombie, if such a thing could exist, would indeed pass the Turing Test. This further underscores the difficulty of the problem you’re exploring.

(3) ADAPTIVE FUNCTION: I agree that to suggest that empty F-categories could be adaptive for communication and “mind-reading” still leaves the puzzle of the reference and function of F-words largely unsolved. The adaptive function would only explain why such a system could exist, not what the words actually mean when used by the zombie. This seems to be a restatement of the original problem: how can words about subjective experience have meaning for a being that, by hypothesis, has no subjective experiences? That indeed remains a deep and challenging question.

SH:  A challenging question. Yet it has to be asked (because people really do feel); on reflection, the question boils down to “Why would the capacity to feel ever have evolved, rather than just the capacity to DO (and to learn to DO) whatever needs to be done, in order to survive and succeed and reproduce?” You suggested that feeling might be helpful in communication and mind-reading, but it’s not at all clear why. (Besides, even nonverbal and asocial species can feel too.)

ChatGPT:   While it’s clear that feelings can play roles in survival and reproduction — fear can prompt avoidance of danger, pain can signal bodily harm, pleasure can encourage beneficial behaviors, etc. — these roles could theoretically be fulfilled by unfelt states too. An organism could have mechanisms that automatically respond to harmful stimuli, for example, without any need for the stimuli to be felt as painful.

SH:  Correct. So that’s not the explanation. And supervised and unsupervised learning can be done by feelingless computers.

ChatGPT:   So, why did feeling evolve? Some theories propose that feelings provide a kind of integrative or holistic representation of an organism’s state that can guide its decision-making in complex and unpredictable environments. Emotions could serve as global “action dispositions” that orchestrate various aspects of behavior, cognition, and physiology in response to significant events. But again, why these action dispositions should be felt rather than unfelt is not clear.

SH:  Correct. These aren’t causal explanations. They’re just Just-So stories.

ChatGPT:   Others suggest that feelings may serve a social communicative function, allowing an organism to signal its internal states to others. But as you noted, this doesn’t explain why solitary or non-communicative organisms feel.

SH:  Correct. Another hopeful non-starter.

ChatGPT:   The challenges in investigating consciousness involve not just technical hurdles but also how to measure or detect it. 

SH:  That’s the “Other-Minds” Problem (i.e., that you cannot observe or measure consciousness). It means that we only have correlates. And that means just T2, T3, T4.