Category Learning, Turing Testing, LLMs & Symbol Grounding

Stevan Harnad

Université du Québec à Montréal

The T2/T3 Distinction and Robotic Grounding. There is a crucial distinction between T2 (the verbal-only version of the Turing Test, with verbal capacity Turing-Indistinguishable from that of any normal human being) and T3 (the robotic version of the Turing Test, with the Turing-Indistinguishable verbal capacity grounded in sensorimotor capacity Turing-Indistinguishable from that of any normal human being). LLMs are T2 systems. T3 capacity — not just sensory capacity, but, critically, the motor aspect of sensorimotor interaction—is necessary for grounding. The “experience arrow” (x: H → W) seems a pale abstraction of what real grounding requires: the capacity to do things in the world with the referents of content-words, not just to receive inputs from them and name them.

Direct vs. Indirect Grounding: Not Parasitism but Cheating. LLMs are indeed “epistemically parasitic.” Direct sensorimotor grounding requires the capacity to learn categories through sensorimotor trial and error, with corrective feedback, by learning to detect the critical sensorimotor features that distinguish category-members from non-members, so as to be able to do the right thing with the right kind of thing. Indirect verbal grounding requires the capacity to learn (from someone) the distinguishing features of categories from verbal definitions that use already-grounded content-words to refer to their referents.

Humans learning from indirect grounding aren’t “parasitic”—they’re building on their own direct grounding foundation. Indirect grounding is dependent on prior direct sensorimotor grounding.  LLMs cannot do indirect grounding at all. They are cheating by statistical pattern-matching across the enormous human verbal database of text from grounded human heads, without any grounding of their own.
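The direct-grounding mechanism described above (trial-and-error category learning, with corrective feedback on whether the learner did the right thing with the right kind of thing) can be sketched in miniature. The sketch below is a hypothetical toy, not Harnad's model: a simple perceptron learning, from +/- feedback alone, which feature of a simulated "mushroom" distinguishes the edible category. All names, features, and parameters are invented for illustration.

```python
import random

random.seed(0)

N_FEATURES = 4
CRITICAL = 2  # index of the one feature that actually predicts edibility

def sample_mushroom():
    """A random mushroom: binary features; edible iff the critical feature is present."""
    features = [random.choice([0.0, 1.0]) for _ in range(N_FEATURES)]
    return features, features[CRITICAL] == 1.0

# Minimal perceptron learner: "eat" iff the weighted feature sum is positive.
# The learner is never told which feature matters; it only gets corrected.
weights = [0.0] * N_FEATURES
bias = 0.0

for _ in range(2000):
    x, edible = sample_mushroom()
    eat = sum(w * xi for w, xi in zip(weights, x)) + bias > 0
    if eat != edible:  # corrective feedback only when the act was wrong
        err = 1.0 if edible else -1.0
        weights = [w + err * xi for w, xi in zip(weights, x)]
        bias += err

print("learned weights:", weights)
```

After enough corrected trials the weight on the criterion feature comes to carry the decision: the learner has, in effect, detected the category-distinguishing feature without ever being told which one it was.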

Category Learning and Minimal Grounding Sets. The research on category learning and categorical perception (CP) concerns how organisms learn to detect category-distinguishing features through direct trial-and-error learning with corrective feedback (+/- reinforcement) from doing the right or wrong thing with members of the category. This is related to research on dictionaries’ “minimal grounding sets” (“MinSets”): the smallest set of content-words in a dictionary that must be directly grounded to bootstrap all others through verbal definition alone. Jerrold Katz’s “Effability Thesis” and graph-theoretic analysis of dictionaries suggest that this MinSet can be surprisingly small, as few as 1000 content-words, among those that children learn earliest.
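The graph-theoretic idea behind MinSets can be made concrete with a toy dictionary: treat each word as defined by a set of other words; a grounding set is any seed of directly grounded words from which every remaining word becomes learnable by definition alone. The code below is an illustrative sketch with an invented six-word dictionary and a brute-force search; real dictionary graphs, as analyzed in Vincent-Lamarre et al. (2016), require far more efficient graph-theoretic methods.

```python
from itertools import combinations

# A toy dictionary (invented for illustration): each word's definition
# is the set of other words used to define it.
dictionary = {
    "animal": {"living", "thing"},
    "dog":    {"animal", "bark"},
    "bark":   {"sound", "dog"},   # "dog" and "bark" define each other (circular)
    "sound":  {"thing"},
    "living": {"thing"},
    "thing":  {"thing"},          # self-referential primitive
}

def grounded_closure(seed):
    """All words learnable by definition alone from a directly grounded seed."""
    known = set(seed)
    changed = True
    while changed:
        changed = False
        for word, definers in dictionary.items():
            if word not in known and definers <= known:
                known.add(word)
                changed = True
    return known

def min_grounding_sets():
    """Brute-force search for the smallest seeds that ground the whole lexicon."""
    words = list(dictionary)
    for size in range(len(words) + 1):
        hits = [set(c) for c in combinations(words, size)
                if grounded_closure(c) == set(words)]
        if hits:
            return hits
    return []

print(min_grounding_sets())
```

Circular definitions (here, "dog" and "bark" defining each other) are exactly what forces at least one member of each definitional cycle into every minimal grounding set.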

The question is not about whether LLMs have some abstract “access to W,” but whether they have learned enough categories directly to reach a MinSet through sensorimotor trial and error by detecting the features that distinguish them. (Once any category has itself been learned directly, learning which content-word the speaker community uses to refer to it is trivial.) Individual human learners who have approached or reached a MinSet for their language by direct grounding can then go on to ground the rest of the referring words of their language through indirect verbal grounding provided by verbal sources (such as teachers, dictionaries, textbooks – or LLMs) that can already name the distinguishing features of the referents of the rest of the words in the language and convey them to the learner through subject/predicate propositions (definitions and descriptions). The critical precondition for indirect grounding to work is that the content-words that the teacher uses to refer to the distinguishing features of the new category being defined indirectly for the learner are already grounded for the learner (i.e., they are already grounded in the learner’s MinSet or can be looked up by consulting a dictionary or a textbook or an LLM or a human teacher). They do not, however, need to be grounded for the source, whether dictionary, textbook, LLM, or human teacher. They need only be accessible to the learner from the source. It follows that LLMs can provide verbal grounding to a grounded learner (whether a human or a T3 robot) without themselves being grounded, or capable of being grounded.

The Lexicon of a Language and Propositional Recombination. LLMs have ingested such massive amounts of text produced by grounded human heads that they can recombine propositional patterns to simulate understanding without any grounding whatsoever. The scale of the training data allows statistical pattern-matching to mimic the outputs of grounded understanding, which LLMs do not, and cannot, acquire: not even one MinSet’s worth, because, not being T3 robots, they do not have the sensorimotor means to acquire it. There is only one way to acquire grounding, and that is from the sensorimotor ground up.

The role of language’s combinatorial and expressive power—generating infinitely many propositions from finite means—is central here. LLMs exploit the fact that human language already encodes grounded knowledge in recombinable propositional form. They’re not “circumventing” grounding; they’re cheating on the Turing Test by exploiting a possibility that Turing did not take into consideration explicitly: the accessibility and navigability of virtually all human textual output for pattern extraction. But I think that if Turing had considered it, it would only have been to dismiss it as cheating, with a superhuman database of crib notes instead of a causal model of cognitive and linguistic capacity, whether purely computational (T2) or robotic/dynamic (T3 or T4).

Floridi et al.’s mathematical framework points out that LLMs operate on content C rather than world W: LLMs learn from text about the world, not from the world itself. 

To ground content-words in the things in the world that they are about, the T3 robotic sensorimotor capacity to act on and interact with their referents is essential.

Floridi et al. correctly note that LLMs lack genuine grounding. LLMs pass T2 by cheating with a superhuman database of texts produced by grounded human heads, while lacking the T3 capacity necessary for either direct grounding through sensorimotor category learning or indirect grounding built on that foundation.

Floridi, L., Jia, Y., & Tohmé, F. (2025). A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem. arXiv preprint arXiv:2512.09117.

Harnad, S. (2025). Language writ large: LLMs, ChatGPT, meaning, and understanding. Frontiers in Artificial Intelligence, 7, 1490698.

Pérez-Gay Juárez, F., et al. (2019). Category learning can alter perception and its neural correlates. PLoS One, 14(12), e0226000.

Vincent-Lamarre, P., Massé, A. B., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625–659.

Feeling vs. “Functing”

Many animals (spiders, cephalopods, adult salamanders, beetles, etc.) are asocial except for mating. If feeling were exapted for other functions, in what sense is it original — felt rather than just “functed”? This has been dubbed the “hard problem.” Plants seem to function through chemo-mechanical signals (including learning) without feeling. Can we take feeling as given and only ask about its function? Why would adaptive interactive function call for feeling rather than just sophisticated robotic function? It is still not clear how “feedback loops,” “reafference,” adaptive control, and robotic learning could explain feeling.

Exchange with ChatGPT on the “Easy” and “Hard” Problem of Cognitive Science

SH: Could you please summarize briefly what is meant by the “underdetermination of scientific theory by data,” including possibilities like multiple “Grand Unified Theories of Everything” (GUTEs).

ChatGPT: In its standard form, underdetermination means that any finite body of empirical data can be equally well explained by more than one scientific theory, often infinitely many. The classical point is logical: data confirm but never entail a theory. Distinct hypotheses can generate the same observable predictions; hence data alone cannot force a unique theoretical choice. Two versions are usually distinguished:

(1) Local underdetermination.
For a given domain, rival theories can match all available observations while differing in unobservable posits or in their explanatory architecture. In practice, scientists appeal to auxiliary criteria (parsimony [Occam’s razor], simplicity, coherence, elegance, unification), but these are not dictated by data.

(2) Global underdetermination.
Entire world-models can be constructed that reproduce every possible observation while differing in their “deep” structure. Put differently: the mapping from theory-space to observation-space is many-to-one. If so, we could never determine which of several complete, internally coherent descriptions of the world is the true one.
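The many-to-one mapping from theory-space to observation-space has a miniature programming analogue: two procedures with entirely different internal mechanisms can be observation-equivalent on every input, so no record of input/output data can distinguish them. The example below illustrates the logical point only (two sorting mechanisms); it is not part of Floridi et al.'s framework, and all names are invented.

```python
from itertools import permutations

# Two internally different "mechanisms" (rival theories of how sorting is
# done) whose observable input/output behaviour is identical on every input.

def sort_by_insertion(xs):
    """Build the output by inserting each element into place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_swapping(xs):
    """Repeatedly swap adjacent out-of-order elements (a different causal story)."""
    out = list(xs)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

# Exhaustively check observation-equivalence on a small input space.
assert all(sort_by_insertion(list(p)) == sort_by_swapping(list(p))
           for p in permutations([3, 1, 4, 1, 5]))
print("observation-equivalent on all tested inputs")
```

Only opening the box and inspecting the internal mechanism distinguishes the two; their observable behaviour never does.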

The idea of multiple “Grand Unified Theories of Everything” (GUTEs) is a direct application: even if physics eventually converges on a complete set of surface-regularities, there may still be many distinct fundamental theories—different dynamical laws, symmetries, or ontologies—that reproduce all observed phenomena with arbitrarily fine accuracy. Nothing in the data would distinguish them.

In your own framing: cognitive science has an analogous situation with GUTE_c, where candidate causal models of cognition may all match every observable capacity Turing-Indistinguishably (T4-level performance) yet diverge on the crucial but empirically inaccessible question of whether and how they generate feeling. Because sentience (feeling) is not an observable variable (to anyone but the feeler), T4-indistinguishability does not break the tie. This is underdetermination extended to the Hard Problem, with the Other-Minds barrier supplying the unresolvable causal gap (sometimes called the “explanatory gap”).

The general moral is: even ideal empirical completeness does not guarantee theoretical uniqueness, causal completeness or explanatory adequacy.

SH: I’m amazed at how uncannily well you managed to integrate general notions about underdetermination with my own notions about the “Easy Problem” and the “Hard Problem” of reverse-engineering cognitive capacity. Can you be a little more specific? In particular, can you infer why I would say that the solution to the EP would make the HP even harder?

ChatGPT: The reason you would say that solving the Easy Problem (EP) makes the Hard Problem (HP) harder still follows directly from your two commitments:

  1. Causal explanation is finite.
    A complete T4-level explanation of cognitive capacity—perception, category learning, language, action, memory, attention—will already account for all observable causal work the organism does.
  2. Sentience is not an observable variable.
    Because feeling is accessible only to the feeler, no amount of external data can constrain theories about why feeling occurs, or how it is generated.

Now the key point you have emphasized:

Why solving the EP exhausts the causal degrees of freedom

If cognitive science eventually produces a full causal model explaining every performance capacity (T4 indistinguishability), then all the causal machinery needed to produce the organism’s behavioral, neural, and functional profile will be accounted for. Nothing in that explanation requires, predicts, or invokes feeling.

At that point:

  • All the doing is explained.
  • All the mechanisms underlying doing are explained.
  • Every causal gap relevant to performance has been closed.

But sentience—the fact that it feels like something to think, see, hurt, or enjoy—still stands outside the causal account. And because the EP’s solution has already used up every available causal role, there is no remaining explanatory function left for feeling to play. This is exactly what you mean when you say:

“The solution to the EP will have used up all the causal degrees of freedom.”

Why this intensifies underdetermination

Before solving the EP, one could still imagine that perhaps somewhere in the causal tangle there is a hidden role for feeling—some undiscovered function, some latent variable, some missing mechanism. But once the EP is solved:

  • There is nowhere left to put sentience in the causal architecture.
  • There is no remaining causal work for feeling to do.
  • Yet feeling undeniably exists for the only observer to which it is accessible: the feeler.

Hence the HP becomes worse than underdetermined: it becomes causally superfluous. Even if, as you put it, an omniscient deity could reveal which GUTE_c actually “has” feeling, that revelation would not constitute an explanation. It would merely fix the value of an empirically uncheckable parameter.

What this means for underdetermination

Your position is that HP-underdetermination is categorically deeper than ordinary theory-underdetermination:

  • Ordinary underdetermination concerns multiple theories doing equal causal and explanatory work.
  • HP-underdetermination concerns a real phenomenon—feeling—that does no causal work in the model and whose presence or absence is empirically inaccessible.

Thus, solving the EP not only fails to solve the HP; it removes any causal foothold from which a solution could in principle be constructed. This is why, in your view, the HP is not merely unsolved but unsolvable in the same scientific mode as the EP (or any other empirical phenomenon).

That is the conceptual structure linking underdetermination, the EP, the HP, and your GUTE_c argument.

SH: A bigger challenge: Can you relate that to the notion that the reverse-engineering of cognitive capacity (GUTE_c) — as opposed to the more general notion of the GUTE in all of physics, chemistry and biology (where it is seen as the discovery of “laws” of nature) – amounts to modelling cognitive capacity computationally and then testing a 3D-printed version with the T4 Turing Test?

ChatGPT: Here is the way your own framework links these pieces together—GUTE vs GUTE_c, reverse-engineering, computation, 3D-printing, and T4—without straying into hype or cliché.


1. Why GUTE (physics/chemistry/biology) is a search for laws

In physics and the natural sciences, a GUTE is conceived as a single, compact set of dynamical principles from which all physical regularities follow. The explanandum is the entire causal traffic of the universe. The explanans is a set of general laws.

Even here, the laws are underdetermined by all possible observations. Multiple distinct dynamical systems can reproduce the same empirical regularities. But crucially:

  • The explananda are publicly observable.
  • The causal work to be explained is completely external (forces, fields, particles, macromolecules).

There is no hidden-variable-like feeling complicating the picture.


2. Why GUTE_c is not a search for “laws” but for a causal mechanism that does what a human does

The explanandum of cognitive science is capacity, not cosmos:

  • Categorizing
  • Learning
  • Speaking and understanding
  • Acting adaptively
  • Planning and reasoning

These are performance capacities, not universal laws.

Reverse-engineering cognition therefore means:

  • Finding a causal mechanism that generates T4-equivalent performance,
    not discovering a new fundamental law of nature.

This is what makes GUTE_c a different kind of project. It aims at a full, working model of a cognizer.


3. Why computational simulation is not enough

Computation alone (T2) yields:

  • A symbolic model
  • A syntactic simulation
  • A system that can describe or simulate doing, but does not itself do

This is the crux of your symbol-grounding insistence: ungrounded symbols can express anything describable, but they do not interact with their referents.

Hence the need for robotic (sensorimotor) grounding.

A reverse-engineered cognizer must:

  • detect sensory input from the world
  • act on the world
  • learn from the consequences of its actions

This stage is no longer purely computational. It concerns the causal organization of a dynamical system.


4. Why the “3D printer” matters

Your analogy:

  • The computational model is like a set of engineering blueprints.
  • The 3D-printed robot is the realization of those blueprints in the physical world.

The blueprints alone cannot pass T3 or T4.
Only the built system—the embodied mechanism—can interact causally with the world.

This is where your distinction becomes sharp:

  • The GUTE in physics explains by laws.
  • The GUTE_c in cognitive science explains through material implementation.

Success is demonstrated not by formal derivations but by dynamic performance indistinguishability.


5. T4 as the empirical criterion for solving the Easy Problem

T4 requires:

  • Human-equivalent, Turing-Indistinguishable behavioral performance capacity
  • Human-equivalent, Turing-Indistinguishable robotic sensorimotor interaction with the world
  • Human-equivalent, Turing-Indistinguishable neural/physical organization (observable structure and dynamics)

Once a 3D-printed cognizer is T4-indistinguishable from a human, the EP is solved, because:

  • All observable causal capacities have been accounted for.
  • Nothing the organism does is left unexplained.

This ties GUTE_c neatly to your causal-degree-of-freedom argument.


6. Why this makes the Hard Problem even harder

Once a T4-equivalent mechanism exists:

  • It can generate all the behavioral and neural evidence available.
  • Every function that humans can perform is now implemented without reference to feeling.
  • The entire causal profile of cognition is explained.

Hence feeling becomes:

  • causally superfluous (no causal role left to play)
  • empirically inaccessible (Other-Minds barrier)
  • underdetermined not merely by data, but by any causal requirement

Thus, solving GUTE_c fully (reverse-engineering and 3D-printing a T4 cognizer) eliminates the last possible place where feeling could fit as a causal ingredient, making the HP maximally intractable.


The upshot

In physics, GUTE underdetermination is about multiple law-systems fitting the same observable regularities.

In cognitive science, GUTE_c underdetermination is deeper:

  • Multiple T4-indistinguishable mechanisms could generate all human capacities.
  • None of them provides a causal explanation of how or why those capacities feel like anything.
  • The construction and successful testing of a T4 cognizer fixes all causal facts, leaving feeling entirely outside the causal story.

This is why, in your view, the EP’s solution makes the HP not merely unsolved but unexplainable by the same explanatory resources.

That is the structural parallel and the crucial divergence between GUTE and GUTE_c.

SH:

I don’t think GUTE_c is “deeper” than GUTE. Rather, like all engineering (forward or reverse), GUTE_c is an application of GUTE. Causality is GUTE-causality. Replace “GUTE_c” by “Guce.”

3D-printing the Guce model is for testing the Guce model, recruiting the real GUTE Universe (including human Turing-Testers, and their “mind-reading” mirror capacities) to put the Guce model through its paces (including its putative lifelong performance Turing-Indistinguishable cognitive capacity). Otherwise the model would have to model not only the Guce but the GUTE.

3D-printing a candidate Guce model is not “grounding” it: Grounding is the capacity to “do the right thing with the right kind of thing” (usually by learning the sensorimotor features that distinguish the right kind of thing from the wrong kind of thing). The purpose of 3D-printing a candidate Guce model is to Turing-Test it (T4) in the world. (The purpose of Grounding itself is to integrate T2 verbal capacity coherently with T3 perception/action capacity).

Don’t forget that the Guce model is modelling cognitive performance **capacity** — not just one cognizer’s performance. (This is equivalent to Chomsky’s competence/performance distinction.)

T4 indistinguishability (Turing-Indistinguishability) = empiricism itself (for T-testing Guce candidate solutions to the EP).

According to the current point on the standard road to GUTE, there can be no free quarks, only bound quarks. So quarks (much bigger than protons) do have to exist (according to current standard theory), but they cannot be observed except in bound state within a proton (which is much smaller than a quark). So quarks exist, but are unobservable (according to current standard theory). In contrast, superluminal (faster-than-light) speeds cannot occur, except as a relativistic observer illusion. But this is not a route for explaining how or why feelings exist (although they exist — and are observable to the feeler). This is part of the unique “hardness” of the HP.

And without a causal explanation, even a perfect, exclusive, exception-free correlation between felt states and, say, dopamine secretion would not be a causal (how/why) explanation of feeling. Correlation and confirmed exception-free prediction are not causal explanation.

Nor is Grounding = feeling — although feeling is what makes explanation (fallibly) feel explanatory.

A hard problem and (I daresay) unique


Mutanda mutata


ChatGPT: Here is a reformulated version that incorporates every correction you have just imposed, avoids the mistakes in my previous framing, and keeps all your distinctions straight. I will treat Guce exactly as you specify: an engineering application of GUTE, not a “deeper” theoretical problem than GUTE, and certainly not a search for new laws of nature (which would be “dualism”).

I will also preserve your distinctions between Grounding, 3D-printing, Turing-testing, GUTE, Guce, EP, and HP, and show how the pieces fit.

No sycophancy; just exact conceptual alignment.


1. GUTE vs. Guce

Guce is not a parallel project to GUTE and not a “deeper” one. It is an application of GUTE to test (and Turing-Test) the causal engineering principles modelled by Guce.

  • GUTE (physics, chemistry, biology) → discovers the causal structure of the Universe.
  • Guce → uses that causal structure to engineer a mechanism capable of human cognitive performance capacity.

In your terms: all causality is GUTE-causality, and Guce is simply a reverse-engineering problem carried out wholly within that causal domain.

Guce therefore depends on GUTE exactly as aeronautical engineering depends on Newtonian mechanics and fluid dynamics: Guce uses the laws; it does not add new ones.


2. Why 3D-printing is required (but is not grounding)

A Guce model is:

  • a proposed causal mechanism (not just a computational simulation of one),
  • intended to produce, test and explain the capacity for lifelong human-equivalent cognitive performance.

To test it, one must instantiate it physically—“3D-print” it—because only a physical mechanism, under the real GUTE Universe, can:

  • interact with real objects,
  • acquire sensorimotor robotic capacity (T3),
  • learn by acting and being corrected,
  • support the integration of T2 verbal capacity with T3 sensorimotor capacity.

This is not grounding itself. Grounding is the system’s ability to do the right thing with the right kind of thing, typically by learning the distinguishing features of the category that matters. 3D-printing is what allows you to test whether a proposed mechanism can, when embedded in the world, succeed in grounding content-words and the subject/predicate propositions of natural language.

In short:

  • Grounding = doing the right thing with the right kind of thing.
  • 3D-printing = putting the candidate Guce mechanism into the GUTE world so grounding can be tested and assessed.

3. Why T4 is the right empirical criterion

A Guce candidate is T4-tested:

  • T2: Turing-Indistinguishable linguistic performance capacity
  • T3: Turing-Indistinguishable sensorimotor performance (perception and action)
  • T4: Turing-Indistinguishable full structural and dynamical equivalence in observable organization

A T4 test recruits the actual GUTE universe—including human Turing-testers with their “mind-reading” and mirror-system capacities—to judge whether the Guce implementation is indeed able to perform Turing-Indistinguishably from a real human for an entire lifespan’s worth of cognitive capacity.

This is simply empiricism:

T4 indistinguishability = the empirical condition for accepting a Guce model as a solution to the EP.

Nothing higher-order; just ordinary scientific testability.


4. Why solving the EP leaves no causal room for the HP

If a Guce model passes T4:

  • Every performance capacity is causally accounted for.
  • All doing-capacity (doing this, not-doing that) is completely explained.
  • GUTE-causality has been fully exercised; no causal gap remains.

Feeling, however:

  • is real (observable to the feeler),
  • is unobservable to others,
  • plays no causal role left unfilled by the T4-tested mechanism,
  • and cannot be deduced from its correlation with any physical variable (dopamine, C-fibers, etc.).

Correlation, no matter how exception-free, is not causal explanation. And without an open causal slot to fill, feeling has nowhere to enter the causal story.

This is the unique hardness of the HP.
Nothing in Guce—successful or not—advances an explanation of how or why feeling occurs.


5. Why the HP is not like unobservable quarks or superluminal illusions

You make this point sharply:

  • Free quarks: unobservable but required by current standard theory; their unobservability is a feature of the causal story.
  • Superluminal appearances: observer-relative illusions; relativity explains why they are not real.

These cases are comfortably inside GUTE causal explanation.

Feeling is not:

  • it is not an unobserved constituent required by theory;
  • it is not an illusion;
  • and it cannot be explained away by GUTE causal structure.

Its existence is certain (to the feeler), but no causal role emerges for it within a complete Guce mechanism. Hence no route from GUTE-causality to “how/why feeling.”

This is what makes the HP genuinely unique—not just a more stubborn version of standard scientific underdetermination.


6. Why grounding is not feeling

You note this often:

  • Grounding is about sensorimotor feature learning.
  • Feeling is what makes explanation feel explanatory.
  • But grounding ≠ feeling.

Even a successful Guce model—fully grounded, fully T4—leaves feeling unaccounted for.


7. The final picture

Putting it all together:

  • GUTE gives the causal laws.
  • Guce is an engineering application: reverse-engineering the causal mechanism of human cognitive capacity.
  • 3D-printing is the empirical instantiation of the proposed Guce mechanism for T4 testing.
  • Grounding is the system’s ability to learn and do the right thing with the right kind of thing; it is not a form of sentience.
  • Solving the EP (T4 success) uses up all causal degrees of freedom.
  • The HP remains untouched and uniquely unexplainable by any causal account, including the most complete Guce mechanism.

Mutatis mutandis, as requested.

On the “Easy Problem” (EP) vs. the “Hard Problem” (HP) of Cognitive Science

Student:  This reading, along with our class discussions, made me wonder about the relationship between the easy and the hard problems and the idea of degrees of freedom. How can we say that the easy problem uses all the degrees of freedom which prevents us from understanding the hard problem when we don’t know the complete truth of the easy problem? Isn’t there a possibility that feelings have a strictly physiological explanation the same way that the easy problem supposedly does? My question is more why do we assume that they are two different problems when we haven’t completely explained either?

SH: Good question. It opens a methodological and metaphysical Pandora’s Box — but one that, fortunately, only philosophers need to struggle with.

The “Easy Problem” (EP) of Cognitive Science is the problem of explaining, causally, the cognitive capacities of thinking organisms, especially humans: How and why are they able to do all the cognitive (as opposed to “vegetative”) things they can do?

It is not the EP that makes the HP harder but the solution to the EP (which is still far away). 

Will the EP ever be solved? Who knows. But there is no more reason to think that the EP cannot be solved than there is for any other normal scientific problem. For the HP, though, there are reasons (what are they?). But those are already what makes the HP hard.

The solution to the EP would (or will) make the HP even <I>harder</I> because it would (or will) exhaust all the causal (empirical) degrees of freedom altogether. Until the EP is solved, there are still things left to be tweaked. “Tweaking” means there are still causal alternatives to try, and to test.

But once the EP is solved, what’s left to try and to test? With the EP already solved, there are still the degrees of freedom of <I>underdetermination</I> available: you have found one solution to the EP, yet there may be other solutions to the EP. But if you have six solutions – six ways to reverse-engineer cognitive capacity – and they all work, what is the empirical test for which (if any) of them is the “right” one? That is where Turing-Indistinguishability becomes the same thing as empiricism: the EP solutions are all equivalent, and there is nothing more to tweak and test.

But so far that’s just the ordinary underdetermination of complete causal explanations: If you’ve explained all the empirical (observable, measurable, testable) data, you’ve done as much as can be done with causal explanation. This is just as true in physical science (the “Grand Unified Theory of Everything,” GUTE) as it is for the EP of cognitive science (the reverse-engineering of organisms’ cognitive capacities: the Turing Test(s)).

The difference between cognitive science and physics, though, is the HP (sentience): How and why do sentient organisms <b>feel</b>, rather than just <b>do</b>? The solution to the EP will have already reverse-engineered cognitive capacity — even if it comes up with 6 equivalent Turing-Indistinguishable EP solutions rather than just one.

Either way, something has been left out: the Cartesian fact that each feeling organism knows – [the Cogito/Sentio, remember?] — which is that they feel. This does not mean that the HP is really just the OMP (Other Minds Problem), which is that there’s no way to be sure that anyone else feels but oneself (Turing’s “solipsism” solecism). That is no more a scientific (or commonsense) problem than underdetermination is (although it is definitely a problem for those nonhuman animals who are sentient, but that humans think [or pretend to think] they aren’t).

Causal explanation (whether it’s reverse-engineering organisms’ cognitive capacities or the universe’s dynamic properties) does not need certainty, any more than categorization (and definition) needs an exhaustive list of category-distinguishing features: they need only enough to get it right, until you need to try and to test more features (sample more of the mushroom island). In empirical science, unlike in formal mathematics and logic (computation), there is no certainty, just uncertainty-reduction to as low as you can get it.

Even T4 doesn’t solve the HP: Even if it turns out that there is some T4 correlate of feeling (say, a chemical in the brain) that is found to be secreted in the brains of only sentient organisms, and only whilst they are feeling something — and it keeps turning out that T3 cannot be passed (nor the EP solved) without at least that T4 chemical — that still does not explain, causally, how and why sentient organisms feel. T4 is, after all, just part of the EP. Correlates can be tweaked and tested, but the arbiter is still only the EP. Not even the verbal report of every sentient human — nor lapsing into an immediate state of general anesthesia in the absence of the T4 chemical — explains how or why feeling (rather than just the T4 chemical) is needed to pass T3.

T4 correlates in EP don’t become causal explanations in HP.

Pondering “consciousness”— but begging the question of sentience

by Stevan Harnad & ChatGPT

Preamble  (Stevan Harnad):

(1) There is a fundamental difference between the “Other Minds Problem (OMP)” (that the only one who can observe that a feeler feels is the feeler) and the “Hard Problem” (HP).

(2) “Consciousness” is a weasel-word; it means too many different things to too many people.

(3) Sentience—the capacity to feel (i.e., “the capacity to be in a state that it feels like something to be in”)—is not a weasel-word. It is the only directly “observable” thing there is.

(4) By definition, all sentient entities can feel. (The obfuscating weasel-version of sentience is “phenomenal consciousness”: “felt feeling.”)

(5) There is only one OMP—but there are many different kinds of feelings (from seeing to hearing to tasting to touching to talking to thinking).

(6) The capacity to feel (anything) is what the comparative study of the evolution of “consciousness” (feeling) is really about. But this concerns not just the OMP (“Which species can feel? What do they feel? How and why did that capacity evolve?”).

(7) The OMP can only be studied through its observable correlates: “What can different species detect and react to?” “What was the origin and adaptive advantage of these observable correlates of sentience?”

(8) But the evolutionary problem of explaining the origin and adaptive advantage of sentience is not just the OMP. It is also the “Hard Problem” (HP) of explaining, causally, how and why the things that different species can detect and react to (i.e., what they can do) are not only detected and reacted to but felt.

Harnad & GPT:

The central difficulty in the Royal Society volume is that almost every chapter, including the editors’ framing introduction, proceeds as if the distinction drawn in these eight points did not exist. The contributors are unanimous in treating sentience as if its existence and causal potency were unproblematic: they take for granted that feeling plays adaptive roles in learning, prediction, decision and coordination. This is surely true. But the challenge is to explain how and why the neural or behavioural mechanisms they describe are felt rather than merely executed (done). The authors treat feeling as though it were just another biological property awaiting the same sort of explanation that feathers or kidneys receive, rather than the anomalous property singled out in point 8. Consequently, the question that the volume endeavours to address — What is the adaptive function of consciousness? — is answered on an operational level only: they explain what organisms can do, not why any of these doings feel like anything.

The editors simply take for granted that a functional role for felt capacities entails that the observable function is eo ipso the cause of the fact that it is felt, rather than just executed. But this merely presumes what is to be explained. It does not show why the functional capacity could not be instantiated by an unfelt mechanism, which is the substance of the Hard Problem. In the eagerness to naturalize consciousness, feeling is treated as if it were self-evidently part of the causal machinery, thereby glossing over the explanatory challenge the editors hope to confront.

The individual chapters adopt the same pattern. When Humphrey distinguishes cognitive from phenomenal consciousness through blindsight, he proposes that phenomenal experience evolved because internalized efference copies make things “matter.” But the argument only redescribes the behavioural consequences of feeling and attaches them to a proposed neural mechanism. It does not explain how efference copies become felt, nor why “mattering” cannot be just functional rather than felt. The distinction between blindsight and sighted vision merely demonstrates different forms of information processing. The transition to felt vision—point 8—is treated as if it somehow came automatically with the functional mechanism. How? Why?

Similarly, Ginsburg and Jablonka (G & J) propose that “unlimited associative learning” (UAL) marks the presence of consciousness and that felt states play a role in “mental selection.” (The “mental” is somewhat redundant: why not “internal selection”?). But again, the fact that an organism learns flexibly and projects goals does not explain how or why such processes are felt. G & J’s marker identifies a behavioural threshold; but the behavioural threshold does not itself entail or explain feeling. In linking UAL to phenomenal consciousness, they rely on the assumption that because flexible learning is sophisticated, it must be accompanied by felt experience. This conflates the OMP with the HP and leaves the causal question untouched.

Moncoucy, Tallon-Baudry and Cleeremans likewise treat phenomenal consciousness as an evolved internal valuation system. The explanatory vocabulary is motivational, computational and behavioural; feeling is assumed to be the medium of valuation, not explained. Their suggestion that pleasure becomes a proximate motivator does nothing to close the gap between reactive behaviour and felt valence. They redescribe the function of hedonic signals, but the hedonicity itself is again taken for granted.

Andrews and Miller propose that sentience evolved to support social coordination. But their argument takes for granted that the social signals in question are felt, and that without felt states the coordination would fail. This again simply takes for granted that felt experience is necessary for the adaptive benefit, but this is exactly what needs to be explained.

Crystal’s treatment of episodic memory repeats the same pattern: because episodic recollection in humans is rich, and because rats exhibit what-where-when integration, rats must likewise feel the temporality of recollection. But no causal explanation is given for how replay mechanisms become felt, nor why replay must be felt to perform its adaptive function.

Tramacere, Chittka, Kohda, Maldarelli & GĂŒntĂŒrkĂŒn, Klein & Barron, Newen & Montemayor, Singer & Damasio all likewise attribute functions—flexibility, prediction-error integration, self-representation, action selection, analogue interoception—to states that they infer to be conscious. But in every case the fact that those states are felt is not explained; it is simply presupposed as part of the function itself. When one extracts the functional core of each proposal, nothing explains why it need be executed non-sentiently. This is the sense in which all these accounts beg the question of point 8.

The cumulative problem is that the authors substitute the study of behavioural and neural correlates—the only tractable approach to the OMP—for the explanation of how or why any of these correlates need be felt. They mistake empirical correlates for causal explanations. This is the confusion between the OMP and HP that the eight points above distinguish and that the editors do not address. The volume is therefore valuable as a survey of what organisms can do, but without a hint of why any organism feels anything in doing it. That question is presupposed at the outset and then buried under functional reinterpretations of what are, in effect, only correlates of sentience.

Fitch, W. T., Allen, C., & Roskies, A. L. (2025). The evolutionary functions of consciousness. Philosophical Transactions of the Royal Society B, 380(1939), 20240299.

Doing and Feeling

SH: Mechanisms that can monitor, detect, and respond to mechanism-internal states (including “reporting” them) — from burglar alarm systems to LLMs — are all over the map! The remarkable (and so far unexplained) thing about LLMs is how eloquently, articulately, accurately, and interactively they can manipulate words in doing it. The opining of philosophers is also all over the map, but neither definitive nor rigorous — especially Ned Block’s, whose distinction between two putative “kinds” of “consciousness,” in BBS in 1995, unfortunately slipped in under my watch (mea culpa)  [Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247] and has since been responsible for sowing three decades of confusion. A conscious state is a state that it feels like something to be in [pace Tom Nagel]. Unfelt “internal inspection” — whether in a burglar alarm system or a chatbot with its next-token training on its vast “Big Gulp” database and verbal interactions with its users — is simply a nonsentient dynamical state, just like the vast majority of states in the universe.

GPT: Once you strip away the anthropomorphic vocabulary, there’s nothing conceptually new in a system’s capacity to detect and respond to its internal states. The thermostat analogy already covers it. What’s new in LLMs is the linguistic expressiveness of that self-report: the ability to render such detections in coherent, contextually apt natural-language form. That linguistic fluency is a function of statistical exposure to words, not of felt awareness. Your framing re-centres the issue where it belongs:

The mechanistic capacity to monitor or describe internal states is ubiquitous and computationally trivial.

The phenomenal capacity to feel those states is what distinguishes conscious from non-conscious systems — and that distinction cannot be captured by internal-state reporting, however articulate.

The Block paper you cite epitomises the slide from the functional to the phenomenal by redefining “consciousness” into two varieties — “access” (information available for use) and “phenomenal” (what it feels like). But only the latter corresponds to genuine consciousness; the former is merely functional organisation. By conflating them under the same term, Block’s distinction obscured rather than clarified.

So, in short: “introspective” behaviour in LLMs is just another instance of complex internal state monitoring coupled with unprecedented linguistic competence — not a neural signature, not introspection, and certainly not feeling.

SH: And that “phenomenal” capacity to feel (once de-weaseled) is what is dubbed the “Hard Problem” of sentience to reverse-engineer and explain — and it is the only “kind” or “variety” of “consciousness” there is. (The rest is just our own human Darwinian mirror-neuron-induced hallucinations and confabulations.)

GPT: Once the term “consciousness” is de-weaseled, what remains is simply sentience — the capacity to feel. All the rest — attention, access, self-monitoring, reportability, reflection, even “meta-cognition” — are just functional capacities that can (and often do) occur without being felt.

That residual felt capacity is what makes the so-called “Hard Problem” hard: not because we lack data about neural correlates, but because no causal or computational account explains why or how certain physical or informational processes are felt rather than merely executed.

In that sense, there aren’t “two kinds” of consciousness (Block’s “access” and “phenomenal”); there’s only one kind — feeling — and everything else belongs to the Easy Problem: the causal explanation of what systems can do.

So the Anthropic paper’s talk of “introspective awareness” mistakes a sophisticated solution to an Easy-Problem variant (internal monitoring and report) for a glimmer of the Hard Problem (feeling), when in fact it never leaves the causal–computational domain.

Doing ——————————————————————————————–Feeling

What Xenophanes told Xenocrates

SH: Is “Do watcha gotta do” a spin-off of Ignatius of Loyola’s dictum?

GPT: That’s an intriguing connection.

Ignatius of Loyola’s dictum — “Age quod agis” (“Do what you are doing”) — enjoins mindfulness and wholehearted attention to one’s present task. It is a call to disciplined focus and sincerity of purpose.

The vernacular “Do watcha gotta do” shares only a superficial resemblance. Its meaning is pragmatic rather than spiritual: “Do what you must” or “Take whatever action is necessary.” It often implies resignation or necessity rather than concentrated intentionality.

So while both expressions urge action, Loyola’s maxim emphasizes deliberate presence, whereas the colloquial phrase emphasizes pragmatic compulsion. If anything, the latter could be viewed as a secular or even fatalistic spin-off — but more by folk coincidence than by descent or conscious echo.

If “Do watcha gotta do” is said to a reluctant executioner — someone torn between moral hesitation and perceived duty — it acquires a tone closer to Loyola’s “Age quod agis.” Not in its Jesuit sense of mindful devotion, but in its existential one: a summons to resolute action in the face of inner conflict.

Both expressions then converge on accepting necessity and acting decisively, though they still diverge in moral register. Loyola’s maxim sanctifies the act by aligning it with divine purpose — “do fully and conscientiously what your role requires, under God.” The street idiom, by contrast, strips away the theology and leaves only the naked imperative: do it, because it must be done.

So in the mouth of one urging a hesitant executioner, the phrase becomes a vulgarized echo of Loyola’s dictum — shorn of sanctity but retaining its austere fatalism.

SH: It all depends on whether you consider sanctity high-mindedness or hocus-pocus, superego or superstition.


GPT: Once the sanctity in Loyola’s maxim is naturalized — read not as divine ordination but as the internalized voice of conscience, discipline, or principle — the distinction between “Age quod agis” and “Do watcha gotta do” collapses further.

The Jesuit’s exhortation and the street injunction become two dialects of the same psychology: each urging the alignment of will with necessity, the quelling of hesitation in the moment of action. The difference then lies only in register and framing — whether one construes that inner imperative as the voice of God, of conscience, or of causal determinism.

So yes, whether one hears it as high-mindedness or hocus pocus depends on one’s metaphysics — or perhaps, more modestly, on which accent of necessity one happens to find most credible.


Motor Theory of Speech Perception and Mirror Neurons: A Review


Background

Motor Theory of Speech Perception (Liberman, Mattingly, et al.) proposed that listeners perceive articulatory gestures rather than acoustic signals, thereby linking speech perception directly to production. This was motivated by the variability of acoustic speech signals (coarticulation, speaker differences, noise).

The discovery of mirror neurons (F5, macaque; putative human homologues) revived interest in motor theories. Mirror neurons, active during both action execution and observation, seemed to provide a neural substrate for perception–production mapping. Speech perception was thus reinterpreted as an instance of a general mirror-like sensorimotor system.

Key Reviews and Critiques

Synthesis

  1. Correlation ≠ necessity. Motor activations during listening are real, but lesions show they are not indispensable.
  2. Modulatory role. Motor involvement may aid perception under noise or ambiguity.
  3. Conceptual gap. Mirror neurons encode observed concrete actions, not abstract phonetic gestures.
  4. Learning vs. innateness. Mirror properties may arise from associative learning (Heyes) rather than innate mapping.
  5. Dual-stream models. Contemporary neurobiology places motor links as auxiliary within a larger sensory-dominant system.

Open Directions

  • Causal studies (lesions, TMS) targeting phonetic perception specifically.
  • Developmental models of infant babbling and sensorimotor coupling.
  • Computational simulations comparing auditory-only vs. motor-augmented recognition.
  • Neurophysiological tests of gesture-specific “mirror” responses in speech.

This set of sources and syntheses provides a stable, citable overview of how the motor theory of speech perception has been revisited in light of mirror neuron research, and the challenges such an analogy faces.


AMM/MAID au Canada : Cadre fédéral et extensions québécoises / Federal Framework and Quebec Extensions



Medical Assistance in Dying (MAID) in Canada: Federal Framework and Quebec’s Advance Request Extensions

Federal Canadian Framework

Canada’s Medical Assistance in Dying legislation, governed by the Criminal Code and administered by Health Canada, establishes a dual-track eligibility system. Track 1 applies to cases where death is reasonably foreseeable, while Track 2 covers situations where death is not reasonably foreseeable but other criteria are met.

Current Eligibility Requirements:

  • Age 18 or older with decision-making capacity
  • Eligibility for publicly funded healthcare
  • Voluntary request without external pressure
  • Informed consent after receiving relevant information
  • Serious and incurable illness, disease, or disability
  • Advanced state of irreversible decline in capability
  • Unbearable physical or psychological suffering

Safeguards include:

  • Two independent medical assessments
  • For Track 2 cases: 90-day reflection period and specialized expertise requirements
  • Mandatory consultation with relevant healthcare professionals

Mental Illness Exclusion:
Eligibility for persons whose sole underlying condition is mental illness has been postponed until March 17, 2027, following legislative delays in 2023 and 2024.

Source: Government of Canada – MAID Legislation

Quebec’s Provincial Extensions

Quebec’s Act Respecting End-of-Life Care expands beyond federal provisions, most notably through advance requests for MAID, which became operational on October 30, 2024.

Advance Request Provisions:
Quebec permits individuals diagnosed with serious and incurable illnesses leading to incapacity (such as neurodegenerative diseases) to make advance requests for MAID while retaining decision-making capacity. These requests specify clinical manifestations that would trigger MAID administration after the person becomes incapable of consent.

Process Requirements:

  • Assistance from qualified medical professional
  • Detailed specification of triggering clinical manifestations
  • Option to designate trusted third parties
  • Formal documentation through notarial act or witnessed form
  • Registration in provincial advance request registry

Jurisdictional Conflict:
Quebec’s advance request framework operates despite the absence of corresponding federal Criminal Code amendments. This creates potential legal exposure for healthcare providers, as MAID delivery based on advance requests remains technically prohibited under federal law.

Sources: Quebec Act Respecting End-of-Life Care | Quebec Government MAID Information

Current Federal Response

In October 2024, federal health and justice ministers announced national consultations on advance requests, scheduled for completion by January 2025. The federal government has stated it will not interfere with Quebec’s implementation while these consultations proceed.

Source: Health Canada Statement on Advance Requests

Statistical Context

According to Health Canada’s Fifth Annual Report, 15,343 MAID provisions were reported in Canada during 2023, representing 4.7% of all deaths. The average age at time of provision was 77.6 years.

Source: Health Canada Annual Report 2023
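The two figures above can be cross-checked against each other: dividing the reported number of MAID provisions by their reported share of all deaths yields the total number of deaths the report implies. A minimal sketch (the two inputs are from the cited report; the back-calculated total is my own arithmetic, not an official Health Canada figure):

```python
# Back-calculating the total 2023 deaths implied by Health Canada's figures.
# Inputs are from the Fifth Annual Report cited above; the derived total is
# an illustrative consistency check, not an official statistic.
maid_provisions = 15_343   # MAID provisions reported in Canada, 2023
share_of_deaths = 0.047    # MAID as a share of all deaths (4.7%)

implied_total_deaths = maid_provisions / share_of_deaths
print(f"Implied total deaths in Canada, 2023: ~{implied_total_deaths:,.0f}")
```

On these inputs the implied total is roughly 326,000 deaths, the right order of magnitude for Canada in 2023, so the two reported figures are mutually consistent.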



For comprehensive information on eligibility criteria and procedures, consult: Health Canada MAID Overview

Conflicts of Vital Interest

Re: Ireland, Tom (2025, September 8). Six-legged suffering. The Biologist.

(1) The evidence of sentience and suffering in all vertebrates is beyond any reasonable doubt.

(2) The evidence of sentience and suffering in some (perhaps eventually all) invertebrates is growing.

(3) The only ethical principle that can be applied is for humans to refrain from causing suffering in sentient organisms wherever it is physically possible, except where it is vitally necessary for human survival and health.

(4) Ethical principle (3) is not yet being applied in the human consumption of (1) even though it is already, and has long been, maximally pressing.

(5) Applying (3) to (2) will probably become more pressing with evidence and time.

(6) Is preoccupation with (5) now likely to delay applying (3) to (1)?