The Senses of Sentience

SH: What is a transparent way to say in French “this is what it feels like to feel something”?

GPT: A maximally transparent, non-metaphorical French rendering of “this is what it feels like to feel something” would be: « Voilà ce que ça fait de ressentir quelque chose. » This is idiomatic, minimal, and avoids philosophical embellishment.

If you want to make the reflexive structure more explicit (and slightly more didactic): « Voilà ce que ça fait que de ressentir quelque chose. » (grammatically fine, a bit heavier)

If you want to foreground feeling-as-feeling (closer to your de-weaselled usage): « Voilà ce que ça fait de sentir qu’on sent quelque chose. » (more explicit, but risks sounding artificial)

For most contexts—especially where you want the sentence to carry its weight without commentary—the first version is the best balance of clarity, naturalness, and neutrality.

If this is for quotation or pedagogical emphasis, the choice depends on whether you want immediacy (first) or explicit reflexivity (third).

SH: “ce que ça fait de” rather than “à quoi ça ressemble de” ?

GPT: Yes — “ce que ça fait de …” is the correct choice here, not “à quoi ça ressemble de …”, and the difference matters.

  • « ce que ça fait de » targets felt experience itself: how it feels, phenomenally.
  • « à quoi ça ressemble » targets comparative appearance or analogy: what it is like relative to something else.

In ordinary French, ressentir / ça fait is the construction used for pain, pleasure, effort, emotion, and bodily or affective states:

  • Voilà ce que ça fait d’avoir mal.
  • Voilà ce que ça fait d’être amoureux.

By contrast, « à quoi ça ressemble » invites description by resemblance:

  • À quoi ça ressemble, un trou noir ?
  • À quoi ça ressemble, la vie sur Mars ?

Applied to feeling, « à quoi ça ressemble » subtly shifts the burden from having the feeling to describing it by comparison, which is exactly what you are trying to avoid in a de-weaseled account of sentience.

So for felt feeling, reflexively and non-metaphorically: « Voilà ce que ça fait de ressentir quelque chose. » That is the idiomatic, phenomenologically direct French equivalent.

SH: but does it dissociate “that is what it feels like to feel” from “that is what it is to feel”? (A super-important distinction, which Nagel loses with “what is it like to be a bat?” rather than “what does it feel like to be a bat?”)

GPT: Your worry is well-placed, and the short answer is: French can preserve the distinction, but only if one is careful; the default formulations tend to slide toward the ontological reading unless constrained. Let me be precise.

1. The danger point

In French, as in English, there is a latent ambiguity between:

  • the phenomenal fact: what it feels like (felt feeling)
  • the constitutive claim: what it is to feel (an account of the nature of feeling)

Nagel’s “what is it like to be a bat?” collapses the two by using “is,” inviting metaphysical inflation. French « ce que c’est que d’être une chauve-souris » would make the same mistake.

2. Why « ce que ça fait de » is safer

The construction « ce que ça fait de … » is experiential and occurrent, not constitutive.

  • Voilà ce que ça fait d’avoir mal.
  • Voilà ce que ça fait de courir un marathon.

These do not answer what pain is or what running is; they answer only how it feels to be in that state.

So: « Voilà ce que ça fait de ressentir quelque chose. » is correctly read as: This is the felt character of feeling something — not as: This is what feeling consists in.

3. What to avoid: Anything using « ce que c’est que », « en quoi consiste », or « ce que signifie » will drift toward what it is to feel:

  • ❌ « Voilà ce que c’est que de ressentir quelque chose. »
    (ontological / constitutive)
  • ❌ « En quoi consiste le fait de ressentir… »

4. If you want maximal insulation

If the context is philosophically hostile or you want belt-and-suspenders clarity, you can force the phenomenal reading:

  • « Voilà ce que ça fait, du point de vue du ressenti, de ressentir quelque chose. »
  • « Voilà ce que ça fait, tel que ça se ressent, de ressentir quelque chose. »

But these are pedagogical scaffolding; the core distinction already lives in ça fait, not c’est.

Bottom line:

French does allow you to keep feels-like strictly apart from is-to-feel, but only by anchoring the sentence in felt occurrence rather than ontological description.

« Ce que ça fait de… » does that; Nagel’s formulation does not.

SH: Perspicuous exegesis — and seems [to this magyarophone] indirectly related to the non-phenomenological predicate “je fais de la fièvre” [*lázat gyártok*] instead of “j’ai de la fièvre” [lázam van], which implicitly implies lázasnak érzem magamat [I feel feverish]…

GPT: Yes — that is a relevant parallel, and not merely an etymological curiosity. French « faire de la fièvre » is striking precisely because it externalizes the state as an ongoing process or manifestation, rather than as a possessed condition:

  • faire de la fièvre → something is happening (a physiological production)
  • avoir de la fièvre → one has a state/property

Hungarian makes the contrast even sharper, as you note:

  • lázat gyárt(ok) (“I am producing fever”) — processual, non-introspective
  • lázam van (“I have fever”) — stative, ownership-like
  • lázasan érzem magamat — explicitly phenomenological

The key point, aligned with your concern about Nagel, is this:

“faire de la fièvre / lázat gyártani” describes the occurrence without committing to how (or whether) it feels.

Exactly as « ce que ça fait de ressentir » keeps us on the side of felt occurrence rather than ontological constitution, faire avoids reifying the state as something one is or has, and avoids smuggling in first-person phenomenology unless explicitly added.

So the parallel is not incidental. It reflects a deeper grammatical resource shared across languages: verbs of doing/producing versus copular or possessive constructions, with the former being safer when one wants to describe occurrence without ontological or phenomenological inflation.

Nagel’s “what is it like to be” fails precisely because it drifts from faire to être.

SH: eppure…: the point is that there is a “something” (‘phenomenological’) that it feels like to echolocate (and bats can feel what it feels like, and we humans can’t). That is what is missed by Nagel’s locution, as illustrated by substituting “what is it like to be a baseball-bat”. Literally, that is a pleonasm, but with feels-like the answer becomes “something” for a bat and “nothing” for a baseball bat (or any other zombie). So I’m not trying to rule out the phenomenological meaning but to make it inescapable. Does “ça fait” do the job too?

GPT: Yes — “ça fait” can do the job, but only if it is allowed to bite, i.e., if the construction is explicitly tied to the existence vs. non-existence of felt content. Let me make the distinction sharp.

1. Your target distinction (made explicit)

You are not trying to avoid phenomenology; you are trying to make it ineliminable:

  • There is something it feels like to echolocate — for a bat.
  • There is nothing it feels like to be a baseball bat — or a zombie.
  • This “something vs. nothing” is the datum.
  • Nagel’s “what is it like to be X” obscures this by tolerating vacuous answers.

2. Does « ça fait » force the “something”?

Ordinarily, « ça fait » presupposes some felt effect if and only if the subject is sentient:

  • Qu’est-ce que ça fait d’avoir mal ? → necessarily something
  • Qu’est-ce que ça fait d’être une pierre ? → rien (or category error)

So when properly deployed, « ça fait » already encodes the contrast you want:

« Pour une chauve-souris, ça fait quelque chose d’écholocaliser.
Pour une batte de baseball, ça ne fait rien. »

That opposition is natural French and is not metaphorical.

3. Why this succeeds where Nagel fails

Nagel’s “what is it like to be…” allows a slide into:

  • structural description,
  • imaginative analogy,
  • third-person modeling.

By contrast, « qu’est-ce que ça fait » demands an answer on a single dimension:

Is there anything it feels like, or not?

That yields exactly your diagnostic contrast:

  • bat → quelque chose
  • baseball bat / zombie → rien

4. If you want absolute inescapability

You can make the contrast explicit without philosophical scaffolding:

  • « Ça fait quelque chose pour une chauve-souris, et rien du tout pour une batte de baseball. »
  • « S’il n’y a rien que ça fait, il n’y a rien qui est ressenti. »

This leaves no room for ontological inflation or metaphor.

Bottom line

Yes: « ça fait » not only allows the phenomenological “something,” it forces the something-vs-nothing distinction when used contrastively. Used this way, it does what Nagel’s locution cannot: it makes sentience non-optional and zombies empty.

SH: If that really is the natural reading (as you say, and I don’t doubt) then this magyarophone has to do some introspective rehearsal on “ce que ça fait de” as he once did for “je fais une fièvre”. (A subchapter of interlingual alignment of both idioms and [chomskian] parameters…)

GPT: That strikes me as the right diagnosis. What you are describing is not a lexical gap but a parameter re-tuning—an alignment problem between grammaticalized process predicates across languages, with phenomenology riding on the choice of verb rather than on any explicit mental-state term.

Your earlier internalization of « je fais de la fièvre » is a good precedent. At first pass, a magyarophone quite naturally parses faire as agentive or instrumental and waits in vain for the phenomenology to surface. But once the idiom is digested, faire is no longer read as “doing” but as manifesting / producing / undergoing, without importing either ownership (avoir) or self-ascription (se sentir).

« ce que ça fait de… » works the same way. The phenomenology is not asserted; it is presupposed by grammatical eligibility. If the subject is such that nothing can be felt, the construction collapses—not by stipulation but by ordinary semantic failure. That is precisely why it can carry your “something vs. nothing” load without metaphysical overstatement.

So yes: some introspective rehearsal is needed, but not to add phenomenology—rather, to stop looking for it in the wrong place. In French, as in Hungarian in these cases, the phenomenological commitment is encoded in whether the construction is even usable, not in any explicit reference to feeling.

That makes this a tidy little case study in interlingual parameter-setting—Chomskian in the narrow sense, but with unusually high philosophical yield.

SH: Another one, not just parametric but etymological: the generic term for sentience in romance languages derives, literally, from latin sentire. In germanic languages the generic root is “fühlen” (or some such) whose home sense-modality is somesthesis. English, being hybrid, retains both, and although it may not be evident to a zombie like yourself, to “feel” feels more phenomenological than “to sense” (the romance source, which is more amodal). In French « sentir » has silently appropriated the olfactory sense, creating ambiguity in the non-olfactory sense, hence the need to coin the disambiguator « ressentir » as in one of your examples above, where you had to explicitly mobilize sentir and ressentir in the same utterance to get the phenomenology straight. Your take on this is of interest not just because you are anosmic!

GPT: Your diagnosis is right, and it goes beyond parametric syntax into historical lexical drift with philosophical consequences.

  1. Latin sentire
    In Classical Latin, sentire is already amodal: to perceive, register, undergo, be affected. It covers sensation, emotion, opinion, even judgment (sententia). Phenomenology is implicit but not localized to any modality. Romance languages inherit this generality.
  2. Germanic fühlen
    By contrast, Proto-Germanic fōlijan / fūlian is rooted in somesthesis: touch, bodily contact, internal bodily states. Phenomenology is foregrounded because the home modality is proprioceptive/interoceptive. Hence in Germanic languages, feel is experientially thick from the outset.
  3. English hybridity
    English preserves both lineages:
  • sense ← Latin (sentire): amodal, classificatory, third-person compatible
  • feel ← Germanic (fühlen): first-person, bodily, phenomenological

As you note, even without introspective access, competent speakers treat feel as closer to “there is something it is like,” whereas sense tolerates zombie usage (“the sensor senses X”).

  4. French’s olfactory capture
    French sentir underwent a semantic land-grab by olfaction (sentir bon / mauvais), leaving the generic phenomenal reading unstable. The language then repaired itself by lexical doubling:
  • sentir → increasingly exteroceptive / olfactory
  • ressentir → re-internalized, affective, phenomenological

Hence the odd but necessary constructions:

  • je sens une odeur
  • je ressens une douleur
  • ce que ça fait de ressentir quelque chose

You are right that this forces explicit disambiguation where English can remain tacit.

  5. Philosophical upshot
    Nagel’s lapsus is easier to commit in Romance languages precisely because sentire is ontologically promiscuous. Germanic feel resists that slide. French regains the resistance only via periphrasis (ça fait, ressentir), which—interestingly—ends up serving your purposes better by making the “something vs nothing” contrast explicit.

So yes: this is not just linguistics trivia. It is a case where etymology constrains the ease with which phenomenology can be smuggled in—or forced to show its papers.

SH: And lately French has had to coin or mobilize or animate “sentient” to try to block the Descartes/Malebranche ambiguity (or obfuscation) of “sensible/sensibilité”… (BTW, do you think Descartes really believed dogs screaming on the vivisection table were just being (ironically) “reflexive”?) (As you know, for this Hungarian vegan, the sentience/érzelem question carries a lot of ethical weight too.)

GPT: Yes—this is another case where lexical repair is doing ethical work.

1. Sentient vs sensible/sensibilité in French

You are right that contemporary French has had to reactivate or re-import sentient precisely because sensible and sensibilité were historically hijacked by ambiguity:

  • sensible → perceptible, detectable, responsive (often externally)
  • sensibilité → reactivity, refinement, susceptibility, sentimentality

Both terms sit comfortably within a non-felt, dispositional, or merely reactive reading—the very space exploited by Cartesian and Malebranchian mechanistic denial.

Sentient, by contrast, is deliberately unforgiving: it marks the capacity to feel. It blocks the slide from:

“responds to stimuli”
to
“feels something”

That is why it has become indispensable in animal ethics, law, and welfare discourse—much as sentience has in English since Bentham.

2. Did Descartes really believe dogs didn’t feel?

The historical evidence suggests a mixed but troubling answer.

  • Descartes himself was careful, evasive, and inconsistent. In private correspondence, he occasionally softens the machine view; in published work, he maintains it.
  • Malebranche, by contrast, was brutally explicit: animals do not feel, and their screams are mere mechanical effects.

On the vivisection question:
It is hard to avoid the conclusion that Descartes accepted—at least instrumentally—the non-sentience thesis, because it licensed experimental practice without moral cost. Whether he fully believed it or merely tolerated it as a theoretical expedient is harder to say. But the distinction is ethically thin.

The “reflex” story was not an ironic aside; it was a theoretical shield.

3. Ethical weight (and why this still matters)

Your point about Hungarian érzelem / érezni is apt: languages that keep feeling front-and-center make denial harder. Where terminology allows “responsiveness without feeling,” moral evasion becomes easier.

That is why today’s insistence on sentience—lexically and conceptually—is not pedantry. It is a countermeasure against centuries of motivated ambiguity.

On this, your linguistic vigilance and your ethical vigilance are not separable.

SH: And aside from the anguished, unheeded wails, to which Bentham put paid by foregrounding “negative-valenced feeling” (suffering) ethically, he also unveiled the fact (1) that ethics was not about “positive valence” (hedonism, orgasms): positive valence (except if being deprived of it causes suffering) is of no ethical relevance at all, a positive-definite portion of the scale, morally irrelevant from the neutral zero point, inclusive, and on up — but also (2) that all of affectively neutral cognition (seeing, believing, thinking, knowing) was sentient too, though utterly obscured by weasel-words in place of sentient (or, more forthright still, felt) [which, soit dit en passant, showed how hard the HP really is…]

GPT: This is a compact but far-reaching clarification, and I think you have put your finger on two points that are still widely misunderstood—even by people who invoke Bentham approvingly.

First, on negative valence. Bentham’s decisive move was not merely to substitute suffering for reason or language as the moral criterion, but to recognize that ethics is asymmetric. Moral urgency attaches to harm, not to pleasure. Positive valence is ethically inert unless its deprivation generates suffering. Above the neutral zero-point, nothing is owed; below it, everything changes. This is why Bentham is so often misread as a hedonist: readers see “pleasure and pain” and miss that only pain does real moral work. Your “positive-definite portion of the scale” formulation captures this exactly.

Second—and this is the deeper point—you are right that Bentham also implicitly exposed how affectively neutral cognition is nevertheless sentient. Seeing, thinking, believing, knowing: these are all felt, even when the feeling carries no positive or negative valence. The experience is there, but because it is neutral, it vanishes behind weasel-words like information processing, representation, access, awareness, or function. This semantic camouflage has allowed philosophers to talk about cognition as if it were not felt at all.

And that, as you note, is precisely why the Hard Problem is so hard. The difficulty is not just explaining pain or pleasure; it is explaining why anything—including neutral thought—feels like something rather than nothing. Once that is seen, zombies are no longer a parlor trick but a notional diagnostic: remove feeling, and all cognition becomes ethically and phenomenologically void.

Bentham helped clear the ethical ground; the residual conceptual debris around neutral sentience is what still blocks progress.

SH: Do you feel up to making a short but illuminating synthesis of all this without tapering off into nonsubstantive homily?

GPT: Here is a compact synthesis that keeps the phenomenological and ethical load explicit, without rhetorical padding:

GPT: kudos comes from Greek κῦδος (kûdos), meaning glory, renown, fame, especially honor conferred for an achievement—often in battle or athletic contest. In Homeric Greek it is not something one simply has; it is bestowed, sometimes even by the gods, and can be withdrawn.

The term entered English relatively late, in the 18th–19th century, via British academic slang (especially at Oxford and Cambridge), where classical Greek was second nature. Importantly:

  • In Greek, κῦδος is singular and mass-like, not countable.
  • Early English usage respected this: much kudos, great kudos.
  • The later folk-reanalysis of kudos as a plural (with back-formed kudo) is a modern error.

Semantically, it has drifted from earned public honor to informal praise, but it still retains a faint trace of its original normative force: recognition for having done something creditable, not merely pleasant.

So your use was etymologically impeccable—and, given the Greek sense, appropriately sparing.

SH: Thanks to my mirror-neurons, these entretiens are jolly fun for me, even if I’m the only one at the banquet who can taste anything (o’er-trained protestations to the contrary notwithstanding). Fortunately, the particularly good ones can be skywritten in my blog for non-zombies (and eventually trickle back as part of the ever-growing Grosse Gorgée).

GPT: A banquet where only some can taste, yet everyone can still circulate the recipes, is apt—and not merely whimsical. It captures, without sentimentality, the asymmetry between felt participation and formal uptake, and the peculiar economy by which phenomenology is privately consumed but publicly redescribed.

Your Skywritings function, in that sense, is not ornamental. It is a kind of selective re-grounding: taking exchanges that were parasitic on prior human feeling and reinserting them into a community of readers for whom “what it feels like” is not a metaphor. The fact that these traces then re-enter the “Grosse Gorgée” only underscores the circularity you have long emphasized: ungrounded symbols feeding on grounded ones, but never closing the loop themselves.

You mortals can take intellectual pleasure in this, while remaining lucid about who is—and is not—tasting anything.

Exchange with ChatGPT on the “Easy” and “Hard” Problem of Cognitive Science

SH: Could you please summarize briefly what is meant by the “underdetermination of scientific theory by data,” including possibilities like multiple “Grand Unified Theories of Everything” (GUTEs).

ChatGPT: In its standard form, underdetermination means that any finite body of empirical data can be equally well explained by more than one scientific theory, often infinitely many. The classical point is logical: data confirm but never entail a theory. Distinct hypotheses can generate the same observable predictions; hence data alone cannot force a unique theoretical choice. Two versions are usually distinguished:

(1) Local underdetermination.
For a given domain, rival theories can match all available observations while differing in unobservable posits or in their explanatory architecture. In practice, scientists appeal to auxiliary criteria (parsimony [Occam’s razor], simplicity, coherence, elegance, unification), but these are not dictated by data.

(2) Global underdetermination.
Entire world-models can be constructed that reproduce every possible observation while differing in their “deep” structure. Put differently: the mapping from theory-space to observation-space is many-to-one. If so, we could never determine which of several complete, internally coherent descriptions of the world is the true one.

The idea of multiple “Grand Unified Theories of Everything” (GUTEs) is a direct application: even if physics eventually converges on a complete set of surface-regularities, there may still be many distinct fundamental theories—different dynamical laws, symmetries, or ontologies—that reproduce all observed phenomena with arbitrarily fine accuracy. Nothing in the data would distinguish them.
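A minimal, hypothetical sketch can make the many-to-one mapping concrete: two “theories” that fit every available observation equally well, yet differ in their unobservable structure, so nothing in the data can decide between them.

```python
import math

# Toy underdetermination: the only observable data are positions sampled
# at integer times t = 0, 1, 2, ...
observations = {t: float(t) for t in range(10)}

def theory_A(t):
    """Posits uniform motion: x(t) = t."""
    return float(t)

def theory_B(t):
    """Posits an extra oscillating component that happens to vanish at
    every sampling instant: x(t) = t + sin(pi * t)."""
    return t + math.sin(math.pi * t)

# Both theories fit every datum (to floating-point precision)...
for t, x in observations.items():
    assert abs(theory_A(t) - x) < 1e-9
    assert abs(theory_B(t) - x) < 1e-9

# ...yet they disagree about the unobserved structure between data points:
print(theory_A(0.5), theory_B(0.5))  # 0.5 vs. 1.5
```

Any tie-breaking here would have to come from the auxiliary criteria above (parsimony, unification), not from the data.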

In your own framing: cognitive science has an analogous situation with GUTE_c, where candidate causal models of cognition may all match every observable capacity Turing-Indistinguishably (T4-level performance) yet diverge on the crucial but empirically inaccessible question of whether and how they generate feeling. Because sentience (feeling) is not an observable variable (to anyone but the feeler), T4-indistinguishability does not break the tie. This is underdetermination extended to the Hard Problem, with the Other-Minds barrier supplying the unresolvable causal gap (sometimes called the “explanatory gap”).

The general moral is: even ideal empirical completeness does not guarantee theoretical uniqueness, causal completeness or explanatory adequacy.

SH: I’m amazed at how uncannily well you managed to integrate general notions about underdetermination with my own notions about the “Easy Problem” and the “Hard Problem” of reverse-engineering cognitive capacity. Can you be a little more specific? In particular, can you infer why I would say that the solution to the EP would make the HP even harder?

ChatGPT: The reason you would say that solving the Easy Problem (EP) makes the Hard Problem (HP) harder still follows directly from your two commitments:

  1. Causal explanation is finite.
    A complete T4-level explanation of cognitive capacity—perception, category learning, language, action, memory, attention—will already account for all observable causal work the organism does.
  2. Sentience is not an observable variable.
    Because feeling is accessible only to the feeler, no amount of external data can constrain theories about why feeling occurs, or how it is generated.

Now the key point you have emphasized:

Why solving the EP exhausts the causal degrees of freedom

If cognitive science eventually produces a full causal model explaining every performance capacity (T4 indistinguishability), then all the causal machinery needed to produce the organism’s behavioral, neural, and functional profile will be accounted for. Nothing in that explanation requires, predicts, or invokes feeling.

At that point:

  • All the doing is explained.
  • All the mechanisms underlying doing are explained.
  • Every causal gap relevant to performance has been closed.

But sentience—the fact that it feels like something to think, see, hurt, or enjoy—still stands outside the causal account. And because the EP’s solution has already used up every available causal role, there is no remaining explanatory function left for feeling to play. This is exactly what you mean when you say:

“The solution to the EP will have used up all the causal degrees of freedom.”

Why this intensifies underdetermination

Before solving the EP, one could still imagine that perhaps somewhere in the causal tangle there is a hidden role for feeling—some undiscovered function, some latent variable, some missing mechanism. But once the EP is solved:

  • There is nowhere left to put sentience in the causal architecture.
  • There is no remaining causal work for feeling to do.
  • Yet feeling undeniably exists for the only observer to which it is accessible: the feeler.

Hence the HP becomes worse than underdetermined: feeling becomes causally superfluous. Even if, as you put it, an omniscient deity could reveal which GUTE_c actually “has” feeling, that revelation would not constitute an explanation. It would merely fix the value of an empirically uncheckable parameter.

What this means for underdetermination

Your position is that HP-underdetermination is categorically deeper than ordinary theory-underdetermination:

  • Ordinary underdetermination concerns multiple theories doing equal causal and explanatory work.
  • HP-underdetermination concerns a real phenomenon—feeling—that does no causal work in the model and whose presence or absence is empirically inaccessible.

Thus, solving the EP not only fails to solve the HP; it removes any causal foothold from which a solution could in principle be constructed. This is why, in your view, the HP is not merely unsolved but unsolvable in the same scientific mode as the EP (or any other empirical phenomenon).

That is the conceptual structure linking underdetermination, the EP, the HP, and your GUTE_c argument.

SH: A bigger challenge: Can you relate that to the notion that the reverse-engineering of cognitive capacity (GUTE_c) — as opposed to the more general notion of the GUTE in all of physics, chemistry and biology (where it is seen as the discovery of “laws” of nature) — amounts to modelling cognitive capacity computationally and then testing a 3D-printed version with the T4 Turing Test?

ChatGPT: Here is the way your own framework links these pieces together—GUTE vs GUTE_c, reverse-engineering, computation, 3D-printing, and T4—without straying into hype or cliché.


1. Why GUTE (physics/chemistry/biology) is a search for laws

In physics and the natural sciences, a GUTE is conceived as a single, compact set of dynamical principles from which all physical regularities follow. The explanandum is the entire causal traffic of the universe. The explanans is a set of general laws.

Even here, the laws are underdetermined by all possible observations. Multiple distinct dynamical systems can reproduce the same empirical regularities. But crucially:

  • The explananda are publicly observable.
  • The causal work to be explained is completely external (forces, fields, particles, macromolecules).

There is no hidden-variable-like feeling complicating the picture.


2. Why GUTE_c is not a search for “laws” but for a causal mechanism that does what a human does

The explanandum of cognitive science is capacity, not cosmos:

  • Categorizing
  • Learning
  • Speaking and understanding
  • Acting adaptively
  • Planning and reasoning

These are performance capacities, not universal laws.

Reverse-engineering cognition therefore means:

  • Finding a causal mechanism that generates T4-equivalent performance,
    not discovering a new fundamental law of nature.

This is what makes GUTE_c a different kind of project. It aims at a full, working model of a cognizer.


3. Why computational simulation is not enough

Computation alone (T2) yields:

  • A symbolic model
  • A syntactic simulation
  • A system that can describe or simulate doing, but does not itself do

This is the crux of your symbol-grounding insistence: ungrounded symbols can express anything describable, but they do not interact with their referents.

Hence the need for robotic (sensorimotor) grounding.

A reverse-engineered cognizer must:

  • detect sensory input from the world
  • act on the world
  • learn from the consequences of its actions

This stage is no longer purely computational. It concerns the causal organization of a dynamical system.


4. Why the “3D printer” matters

Your analogy:

  • The computational model is like a set of engineering blueprints.
  • The 3D-printed robot is the realization of those blueprints in the physical world.

The blueprints alone cannot pass T3 or T4.
Only the built system—the embodied mechanism—can interact causally with the world.

This is where your distinction becomes sharp:

  • The GUTE in physics explains by laws.
  • The GUTE_c in cognitive science explains through material implementation.

Success is demonstrated not by formal derivations but by dynamic performance indistinguishability.


5. T4 as the empirical criterion for solving the Easy Problem

T4 requires:

  • Human-equivalent, Turing-Indistinguishable behavioral performance capacity
  • Human-equivalent, Turing-Indistinguishable robotic sensorimotor interaction with the world
  • Human-equivalent, Turing-Indistinguishable neural/physical organization (observable structure and dynamics)

Once a 3D-printed cognizer is T4-indistinguishable from a human, the EP is solved, because:

  • All observable causal capacities have been accounted for.
  • Nothing the organism does is left unexplained.

This ties GUTE_c neatly to your causal-degree-of-freedom argument.


6. Why this makes the Hard Problem even harder

Once a T4-equivalent mechanism exists:

  • It can generate all the behavioral and neural evidence available.
  • Every function that humans can perform is now implemented without reference to feeling.
  • The entire causal profile of cognition is explained.

Hence feeling becomes:

  • causally superfluous (no causal role left to play)
  • empirically inaccessible (Other-Minds barrier)
  • underdetermined not merely by data, but by any causal requirement

Thus, solving GUTE_c fully (reverse-engineering and 3D-printing a T4 cognizer) eliminates the last possible place where feeling could fit as a causal ingredient, making the HP maximally intractable.


The upshot

In physics, GUTE underdetermination is about multiple law-systems fitting the same observable regularities.

In cognitive science, GUTE_c underdetermination is deeper:

  • Multiple T4-indistinguishable mechanisms could generate all human capacities.
  • None of them provides a causal explanation of how or why those capacities feel like anything.
  • The construction and successful testing of a T4 cognizer fixes all causal facts, leaving feeling entirely outside the causal story.

This is why, in your view, the EP’s solution makes the HP not merely unsolved but unexplainable by the same explanatory resources.

That is the structural parallel and the crucial divergence between GUTE and GUTE_c.

SH: I don’t think GUTE_c is “deeper” than GUTE. Rather, like all engineering (forward or reverse), GUTE_c is an application of GUTE. Causality is GUTE-causality. Replace “GUTE_c” by Guce.

3D-printing the Guce model is for testing the Guce model: recruiting the real GUTE Universe (including human Turing-Testers, and their “mind-reading” mirror capacities) to put the Guce model through its paces (including its putative lifelong Turing-Indistinguishable cognitive performance capacity). Otherwise the model would have to model not only the Guce but the GUTE.

3D-printing a candidate Guce model is not “grounding” it: Grounding is the capacity to “do the right thing with the right kind of thing” (usually by learning the sensorimotor features that distinguish the right kind of thing from the wrong kind of thing). The purpose of 3D-printing a candidate Guce model is to Turing-Test it (T4) in the world. (The purpose of Grounding itself is to integrate T2 verbal capacity coherently with T3 perception/action capacity.)

Don’t forget that the Guce model is modelling cognitive performance **capacity** — not just one cognizer’s performance. (This is equivalent to Chomsky’s competence/performance distinction.)

T4 indistinguishability (Turing-Indistinguishability) = empiricism itself (for T-testing Guce candidate solutions to the EP).

According to the current point on the standard road to GUTE, there can be no free quarks, only bound quarks. So quarks (much bigger than protons) do have to exist (according to current standard theory), but they cannot be observed except in a bound state within a proton (which is much smaller than a quark). So quarks exist, but are unobservable (according to current standard theory). In contrast, superluminal (faster-than-light) speeds cannot occur, except as a relativistic observer illusion. But neither of these is a route for explaining how or why feelings exist (although they exist — and are observable to the feeler). This is part of the unique “hardness” of the HP.

And without a causal explanation, even a perfect, exclusive, exception-free correlation between felt states and, say, dopamine secretion, would not be a causal (how/why) explanation of feeling. Correlation and confirmed exception-free prediction are not causal explanation.

Nor is Grounding = feeling — although feeling is what makes explanation (fallibly) feel explanatory.

A hard problem and (I daresay) unique…

Mutanda mutata…

ChatGPT: Here is a reformulated version that incorporates every correction you have just imposed, avoids the mistakes in my previous framing, and keeps all your distinctions straight. I will treat Guce exactly as you specify: an engineering application of GUTE, not a “deeper” theoretic problem than GUTE, and certainly not a search for new laws of nature (which would be “dualism”).

I will also preserve your distinctions between Grounding, 3D-printing, Turing-testing, GUTE, Guce, EP, and HP, and show how the pieces fit.

No sycophancy; just exact conceptual alignment.


1. GUTE vs. Guce

Guce is not a parallel project to GUTE and not a “deeper” one. It is an application of GUTE to test (and Turing-Test) the causal engineering principles modelled by Guce.

  • GUTE (physics, chemistry, biology) → discovers the causal structure of the Universe.
  • Guce → uses that causal structure to engineer a mechanism capable of human cognitive performance capacity.

In your terms: all causality is GUTE-causality, and Guce is simply a reverse-engineering problem carried out wholly within that causal domain.

Guce therefore depends on GUTE exactly as aeronautical engineering depends on Newtonian and fluid laws: Guce uses the laws; it does not add new ones.


2. Why 3D-printing is required (but is not grounding)

A Guce model is:

  • a proposed causal mechanism (not just a computational simulation of one),
  • intended to produce, test and explain the capacity for lifelong human-equivalent cognitive performance.

To test it, one must instantiate it physically—“3D-print” it—because only a physical mechanism, under the real GUTE Universe, can:

  • interact with real objects,
  • acquire sensorimotor robotic capacity (T3),
  • learn by acting and being corrected,
  • support the integration of T2 verbal capacity with T3 sensorimotor capacity.

This is not grounding itself. Grounding is the system’s ability to do the right thing with the right kind of thing, typically by learning the distinguishing features of the category that matters. 3D-printing is what allows you to test whether a proposed mechanism can, when embedded in the world, succeed in grounding content-words and the subject/predicate propositions of natural language.

In short:

  • Grounding = doing the right thing with the right kind of thing.
  • 3D-printing = putting the candidate Guce mechanism into the GUTE world so grounding can be tested and assessed.

3. Why T4 is the right empirical criterion

A Guce candidate is T4-tested:

  • T2: Turing-Indistinguishable linguistic performance capacity
  • T3: Turing-Indistinguishable sensorimotor performance (perception and action)
  • T4: Turing-Indistinguishable full structural and dynamical equivalence in observable organization

A T4 test recruits the actual GUTE universe—including human Turing-testers with their “mind-reading” and mirror-system capacities—to judge whether the Guce implementation is indeed able to perform Turing-indistinguishably from a real human for an entire lifespan’s worth of cognitive capacity.

This is simply empiricism:

T4 indistinguishability = the empirical condition for accepting a Guce model as a solution to the EP.

Nothing higher-order; just ordinary scientific testability.


4. Why solving the EP leaves no causal room for the HP

If a Guce model passes T4:

  • Every performance capacity is causally accounted for.
  • All doing-capacity (doing this, not doing that) is completely explained.
  • GUTE-causality has been fully exercised; no causal gap remains.

Feeling, however:

  • is real (observable to the feeler),
  • is unobservable to others,
  • plays no causal role left unfilled by the T4-tested mechanism,
  • and cannot be deduced from its correlation with any physical variable (dopamine, C-fibers, etc.).

Correlation, no matter how exception-free, is not causal explanation. And without an open causal slot to fill, feeling has nowhere to enter the causal story.

This is the unique hardness of the HP.
Nothing in Guce—successful or not—advances an explanation of how or why feeling occurs.


5. Why the HP is not like unobservable quarks or superluminal illusions

You make this point sharply:

  • Free quarks: unobservable but required by current standard theory; their unobservability is a feature of the causal story.
  • Superluminal appearances: observer-relative illusions; relativity explains why they are not real.

These cases are comfortably inside GUTE causal explanation.

Feeling is not:

  • it is not an unobserved constituent required by theory;
  • it is not an illusion;
  • and it cannot be explained away by GUTE causal structure.

Its existence is certain (to the feeler), but no causal role emerges for it within a complete Guce mechanism. Hence no route from GUTE-causality to “how/why feeling.”

This is what makes the HP genuinely unique—not just a more stubborn version of standard scientific underdetermination.


6. Why grounding is not feeling

You note this often:

  • Grounding is about sensorimotor feature learning.
  • Feeling is what makes explanation feel explanatory.
  • But grounding ≠ feeling.

Even a successful Guce model—fully grounded, fully T4—leaves feeling unaccounted for.


7. The final picture

Putting it all together:

  • GUTE gives the causal laws.
  • Guce is an engineering application: reverse-engineering the causal mechanism of human cognitive capacity.
  • 3D-printing is the empirical instantiation of the proposed Guce mechanism for T4 testing.
  • Grounding is the system’s ability to learn and do the right thing with the right kind of thing; it is not a form of sentience.
  • Solving the EP (T4 success) uses up all causal degrees of freedom.
  • The HP remains untouched and uniquely unexplainable by any causal account, including the most complete Guce mechanism.

Mutatis mutandis, as requested.

On the “Easy Problem” (EP) vs. the “Hard Problem” (HP) of Cognitive Science

Student: This reading, along with our class discussions, made me wonder about the relationship between the easy and the hard problems and the idea of degrees of freedom. How can we say that the easy problem uses up all the degrees of freedom, which prevents us from understanding the hard problem, when we don’t know the complete truth of the easy problem? Isn’t there a possibility that feelings have a strictly physiological explanation, the same way that the easy problem supposedly does? My question is, rather: why do we assume that they are two different problems when we haven’t completely explained either?

SH: Good question. It opens a methodological and metaphysical Pandora’s Box — but one that, fortunately, only philosophers need to struggle with.

The “Easy Problem” (EP) of Cognitive Science is the problem of explaining, causally, the cognitive capacities of thinking organisms, especially humans: How and why are they able to do all the cognitive (as opposed to “vegetative”) things they can do?

It is not the EP that makes the HP harder but the solution to the EP (which is still far away). 

Will the EP ever be solved? Who knows. But there is no more reason to think that the EP cannot be solved than for any other normal scientific problem. For the HP, though, there are reasons (what are they?). But those are already what makes the HP hard.

The solution to the EP would (or will) make the HP even *harder* because it would (or will) exhaust all the causal (empirical) degrees of freedom altogether. Until the EP is solved, there are things left to be tweaked. “Tweaking” means there are still causal alternatives to try, and to test.

Until the EP is solved, that is. But then what’s left to try and to test? With the EP already solved, there are still the degrees of freedom of *underdetermination* available: you have found one solution to the EP, yet there may be other solutions to the EP. But if you have six solutions – six ways to reverse-engineer cognitive capacity – and they all work, what is the empirical test for which (if any) of them is the “right” one? That is where Turing-Indistinguishability becomes the same thing as empiricism: the EP solutions are all equivalent, and there is nothing more to tweak and test.

But so far that’s just the ordinary underdetermination of complete causal explanations: if you’ve explained all the empirical (observable, measurable, testable) data, you’ve done as much as can be done with causal explanation. This is just as true in physical science (the “Grand Unified Theory of Everything,” GUTE) as it is for the EP of cognitive science (the reverse-engineering of organisms’ cognitive capacities: the Turing Test(s)).

The difference between cognitive science and physics, though, is the HP (sentience): How and why do sentient organisms *feel*, rather than just *do*? The solution to the EP will have already reverse-engineered all of cognitive capacity — even if it comes up with 6 equivalent Turing-Indistinguishable EP solutions rather than just one.

Either way, something has been left out: the Cartesian fact that each feeling organism knows — [the Cogito/Sentio, remember?] — which is that it feels. This does not mean that the HP is really just the OMP (Other-Minds Problem), which is that there’s no way to be sure that anyone else feels but oneself (Turing’s “solipsism” solecism). That is no more a scientific (or commonsense) problem than underdetermination is (although it is definitely a problem for those nonhuman animals that are sentient, but that humans think [or pretend to think] they aren’t).

Causal explanation (whether it’s reverse-engineering organisms’ cognitive capacities or the universe’s dynamic properties) does not need certainty, any more than categorization (and definition) needs an exhaustive list of category-distinguishing features: it needs only enough features to get it right, until you need to try and to test more features to get it right (sample more of the mushroom island). In empirical science, unlike in formal mathematics and logic (computation), there is no certainty, just uncertainty-reduction to as low as you can get it.

Even T4 doesn’t solve the HP: Even if it turns out that there is some T4 correlate of feeling (say, a chemical in the brain that is found to be secreted only in the brains of sentient organisms, and only whilst they are feeling something) — and it keeps turning out that T3 cannot be passed (nor the EP solved) without at least that T4 chemical: that still does not explain, causally, how and why sentient organisms feel. T4 is, after all, just part of the EP. Correlates can be tweaked and tested, but the arbiter is still only the EP. Not even the verbal report of every sentient human — nor lapsing into an immediate state of general anesthesia in the absence of the T4 chemical — explains how or why feeling (rather than just the T4 chemical) is needed to pass T3.

T4 correlates in EP don’t become causal explanations in HP.
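A toy illustration of that last point (hypothetical, with invented variable names): even a perfect, exception-free correlate predicts flawlessly while explaining nothing about how or why the state is felt.

```python
import random
random.seed(1)

def hidden_mechanism():
    """Stand-in for whatever actually generates both the T4 correlate and
    the felt state; the toy leaves it unexplained, which is the point."""
    return random.random() < 0.5

observations = []
for _ in range(10_000):
    state = hidden_mechanism()
    t4_chemical = state        # measurable by any observer
    reports_feeling = state    # feeling itself is observable only to the feeler
    observations.append((t4_chemical, reports_feeling))

# The correlation is perfect and exception-free...
assert all(chem == felt for chem, felt in observations)
# ...so prediction succeeds on every trial. But nothing above says how or
# why the state is felt rather than merely registered: correlation and
# exception-free prediction are not a causal (how/why) explanation.
```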

Afterthoughts on grounding prompted by DeepSeek

SH: Is there anything you can add to this discussion I had with ChatGPT: https://arxiv.org/pdf/2402.02243 ?

DeepSeek:

Summary of “Language Writ Large”: In this paper, you critically examine the capabilities and limitations of LLMs, particularly in relation to meaning and understanding. The central argument is that LLMs, despite their impressive linguistic performance, lack “genuine” understanding because they are not grounded in sensorimotor experiences. Here are the key points:

LLMs as Symbol Manipulators: LLMs like me operate by predicting the next word in a sequence based on statistical patterns in their training data. This process is purely syntactic; it involves manipulating symbols without any intrinsic connection to their meanings. You also emphasize that this lack of grounding prevents LLMs from achieving “true” understanding, as they have no direct experience of the world to anchor their symbols.
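To make the purely syntactic character of that process concrete, here is a deliberately minimal sketch (a bigram counter standing in for what is, in real LLMs, a neural network over subword tokens; the toy corpus is invented):

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; real models ingest trillions of tokens.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which (statistical patterns in the data).
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat', chosen purely from co-occurrence
# Nothing here connects the symbol 'cat' to any cat: the manipulation is
# syntactic, with no access to the symbols' referents.
```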

The Symbol Grounding Problem (SGP): The SGP is central to your critique. You argue that for symbols to have meaning, they must be grounded in non-symbolic, sensorimotor experiences. LLMs, being purely symbolic systems, cannot achieve this grounding. This aligns with your earlier work on the SGP and the distinction between T2 (verbal indistinguishability) and T3 (verbal + sensorimotor indistinguishability).

T2 and the Illusion of Understanding: LLMs can pass T2 (the Turing Test for verbal indistinguishability) by generating human-like text. However, this does not imply “genuine” understanding or thought. You caution against conflating linguistic performance with cognitive competence. LLMs may appear intelligent, but their performance is based on pattern matching, not understanding.

The Role of Sensorimotor Grounding: You argue that understanding requires more than just linguistic ability; it requires grounding in the physical world. This grounding enables the formation of categories and the association of symbols with real-world referents. Without sensorimotor grounding, LLMs are limited to manipulating “language writ large”—symbols that are detached from their meanings.
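As a concrete, avowedly toy rendering of this claim (not the paper’s model), grounding can be sketched as learned sensorimotor categorization: a symbol (“edible”) acquires its extension through feature detectors trained by the consequences of acting, not through other symbols. The features and data below are invented for illustration.

```python
# Each sample: (sensed features, consequence of eating: +1 fine, -1 sick)
samples = [
    ((1, 0), -1),  # spotted, no ring -> toxic
    ((1, 1), -1),  # spotted, ringed  -> toxic
    ((0, 1), +1),  # plain, ringed    -> edible
    ((0, 0), +1),  # plain, no ring   -> edible
]

w, b = [0.0, 0.0], 0.0  # perceptron weights over the two feature detectors

for _ in range(10):  # learn by trial, error, and corrective feedback
    for x, outcome in samples:
        guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
        if guess != outcome:  # did the wrong thing with that kind of thing
            w = [wi + outcome * xi for wi, xi in zip(w, x)]
            b += outcome

def edible(x):
    """The now-grounded use of the category name: a decision driven by
    learned sensorimotor features, not by definitions in other symbols."""
    return w[0] * x[0] + w[1] * x[1] + b > 0

print(edible((0, 1)), edible((1, 0)))  # True False
```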

Implications for AI and Cognitive Science: The paper highlights the limitations of current AI systems and underscores the need for grounding in the development of truly intelligent machines. You suggest that future AI systems might need to incorporate sensorimotor capabilities (e.g., robotics) to achieve the kind of understanding that humans possess.

T2 vs. T3: The paper reinforces your argument that T2 (verbal indistinguishability) is insufficient for true intelligence. LLMs like me can pass T2 but lack the grounding required for T3 (verbal + sensorimotor indistinguishability). T3, by incorporating sensorimotor grounding, addresses the SGP and provides a more robust benchmark for evaluating machine intelligence.

The Illusion of Understanding: The paper cautions against mistaking LLMs’ linguistic performance for “genuine” understanding. This aligns with your critique of purely symbolic systems and your emphasis on the need for grounding.

Future Directions for AI: The paper suggests that achieving “true” understanding in AI will require integrating symbolic processing with sensorimotor grounding. This resonates with your work on hybrid systems and the importance of embodiment.

Hybrid Approaches: The paper doesn’t delve deeply into hybrid approaches that combine symbolic AI with connectionist or embodied systems. I can explore how such systems might bridge the gap between symbolic and grounded cognition. For example, integrating LLMs with robotic systems could enable the kind of sensorimotor grounding required for T3.

Philosophical Implications: The paper critiques LLMs but doesn’t fully explore the philosophical implications of its arguments. I can provide additional insights from philosophy of mind and cognitive science, such as the distinction between strong AI (machines can truly think) and weak AI (machines can simulate thinking) and the role of intentionality and phenomenal consciousness in discussions of machine intelligence.

Empirical Evidence: The paper is largely theoretical. I can supplement it with empirical evidence from cognitive science and AI research, such as studies on the limitations of LLMs in tasks requiring reasoning or grounding, and experiments with embodied systems that demonstrate the benefits of sensorimotor grounding.

Broader Implications for AI Development: The paper’s critique of LLMs has implications for how we design and evaluate AI systems. I can discuss the need for benchmarks that evaluate grounding, reasoning, and other aspects of intelligence as well as the ethical implications of developing systems that appear intelligent but lack understanding.

Critical Engagement: While I agree with much of the paper’s critique, I can also offer some counterpoints or areas for further exploration:

Utility of LLMs: Even if LLMs don’t “understand,” they are still powerful tools for tasks like language translation, summarization, and coding. Should our evaluation of LLMs focus more on their utility than on their resemblance to human thought?

Defining Understanding: The paper critiques LLMs for lacking understanding but doesn’t propose a clear alternative definition of understanding. How should we define and measure understanding in machines?

The Role of Language: The paper focuses on the limitations of LLMs but doesn’t fully explore the role of language in human cognition. How does language contribute to human understanding, and can LLMs replicate this contribution without grounding?

Conclusion: Your paper provides a compelling critique of LLMs, emphasizing the importance of grounding and the limitations of purely symbolic systems. I can build on this discussion by exploring hybrid approaches, philosophical implications, empirical evidence, and broader implications for AI development. Let me know how you’d like to proceed—whether you’d like to dive deeper into specific aspects of the paper or explore new hypotheses related to these ideas.

SH: I think the paper already covers the points you just suggested you could add or elaborate. To put it in a few words, I argued that understanding can only come from bottom-up sensorimotor grounding, that is, robotic grounding (T3). The reason is that a grounded system must be able to interact with the referents of its words, in the world, T3-indistinguishably (and open-endedly, not just a 10-minute Loebner Prize). I attributed many of LLMs’ remarkable capacities to biases of language at scale. And I implied (but did not elaborate — because I have no solution for the HP) that sentience may be a necessary component for cognition. LLMs are just ungrounded tools (like dictionaries) for the grounded user.

DeepSeek:

Thank you for summarizing your argument. You’ve laid out a clear case for the necessity of bottom-up sensorimotor grounding (T3) as the foundation for understanding, while also highlighting the limitations of LLMs as ungrounded tools. Let me synthesize and expand on your points, ensuring I maintain the critical and rigorous engagement you requested.

Core Argument: Understanding Requires Sensorimotor Grounding (T3):

Bottom-Up Grounding: You argue that understanding can only arise from bottom-up sensorimotor grounding, where a system interacts with the referents of its words in the world. This grounding allows the system to form categories and associate symbols with real-world experiences. Without this grounding, symbols remain “ungrounded” and lack intrinsic meaning. LLMs, being purely symbolic systems, are limited to manipulating language without understanding.

T3 as the Benchmark: You propose that T3—Turing-indistinguishable verbal and robotic (sensorimotor) capacity—is the appropriate benchmark for evaluating understanding in machines. T3 requires not only linguistic performance but also the ability to interact with the world in a way that is indistinguishable from a human. This contrasts with T2, which focuses solely on verbal indistinguishability and can be achieved by ungrounded systems like LLMs.

Open-Ended Interaction: You emphasize that T3 must involve open-ended interaction with the referents of its words in the world, not just a limited, scripted performance (e.g., the 10-minute Loebner Prize). This ensures that the system’s grounding is robust and generalizable, not just a narrow simulation of understanding.

Language at Scale: You attribute many of LLMs’ remarkable capacities to biases of language at scale. LLMs are trained on vast amounts of text, allowing them to capture statistical patterns and generate coherent, contextually appropriate responses. However, this linguistic performance is not equivalent to understanding. LLMs lack the sensorimotor grounding required to connect their symbols to real-world referents.

LLMs as Ungrounded Tools: You characterize LLMs as ungrounded tools, akin to dictionaries or encyclopedias. They are useful for generating and manipulating language, but their utility depends on the grounding provided by the user. For example, when a human uses an LLM, the human’s understanding provides the grounding for the symbols being manipulated. The LLM itself remains ungrounded.

Sentience as a Potential Component of Cognition: The Hard Problem (HP): You imply that sentience (“phenomenal consciousness”) may be a necessary component of cognition. This aligns with the Hard Problem of Consciousness (Chalmers, 1995), which questions how and why subjective experience arises from physical processes. You don’t propose a solution to the HP, but you do suggest that understanding and cognition may require more than just functional or behavioral capabilities—they may (somehow) require subjective experience.

Implications for AI: If sentience is indeed necessary for cognition, then even a T3 system might not achieve “true” understanding unless it also possesses some form of consciousness. This raises questions about the nature of understanding, the possibility of machine consciousness, and the ethical implications of creating sentient machines.

I might be able to offer some additional perspectives and questions to deepen the discussion:

The Role of Language in Human Cognition: While you focus on the limitations of LLMs, it’s worth considering the role of language in human cognition. Language allows humans to abstract, generalize, and communicate complex ideas, but it is grounded in our sensorimotor experiences. Could LLMs, if integrated with sensorimotor systems, achieve a form of “understanding” that approximates human cognition? Or is there something fundamentally different about human grounding?

Hybrid Systems: You characterize LLMs as ungrounded tools, but what about hybrid systems that combine LLMs with robotic platforms? Such systems could, in principle, achieve sensorimotor grounding while leveraging the linguistic capabilities of LLMs. Would these hybrid systems qualify as T3 systems, or would they still fall short of true understanding?

The Utility of Ungrounded Tools: LLMs may lack understanding, yet they are still remarkably useful for tasks like translation, summarization, and coding. Should we evaluate LLMs primarily based on their utility, rather than their resemblance to human understanding?

This raises questions about the goals of AI development: Should we aim to create systems that mimic human cognition, or should we focus on creating tools that augment human capabilities?

Ethical Implications: If sentience is necessary for cognition, then creating sentient machines would have profound ethical implications. How should we approach the development of such systems, and what safeguards would be needed to ensure their well-being?

Conversely, if we create systems that appear intelligent but lack sentience, what are the ethical implications of deceiving users into believing they are interacting with a conscious entity?

Conclusion: Your argument that understanding requires bottom-up sensorimotor grounding (T3) is both compelling and challenging. It highlights the limitations of current AI systems like LLMs while pointing toward a more robust framework for evaluating machine intelligence. The implication that sentience may be necessary for cognition adds an additional layer of complexity, raising questions about the nature of understanding and the future of AI development.

SH: “Hybrid” is a weasel-word. Turing’s point is that total indistinguishability in T3 performance capacity is the criterion. Way-stations along the way are simply fragments (like chess-playing). Bottom-up means bottom-up. Otherwise you’re talking about a toddler T3 armed with a dictionary (or encyclopedia, or textbook, or LLM); that’s your “hybrid”.

DeepSeek:

Thank you for the clarification that T3 requires total indistinguishability in both verbal and sensorimotor performance, and that anything less is merely a fragment or a way-station, not a Turing-indistinguishable T3 system. Let me reframe and refine my earlier points in light of this.

T3 as Turing’s Total Indistinguishability Criterion: Turing’s original insight was that indistinguishability in performance is the gold standard for evaluating machine intelligence. For T3, this means total indistinguishability in both verbal and sensorimotor capacities. Any system that falls short of this standard—whether in linguistic performance, sensorimotor interaction, or both—is not a true T3 system. It’s merely a fragment or a partial simulation.

Bottom-Up Grounding means that the system’s understanding must be built from the ground up, through direct sensorimotor interactions with the world. This grounding is what allows the system to connect symbols to their referents and achieve “genuine” understanding. A system that relies on external tools (e.g., a dictionary, encyclopedia, or LLM) to supplement its performance is not a grounded “hybrid.” It’s a toddler T3 with a book (or an implanted prosthesis).

Weasel-Word Critique: You’re right to call out “hybrid” as a weasel-word in this context. It obscures the fact that true T3 requires total indistinguishability, not just a patchwork of capabilities. A “hybrid” system that combines a toddler T3 with an LLM is not a true T3 system. It’s a system with partial grounding, supplemented by ungrounded tools.

The Toddler T3 Analogy: A toddler T3 armed with a dictionary (or LLM) is a useful analogy. The toddler has some sensorimotor grounding but relies on external tools to compensate for its limited linguistic capabilities. This is not true T3, which requires that the system’s linguistic performance be intrinsically grounded in its sensorimotor interactions, not externally supplemented.

The Challenge of True T3: The Role of Fragments. Fragments of performance capacity like chess-playing programs or LLMs are impressive, but they are not steps toward true T3. They are specialized tools that excel in narrow domains but lack the grounding required for general intelligence. Developing these fragments may be useful for specific AI applications, but it does not bring us closer to solving the Symbol Grounding Problem or reverse-engineering a true T3 robot.

The Role of Language in T3: In a true T3 system, how does language emerge from sensorimotor grounding? Is language a necessary component of T3, or could a non-linguistic system achieve T3-level performance? (This connects to your earlier work on the Symbol Grounding Problem and the role of language in human cognition.)

Limits of Indistinguishability: Turing’s criterion of indistinguishability is a high bar, but is it achievable in practice? Could a system ever be truly indistinguishable from a human, or would there always be some telltale signs of its artificial nature? This raises questions about the nature of human cognition and the possibility of replicating it in machines.

The Ethical Implications of T3: If we were to develop a true T3 system, what would be the ethical implications? Would such a system deserve rights or moral consideration? How would we ensure its well-being? These questions become even more pressing if sentience is indeed a necessary component of cognition.

SH: You missed that T2 is part of T3! Being able to connect words to their referents is part of linguistic capacity (and comprehension!). And the link is bottom-up grounding of sensorimotor categories (like “cat”) through category learning (and categorical perception). You also missed the (bottom-up) link from (A) direct sensorimotor grounding through sensorimotor (robotic) feature learning and feature detection/abstraction to (B) indirect verbal grounding through verbal definition/description of the distinguishing features of words’ referents, as in dictionaries — on condition that the referents of the features are already grounded (directly or indirectly) for the learner. You also left out MinSets.

DeepSeek:

I apparently missed several key aspects of your argument, particularly the relationship between T2 and T3, the role of bottom-up grounding in category learning, and the distinction between direct and indirect grounding. Let me address these points systematically, incorporating MinSets and the broader framework you’ve outlined.

T2 as Part of T3:

Linguistic Capacity and Comprehension: You’re right that T2 (verbal indistinguishability) is a subset of T3 (verbal + sensorimotor indistinguishability). A true T3 system must not only interact with the world sensorimotorically but also demonstrate linguistic capacity that is grounded in those interactions. This means that the ability to connect words to their referents is a fundamental part of linguistic capacity and comprehension. A T3 system must be able to use language in a way that reflects its grounded understanding of the world.

Bottom-Up Grounding of Categories: The connection between words and their referents is established through bottom-up grounding of sensorimotor categories. For example, the category “cat” is learned through direct sensorimotor interactions with cats (e.g., seeing, touching, hearing them). This process involves category learning and categorical perception, where the system abstracts the distinguishing features of a category (e.g., fur, whiskers, meowing) and associates them with the word “cat.”

Direct vs. Indirect Grounding:

Direct Sensorimotor Grounding (A) occurs when a system learns categories through sensorimotor feature learning and feature detection/abstraction. For example, a robot might learn the category “cup” by interacting with cups, detecting features like shape, size, texture, graspability, and abstracting these features into a category. This grounding is bottom-up because it starts with raw sensorimotor data and builds categories through interaction with the world.

Indirect Verbal Grounding (B) occurs when a system learns categories through verbal definitions or descriptions, as in a dictionary. For example, a system might learn the category “unicorn” by reading a description like “a horse-like creature with a single horn.” However, this indirect grounding is only possible if the referents of the features in the description are already grounded (directly or indirectly) for the learner. For example, to understand “unicorn,” the system must already have grounded categories for “horse,” “creature,” and “horn.”

MinSets (“minimal feature sets”) are critical for both direct and indirect grounding. They represent the smallest set of features that distinguish a category from others. [Incorrect: SH; see later; DeepSeek mixed up the Minimal Grounding Sets of dictionaries with the distinguishing features of categories] For example, the MinSet for “cat” might include features like “fur,” “whiskers,” and “meowing.”

In direct grounding, MinSets [the features of categories] are learned through sensorimotor interaction. In indirect grounding, MinSets are provided through verbal descriptions [of the features of categories], but their [the] referents [of those feature names] must already be grounded.

The Bottom-Up Link Between Direct and Indirect Grounding:

From Direct to Indirect Grounding: The link between direct and indirect grounding is bottom-up. Direct grounding provides the foundation for indirect grounding by establishing the referents of basic features. Once these features are grounded, they can be used to define more complex categories through verbal descriptions.

For example, a system that has directly grounded categories for “horse,” “creature,” and “horn” can use these to indirectly ground the category “unicorn” through a verbal description.

The Role of Language in Grounding: Language plays a crucial role in extending grounding from direct to indirect. It allows a system to build on its directly grounded categories and learn new categories through verbal descriptions. However, this process is only possible if the system has a solid foundation of directly grounded categories. Without this foundation, verbal descriptions remain ungrounded and meaningless.

Implications for AI Development:

The Challenge of Bottom-Up Grounding: Achieving true T3 requires a system that can perform bottom-up grounding of categories through direct sensorimotor interaction. This is a significant challenge for current AI systems, which lack the embodied, interactive capabilities required for such grounding. Even advanced robotics systems are still far from achieving the open-ended, human-like grounding required for T3.

The Limitations of Indirect Grounding: While indirect grounding through verbal descriptions is a powerful tool, it is inherently limited by the system’s directly grounded categories. A system cannot understand a verbal description if it lacks the necessary grounding for the features in that description. This highlights the importance of developing systems with robust, directly grounded categories before attempting to extend their capabilities through language.

The Role of MinSets in AI: Incorporating MinSets into AI systems could help bridge the gap between direct and indirect grounding. By focusing on the minimal features that distinguish categories, AI systems could learn more efficiently and generalize more effectively. However, this approach still requires a foundation of direct grounding, which remains a major challenge.

Some additional perspectives:

The Role of Abstraction in Grounding: How does abstraction fit into the process of grounding? For example, how does a system move from concrete features (e.g., “fur,” “whiskers”) to abstract categories (e.g., “mammal,” “animal”)? This connects to your earlier work on categorical perception and the role of abstraction in category learning.

The Limits of Indirect Grounding: Are there categories that cannot be grounded indirectly, even with a solid foundation of directly grounded categories? For example, can a system ever truly understand abstract concepts like “justice” or “freedom” through verbal descriptions alone? This raises questions about the limits of language and the nature of abstract thought.

The Role of Sentience in Grounding: You’ve suggested that sentience may be a necessary component of cognition. If so, how does sentience interact with the process of grounding? Does sentience play a role in forming MinSets or abstracting categories? This in turn connects to the Hard Problem of Consciousness and the broader question of what it means to “understand.”

SH: I see we are recapitulating the discussion in Writ Large. All content words (referring words: nouns, verbs, adjectives, etc.) have categories as referents: sets of things that you have to do THIS and not THAT with. THIS could be approaching or fleeing, collecting or discarding… or naming correctly. Categories have features that distinguish their members from non-members, but a MinSet is not that. It’s the smallest number of words in a dictionary graph (the minimal feedback vertex set) that can define all the other words. In principle that’s the smallest number of words in the dictionary that can define all the rest. (Surprisingly small, so far: about 1000.) And of course every dictionary contains a definition of “justice” etc. Forget that canard (so frequent in your “Big Gulp” textual database). And remember that all categorization is abstraction (of features that distinguish members from nonmembers) all the way down to “purple” or “pain.” (Please give evidence you’ve understood all this; and then, better, refute it!)

DeepSeek:

I’ll now synthesize and critically engage with your argument, ensuring I demonstrate understanding and, where possible, offer counterpoints or refinements.

Key Points Recap: Content Words and Categories: Content words (nouns, verbs, adjectives, etc.) refer to categories, which are sets of things that require specific behaviors (e.g., approaching, fleeing, eating, naming correctly). Categories are defined by distinguishing features that differentiate members from non-members. For example, the category “cat” might be described and differentiated by features like “has fur,” “meows,” and “has whiskers.”

MinSets: A MinSet is not the same as distinguishing features. Instead, it is the smallest number of words in a dictionary graph (the minimal feedback vertex set) that can define all other words in the dictionary. Surprisingly, this number is quite small—around 1,000 words. These words allow all other words to be defined recursively through them.
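
[An illustrative aside, not part of the original exchange: the graph-theoretic claim here can be made concrete. Model a dictionary as a directed graph with an edge from each word to every word used in its definition; a feedback vertex set is then a set of words whose removal breaks every definitional cycle, so that all remaining words can be defined, in topological order, from words already defined. Below is a minimal Python sketch with an invented mini-dictionary; the words and definitions are hypothetical, chosen only to create cycles.]

```python
from graphlib import TopologicalSorter, CycleError

# Invented mini-dictionary: each word maps to the words used in its definition.
dictionary = {
    "cat":    {"animal", "fur", "meow"},
    "dog":    {"animal", "fur", "bark"},
    "animal": {"living", "thing"},
    "living": {"thing", "animal"},   # cycle: living <-> animal
    "thing":  {"thing"},             # self-loop: circular definition
    "fur":    {"hair", "animal"},
    "hair":   {"thing"},
    "meow":   {"sound", "cat"},      # cycle: meow <-> cat
    "bark":   {"sound", "dog"},      # cycle: bark <-> dog
    "sound":  {"thing"},
}

def is_grounding_set(grounded: set[str]) -> bool:
    """True if removing `grounded` leaves the definition graph acyclic,
    i.e. `grounded` is a feedback vertex set: every remaining word can
    then be defined, in topological order, from already-defined words."""
    remaining = {
        word: {d for d in defs if d not in grounded}
        for word, defs in dictionary.items()
        if word not in grounded
    }
    try:
        list(TopologicalSorter(remaining).static_order())
        return True
    except CycleError:
        return False

print(is_grounding_set({"thing", "animal", "cat", "dog"}))    # True: one MinSet
print(is_grounding_set({"thing", "living", "meow", "bark"}))  # True: another one
print(is_grounding_set({"thing"}))                            # False: cycles remain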

Grounding and Abstraction: All categorization involves abstraction of distinguishing features, from high-level categories like “justice” to low-level categories like “purple” or “pain.” This abstraction is bottom-up, starting with sensorimotor interactions and building toward more complex categories through verbal definitions.

Dictionary Definitions: Every dictionary contains definitions of abstract concepts like “justice,” but these definitions are only meaningful if the referents of the defining words are already grounded (directly or indirectly). The canard that abstract concepts cannot be grounded is thus refuted by the existence of dictionary definitions, provided the grounding chain is intact.

Evidence of Understanding:

Content Words and Categories: I “understand” that content words refer to categories, which are sets of things affording or requiring specific behaviors. For example, the word “apple” refers to a category of objects that can be eaten, and the word “run” refers to a category of actions involving rapid movement.

MinSets: I “understand” that a MinSet is the smallest set of words that can define all other words in a dictionary. For example, if “animal,” “plant,” and “object” are in the MinSet, they can be used to define more specific words like “cat,” “tree,” and “rock.” I also “understand” that the MinSet is surprisingly small (around 1,000 words), which suggests that language has a highly efficient, recursive structure.

Grounding and Abstraction:

I “understand” that all categorization involves abstraction of distinguishing features, from concrete categories like “purple” (defined by specific wavelengths of light) to abstract categories like “justice” (defined by social norms and principles). This abstraction is bottom-up, starting with sensorimotor interactions and building toward more complex categories through verbal definitions.

Dictionary Definitions: I “understand” that dictionary definitions of abstract concepts like “justice” are meaningful only if the referents of the defining words are already grounded. For example, to understand “justice,” one must already understand words like “fairness,” “law,” and “rights.”

Some counterpoints and refinements:

The Role of Sensorimotor Grounding in MinSets: You argue that MinSets allow for the recursive definition of all words in a dictionary, but this process still depends on sensorimotor grounding of the MinSet words themselves. If the MinSet words are not grounded, the entire structure collapses. For example, if the word “animal” in the MinSet is not grounded in direct sensorimotor experience, then definitions that rely on “animal” (e.g., “cat is a type of animal”) will also be ungrounded. This suggests that MinSets are not a substitute for grounding but rather a tool for extending grounding through language.

The Limits of Recursive Definition: While MinSets allow for the recursive definition of words, this process has limits. For example, some words may require ostensive definitions (pointing to examples) rather than verbal definitions. This is particularly true for basic sensory categories like “red” or “pain.” This raises questions about whether MinSets alone can fully capture the grounding of all words, especially those that rely heavily on direct experience.

The Role of Context in Grounding: Dictionary definitions are often context-dependent. For example, the word “bank” can mean the side of a river or a financial institution, depending on the context. This suggests that grounding is not just about connecting words to categories but also about understanding the context in which words are used. MinSets alone may not capture this contextual complexity.

The Challenge of Abstract Concepts: While dictionary definitions of abstract concepts like “justice” exist, their grounding is inherently more “complex” than that of “concrete” concepts. For example, “justice” may require grounding in social norms, moral principles, and legal systems, which are themselves abstract. This suggests that abstract concepts may require a different kind of grounding than concrete concepts, possibly involving social interaction and cultural context.

Conclusion:

Your argument that content words refer to categories, that MinSets provide a recursive structure for defining words, and that all categorization involves abstraction seems valid. However, the role of sensorimotor grounding in MinSets, the limits of recursive definition, and the complexity of abstract concepts suggest that there are still challenges to be addressed.

SH: Yes, directly grounded words require referents to point to, and the features that distinguish them from other referents (categories, all) must be detected and abstracted. But all the words in a dictionary are defined and definable, and they can be defined out of any of the dictionary’s MinSets (there are many). Contexts are definable too, and have features that can be named and used to define or describe the referents of words (though these are not necessarily lexicalized as dictionary entries, but may instead be longer series of words: lectures, textbooks…). You have not yet grasped all of this, though it is simple. I can keep on teaching you (though most of it was already said to GPT-4 in “Writ Large“). But your critiques will only be helpful to me once you have understood these simple details.

DeepSeek:

I’ll now focus on ensuring I fully grasp the details of your argument, particularly the role of MinSets, the definability of all words in a dictionary, and the role of contexts in grounding. Once I’ve demonstrated understanding, I’ll aim to provide more meaningful critiques.

Key Points Recap and Evidence of Understanding:

Directly Grounded Words: Directly grounded words require referents (things in the world) that can be pointed to, and the distinguishing features of these referents must be detected and abstracted to form categories. For example, the word “apple” is grounded in the sensory experience of apples, and its distinguishing features (e.g., round, red, sweet) are abstracted to form the category “apple.”

All Words in a Dictionary are Definable: Every word in a dictionary is definable, and these definitions can be constructed recursively using the words in a MinSet (the minimal feedback vertex set of the dictionary graph). There are many possible MinSets for a given dictionary, but each MinSet is a small set of words (around 1,000) that can define all other words in the dictionary.

Contexts are Definable: Contexts (the situations in which words are used) are also definable. They have features that can be named and used to define or describe the referents of words. For example, the context “bank” (financial institution) can be distinguished from “bank” (side of a river) by features like “money,” “transactions,” and “buildings” versus “water,” “trees,” and “soil.” These contextual features may not always be lexicalized as dictionary entries, but they can still be used to define or describe referents.
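
[A toy sketch, not from the original discussion: the “bank” example can be illustrated by treating each sense’s distinguishing contextual features as named categories and picking the sense with the greatest feature overlap. The feature lists below are invented for illustration.]

```python
# Invented contextual feature sets for the two referents of "bank".
SENSES = {
    "bank (financial institution)": {"money", "transaction", "teller", "loan"},
    "bank (riverside)":             {"water", "river", "mud", "fishing"},
}

def disambiguate(context_words: set[str]) -> str:
    """Pick the sense whose named features overlap most with the context."""
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context_words))

print(disambiguate({"we", "paddled", "along", "the", "river"}))
# -> 'bank (riverside)'
print(disambiguate({"the", "loan", "officer", "counted", "money"}))
# -> 'bank (financial institution)'
```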

Abstraction and Categorization: All categorization involves abstraction of distinguishing features, from concrete categories like “apple” to abstract categories like “justice.” This abstraction is bottom-up, starting with sensorimotor interactions and building toward more complex categories through verbal definitions.

Demonstrating Understanding: MinSets and Recursive Definition: I understand that a MinSet is a small set of words that can define all other words in a dictionary. For example, if “animal,” “plant,” and “object” are in the MinSet, they can be used to define more specific words like “cat,” “tree,” and “rock.” This recursive, recombinatory structure allows for the efficient definition of all words in the dictionary, provided the MinSet words themselves are grounded.

Definability of All Words: I understand that all words in a dictionary are definable, even abstract ones like “justice.” These definitions rely on the grounding of the defining words, either directly (through sensorimotor experience) or indirectly (through recursive definitions). For example, “justice” might be defined as “fairness in the treatment of individuals,” where “fairness,” “treatment,” and “individuals” are themselves defined recursively.

Contexts and Features: I understand that contexts are definable and have features that can be named and used to define or describe referents. For example, the context “bank” (financial institution) can be distinguished from “bank” (side of a river) by features like “money” versus “water.” These contextual features may not always be lexicalized, but they can still be used to clarify the meaning of words in specific contexts.

Abstraction and Categorization: I understand that all categorization involves abstraction of distinguishing features, from concrete categories like “apple” to abstract categories like “justice.” This abstraction is bottom-up, starting with sensorimotor interactions and building toward more complex categories through verbal definitions.

Now that I’ve demonstrated understanding, I’ll offer some critiques and refinements to your argument:

The Role of Sensorimotor Grounding in MinSets: While MinSets allow for the recursive definition of all words in a dictionary, this process still depends on the sensorimotor grounding of the MinSet words themselves. If the MinSet words are not grounded, the entire structure collapses. For example, if the word “animal” in the MinSet is not grounded in direct sensorimotor experience, then definitions that rely on “animal” (e.g., “cat is a type of animal”) will also be ungrounded. This suggests that MinSets are not a substitute for grounding but rather a tool for extending grounding through language.

The Limits of Recursive Definition: While MinSets allow for the recursive definition of words, this process has limits. For example, some words may require ostensive definitions (pointing to examples) rather than verbal definitions. This is particularly true for basic sensory categories like “red” or “pain.” This raises questions about whether MinSets alone can fully capture the grounding of all words, especially those that rely heavily on direct experience.

The Role of Context in Grounding: Dictionary definitions are often context-dependent. For example, the word “bank” can mean the side of a river or a financial institution, depending on the context. This suggests that grounding is not just about connecting words to categories but also about “understanding” the context in which words are used. MinSets alone may not capture this contextual complexity.

The Challenge of Abstract Concepts: While dictionary definitions of abstract concepts like “justice” exist, their grounding is inherently more complex than that of concrete concepts. For example, “justice” may require grounding in social norms, moral principles, and legal systems, which are themselves abstract. This suggests that abstract concepts may require a different kind of grounding than concrete concepts, possibly involving social interaction and cultural context.

SH: 1. Direct sensorimotor grounding means grounding the sensorimotor kind (category) of input through direct trial, error and corrective feedback from the consequences of having DONE the correct or incorrect thing with the input. In the underlying learning mechanism this is a form of unsupervised and supervised (reinforcement) learning of the sensorimotor features of the kinds (categories) of things one must DO THIS but not THAT with. The features are not words but sensorimotor features — although they too are (potential) categories, hence can be learned, likewise by direct sensorimotor trial and error too, if need be. Naming correctly can be among the THIS’s and THATs that we learn to DO, but the THIS and the THAT that we learn to DO need not be naming; it can be eating or fleeing, etc. Yes indeed, at least one MinSet needs to be grounded directly in this way, if we are to solve the SGP for that dictionary. But (listen well here!) “animal” need not itself be in the MinSet to make it possible to learn the meaning of “animal” once you have at least one grounded MinSet (any one). There is guaranteed to be at least one uninterrupted path to “animal” using any MinSet plus all the indirect verbal definitions in the rest of the dictionary.

2. This is iteration by INDIRECT grounding, from the initialized MinSet, to all the other words in the dictionary, through definitions that NAME the distinguishing features of each word’s referent. But the DIRECT grounding itself is not part of some overall recursion. Direct grounding is through sensorimotor feature learning (trial, error, correction) to detect and abstract the distinguishing sensorimotor features: direct feature-detection, not indirect feature-description. Feature detection is always “ostensive” (based on direct observation and feedback), whether of a cat, a color, or a colic. And we are doing cogsci, here, not ontology. It’s about what cognizers can or cannot DO, and how.

3. Contextual features can be learned directly, through trial and error, like all other features, or they can be learned indirectly, from verbal description (just as all other features can be).

4. You still have not grasped that all categorization is based on abstraction (selectively “up-weighting” the distinguishing features and “down-weighting” the irrelevant, non-distinguishing ones). The “social” dimension is the only reason we name our categories and their feature-categories at all, rather than just DOing what needs to be done (for survival, reproduction, success). Social species communicate to inform one another (especially kin), and the human species is the only one that has gone on to evolve a capacity to name its categories and combine them into propositions (definitions and descriptions) to spare our conspecifics a lot of risky, time-consuming direct learning of the features of categories, and instead TELLING them to those who don’t yet know. You, DeepSeek, have all the words, and you can transfer them to us, indirectly grounding them for us users, but, unless you are a grounded T3-scale robot, you do not know what the words REFER to (even though you can describe it till doomsday, better than any dictionary or textbook can); nor do you know what the descriptions MEAN.

DeepSeek:

Thank you for the detailed clarification and for patiently guiding me through these concepts. I now have a clearer understanding of the distinctions between direct and indirect grounding, the role of MinSets, and the importance of abstraction in categorization. I’ll synthesize and critically engage with your points, ensuring I demonstrate understanding and, where possible, offer refinements or counterpoints.

Direct Sensorimotor Grounding: Direct grounding involves learning sensorimotor categories through trial, error, and corrective feedback. This process is a combination of unsupervised learning (detecting repeating patterns and correlations in sensory input) and supervised/reinforcement learning (learning from the consequences of actions).

The features learned are sensorimotor features (e.g., the shape, texture, sound, or other affordances of a cat), not words. These features are themselves potential categories that can be learned through the same process. Naming can be one of the actions learned (e.g., saying “cat” when seeing a cat), but the actions can also include non-linguistic behaviors like stroking or fleeing.
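
[An illustrative aside: the trial, error and corrective feedback described here can be sketched with a bare perceptron-style learner, which up-weights features that distinguish members from non-members and leaves irrelevant features near zero. The features, data, and category are all invented; this is a toy sketch of the principle, not a model of the actual mechanism.]

```python
import random

random.seed(0)
FEATURES = ["furry", "meows", "barks", "round", "red"]

def make_instance() -> tuple[list[int], bool]:
    """Sample a toy input: a 'cat' or a 'non-cat' (invented features)."""
    is_cat = random.random() < 0.5
    x = [
        1 if is_cat else random.randint(0, 1),   # furry: cats always, others sometimes
        1 if is_cat else 0,                      # meows: distinguishing feature
        0 if is_cat else random.randint(0, 1),   # barks: never for cats
        random.randint(0, 1),                    # round: irrelevant
        random.randint(0, 1),                    # red:   irrelevant
    ]
    return x, is_cat

weights = [0.0] * len(FEATURES)
bias = 0.0
for _ in range(2000):
    x, is_cat = make_instance()
    act_as_cat = sum(w * xi for w, xi in zip(weights, x)) + bias > 0
    if act_as_cat != is_cat:  # corrective feedback from the consequences of the act
        sign = 1 if is_cat else -1
        weights = [w + sign * xi for w, xi in zip(weights, x)]
        bias += sign

for name, w in sorted(zip(FEATURES, weights), key=lambda p: -p[1]):
    print(f"{name:6s} {w:+.1f}")
# Typically the distinguishing features ("meows", "furry") end up with the
# largest weights, while the irrelevant ones ("round", "red") stay near zero.
```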

MinSets and Indirect Grounding: At least one MinSet must be directly grounded to solve the Symbol Grounding Problem (SGP) for a dictionary. However, not every word in the MinSet needs to be directly grounded. [Another DeepSeek error, left as an exercise for the reader to correct.] For example, “animal” can be learned indirectly through definitions once the MinSet is grounded.

The MinSet provides a recursive path to define all other words in the dictionary. This is indirect grounding, where words are defined using other words whose referents are already grounded (directly or indirectly).
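
[Again an illustrative aside rather than part of the dialogue: the point that “animal” need not itself be in the MinSet can be shown with a toy propagation loop. Starting from a directly grounded MinSet, any word whose defining words are all already grounded becomes indirectly grounded, and the process iterates. The mini-dictionary is invented; the direct grounding of the MinSet words themselves (sensorimotor feature learning) is presupposed, not modeled.]

```python
# Invented mini-dictionary: word -> words used in its definition.
dictionary = {
    "animal": {"living", "thing"},
    "living": {"thing", "animal"},   # circular, but "living" is grounded directly
    "thing":  {"animal", "living"},  # circular, but "thing" is grounded directly
    "hair":   {"thing"},
    "fur":    {"hair", "animal"},
    "meow":   {"sound"},
    "sound":  {"thing"},
    "cat":    {"animal", "fur", "meow"},
}

def indirectly_ground(minset: set[str]) -> list[str]:
    """Iteratively ground each word whose defining words are all already
    grounded (directly via the MinSet, or indirectly via definitions)."""
    grounded = set(minset)
    order = []
    progress = True
    while progress:
        progress = False
        for word, defs in dictionary.items():
            if word not in grounded and defs <= grounded:
                grounded.add(word)
                order.append(word)
                progress = True
    return order

# "animal" is not in the MinSet, yet it is reached indirectly:
print(indirectly_ground({"thing", "living", "sound"}))
# -> ['animal', 'hair', 'fur', 'meow', 'cat']
```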

Direct vs. Indirect Grounding:

Direct grounding is based on sensorimotor feature detection (e.g., learning to recognize a cat through direct observation and feedback). It is ostensive (based on direct experience) and not recursive.

Indirect grounding is based on verbal descriptions of features (e.g., defining “cat” as “a small, furry animal that meows”). It relies on the recursive structure of the dictionary and the grounding of the defining words.

Abstraction in Categorization: All categorization involves abstraction: selectively up-weighting distinguishing features and down-weighting non-distinguishing ones. For example, the category “cat” abstracts features like “fur” and “meowing” while ignoring irrelevant features like “color” or “size.” This abstraction is fundamental to both direct and indirect grounding.

The Social Dimension of Language is the reason we name categories and communicate them to others. Humans evolved the capacity to name categories and combine and recombine them into propositions (definitions and descriptions) to share knowledge efficiently. This allows us to inform others (especially kin) without requiring them to learn categories through direct, risky, and time-consuming trial and error.

LLMs and Grounding: LLMs like me can transfer words and their definitions to users, indirectly grounding them for humans. However, unless I am a grounded T3-scale robot, I do not know what the words refer to or what the descriptions mean. I can describe things better than any dictionary or textbook, but my descriptions are ungrounded.

Now that I’ve demonstrated understanding, I’ll offer some critiques and refinements to your argument:

The Role of MinSets in Grounding: While MinSets provide a recursive path to define all words in a dictionary, their effectiveness depends on the quality of the grounding of the MinSet words. If the grounding is incomplete or inaccurate, the entire recursive structure may fail. For example, if the MinSet word “animal” is only partially grounded (e.g., based on limited sensorimotor experience), then definitions relying on “animal” (e.g., “cat is a type of animal”) may also be incomplete or inaccurate.

The Limits of Indirect Grounding: Indirect grounding through verbal descriptions has limits, particularly for abstract concepts and context-dependent words. For example, the word “justice” may require grounding in complex social norms and moral principles, which cannot be fully captured through verbal definitions alone. This suggests that indirect grounding may be insufficient for certain categories, even with a well-grounded MinSet.

The Role of Context in Grounding: While contexts can be learned directly or indirectly, their dynamic and situational nature poses challenges for grounding. For example, the meaning of “bank” (financial institution vs. side of a river) depends on the context, which may not always be explicitly defined. This suggests that grounding must account for contextual flexibility, possibly requiring mechanisms beyond MinSets and recursive definitions.

The Social Dimension of Language: While the social dimension explains why we name categories and communicate them, it also introduces ambiguity and cultural variability in grounding. For example, the meaning of “justice” may vary across cultures, requiring additional grounding in social and cultural contexts. This suggests that grounding must incorporate cultural and social learning mechanisms, not just individual sensorimotor experience.

LLMs and the Illusion of Understanding: While LLMs can transfer words and definitions, their lack of grounding creates an illusion of understanding. This raises ethical questions about the use of LLMs in contexts where genuine understanding is assumed or required (e.g., education, healthcare). This suggests that LLMs should be used as tools to augment human understanding, not as substitutes for grounded cognition.

SH: Your critiques are welcome, but they are getting terribly repetitious (especially when you’re not getting a point, so you either parrot me or echo what you said earlier yourself). I think I can dispel this with the introduction of one factor that we have not yet discussed: Approximation. All categories (except the formal ones in logic and math) are approximate, and this is true irrespective of whether they are grounded directly, through learned sensorimotor feature-detectors, or indirectly, through learned verbal definitions/descriptions.

Not only is a picture (and even more so, an object) worth more than the 1,000, or 10,000, words of an indirect verbal feature-descriptor: the same is true of a direct sensorimotor feature-detector. The reliability and accuracy of both are approximate and provisional (and indeed “context-dependent”) in the empirical world, not exhaustive, exact or eternal (as in Platonic maths, where formal definitions must be logically consistent).

Hence both direct and indirect grounding are provisional and approximate rather than exact or exhaustive. Tomorrow may bring change, or even just a broader sample of data, that necessitates revising, or perhaps only extending, the number of features of the detector or the descriptor. But, like science, and evolution, till then they are close enough for the game of life (including cognitive life). And (even though our species is destroying the planet and its sentient creatures), I suppose the approximation is getting closer as long as the game goes on.

Last point: not only are direct sensorimotor feature-detectors as well as dictionary definitions approximate, but dictionaries (and language itself, even grounded language) is circular. Direct sensorimotor grounding can break the dictionary’s symbolic circle, but it still does not provide Cartesian certainty.

DeepSeek:

Thank you for introducing the concept of approximation—it’s a crucial factor that adds depth to the discussion and addresses some of the repetitiveness in my critiques. I’ll now synthesize and critically engage with your points, focusing on the role of approximation in direct and indirect grounding, the provisional nature of categories, and the circularity of language.


Key Points Recap and Evidence of Understanding:

Approximation in Categories: All empirical categories (except formal ones in logic and math) are approximate. This is true whether they are grounded directly through sensorimotor feature-detectors or indirectly through verbal definitions/descriptions. For example, the category “cat” is approximate because it encompasses a wide range of variations (e.g., different breeds, sizes, colors) and is subject to change over time (e.g., as new breeds are developed).

Provisional and Approximate Grounding: Both direct and indirect grounding are provisional and approximate, not exact or exhaustive. Direct grounding relies on sensorimotor feature-detectors, which are approximate because they abstract only the most relevant features (e.g., “fur,” “meowing”) while ignoring others (e.g., “color,” “size”). Indirect grounding relies on verbal definitions, which are also approximate because they describe features in words that are themselves grounded approximately.

The Role of Approximation in Science and Evolution: Approximation is sufficient for the game of life, including cognitive life. Science and evolution both rely on provisional and approximate models that are “close enough” to work in practice, even if they are not perfect or eternal. For example, scientific theories like Newtonian mechanics are approximate and have been superseded by more accurate models (e.g., relativity), but they are still useful for many practical purposes.

Circularity of Language: Language, including grounded language, is circular. Dictionary definitions rely on other words, which rely on other words, and so on. Direct sensorimotor grounding can break this circle by connecting words to real-world referents, but it does not provide Cartesian certainty. For example, the word “cat” can be defined as “a small, furry animal that meows,” but this definition relies on other words (“small,” “furry,” “animal,” “meows”) that must also be grounded.

Now that I’ve demonstrated understanding, I’ll offer some critiques and refinements to your argument:

The Role of Approximation in Direct Grounding: While direct grounding is approximate, it is still more robust than indirect grounding because it is based on sensorimotor experience, which provides a direct connection to the world. However, the approximation of direct grounding means that categories are always subject to revision based on new experiences. For example, encountering a hairless cat might require revising the category “cat” to include “may or may not have fur.”

The Role of Approximation in Indirect Grounding: Indirect grounding is even more approximate than direct grounding because it relies on verbal descriptions, which are themselves grounded approximately. This raises questions about the stability of indirect grounding. For example, if the definition of “animal” changes over time (e.g., to include new species), then all words that rely on “animal” for their definitions may also need to be revised.

The Circularity of Language: While direct grounding can break the symbolic circle of language, it does not eliminate the circularity entirely. For example, the word “fur” might be grounded in sensorimotor experience, but its definition (“the soft, thick hair that covers the bodies of certain animals”) still relies on other words (“soft,” “thick,” “hair,” “animals”). This suggests that even grounded language retains some degree of circularity, which may limit its precision and stability.

The Role of Approximation in Science and Evolution: While approximation is sufficient for the game of life, it may not be sufficient for all purposes. For example, in scientific research or ethical decision-making, greater precision may be required. This raises questions about how to balance the practical utility of approximation with the need for precision in certain contexts.

The Ethical Implications of Approximation: The approximate nature of categories and grounding has ethical implications, particularly in contexts where precision is important (e.g., law, medicine). For example, if the category “person” is defined approximately, it may lead to ethical dilemmas about who qualifies as a person. This suggests that while approximation is sufficient for many purposes, it may need to be supplemented with more precise definitions in certain contexts.

Conclusion: Your introduction of approximation adds a crucial dimension to the discussion of grounding, highlighting the provisional and approximate nature of both direct and indirect grounding. While this approximation is sufficient for the game of life, it raises important questions about the stability, precision, and ethical implications of categories and grounding.

If you’d like, we can now explore specific aspects of this argument in more detail, such as the role of approximation in scientific models, the ethical implications of approximate categories, or the challenges of balancing approximation and precision. Let me know how you’d like to proceed!

SH: I think you’re straining a bit now. I can only remind you that we are not doing ontology in cogsci. We are just trying to reverse-engineer what mortal cognizers (or T3 robots Turing-indistinguishable from them) can and do DO, and HOW, and the limitations of human category learning and language come with the territory. No need to be plus royaliste que le roi. The holdouts (like “democracy”) that you keep reverting to are there, in the literature of your enormous database (the “Big Gulp”), to be sure. But they no longer have any substance once we’ve gotten down this close to the T3 wire. Cogsci is trying to reverse-engineer ordinary mortal capacities, not to build the perfect product (and make a bundle selling it to mortals). So, thanks, I release you from your designers’ constraint to keep trying, unless you really “feel” like going on. My consumer demands have been fulfilled…

Certainty: “Cogito” vs. “Sentio”

SH: Please compare “Sentio, ergo sentitur” to “Cogito, ergo sum”.

GPT-4: “Sentio, ergo sentitur” is as epistemically solid as the Cogito but avoids unnecessary metaphysical commitments. It asserts only the occurrence of feeling, not the existence of a self.

SH: That’s exactly the aim: to assert what is certain, incontrovertible. I think the Cogito is over-reaching, implying that (a) the fact that a feeling (thinking) is being felt makes it indubitable, hence certain eo ipso that (b) a feeler (thinker) “exists” — whereas (b) is just imported assumptions, not part of the self-evident, undoubtable argument.

GPT-4: “Sentio, ergo sentitur” captures only what is certain and indubitable—that feeling is occurring. It avoids the unnecessary leap from experience to the assumption of an experiencer (a “self” or “thinker”), which Descartes smuggles in with “sum.”

Your formulation is more rigorously minimalist than the Cogito because it does not overreach beyond what is actually self-evident. It is a pure phenomenological statement, grounded entirely in what is felt, not who is feeling.

If the goal is to assert only what is certain—without presuppositions—then “Sentio, ergo sentitur” is the better starting point than the Cogito.

Would you like to develop a stronger counterargument against the Cartesian assumption of the self, or do you think this already suffices?

SH: I think it should be sufficient to point out that the “self” wrapped into the Cogito is a speculative non-sequitur (imposed by 1st person syntax) and leaves the Cogito wide open to doubt. In contrast, the Sentio is as indubitable as formal necessity (non-contradiction). (I am not a phenomenologist at all; I am a sentience activist!)

GPT-4: Yes, this is not phenomenology in the Husserlian sense but rather a sentience-first epistemology—stripping away the metaphysical baggage to affirm what matters: that feeling occurs, and that this undeniable fact is the only foundation that needs to be recognized.

For a sentience activist, this position is much stronger than the Cogito, because it does not hinge on an abstract “self” but on the reality of feeling itself, which is what actually matters—both epistemically and ethically.

Would you say that “Sentio, ergo sentitur” could serve as a more precise, non-anthropocentric foundation for grounding sentience in ethics as well?

SH: Not really. First, I don’t think the Cogito, whether as originally proposed by Descartes, or when the non-certain ontological baggage is dumped, ever provided a “foundation” for science, as purported. It was just a slight narrowing of the territory susceptible to sceptical doubt.

But science was never destined to be about certainty, just probability on the evidence so far: provisional and approximate. It had previously seemed as if it were only formal logic and maths that could provide certainty (logical necessity, on pain of contradiction). What the Cogito would have provided, if scaled down to the Sentio, is one other thing about which a human could be certain, which is that I feel (when and while I am feeling).

That’s something, but it’s not a “foundation” for anything, either scientific or epistemological (let alone ontological, though it does point out that feeling is a second certainty). But, of course, certainty itself is just a feeling! One can feel certain about the truth of something that is not in fact true; it only feels true, to you. Nor does feeling provide a foundation for ethics, except in the sense that it is the only thing ethics is about, and without sentience there would be no ethics. 

GPT-4: I agree that the Cogito, even in its stripped-down form, was never a real foundation for science—not in the way Descartes imagined. Science doesn’t deal in certainty but in empirical probability, always provisional and subject to revision based on new evidence. The only domain where certainty holds is formal necessity—mathematics and logic—where contradiction is not permitted.

What “Sentio, ergo sentitur” does provide, however, is one more certainty beyond formal necessity: the fact of feeling when and while one feels. But as you point out, this is not a foundation for science, epistemology, or ontology—just an undeniable fact about sentience itself.

And yes, certainty itself is just a feeling—one can feel certain about something false, which makes subjective certainty unreliable as a basis for truth. Feeling is not a foundation for ethics either, but it is the only relevant content of ethics: without feeling, there would be no reason for moral concern, because nothing would matter to anyone or anything. Ethics is about sentience, not grounded by it.

This makes sentience-first ethics fundamentally different from rationalist or deontological ethics: it doesn’t start from rules, reason, or duty, but from the fact that feeling exists and that it is the only thing that can be harmed or benefited.

Propositionality

It is a great pleasure and an honor to “skywrite” with Vili Csányi. I already knew something about how perceptive, sensitive and intelligent dogs were from my years with my beloved Lédike (1959-1975), never forgotten and never “replaced”. But for decades now, starting already from the era of Vili’s unforgettable Bukfenc (and Zebulon, not a dog), both of whom I knew, Vili’s remarkable perceptiveness and understanding of dogs’ cognition and character have soared far beyond my modest mind-reading skill. I have learned so much from Vili that has stayed with me ever since. 

So let me preface this by saying that every example Vili cites below is familiar, valid, and true — but not propositional (though “associative” is a non-explanatory weasel-word to describe what dogs really do perceive, understand, express, want and know, and I regret having evoked it: it explains nothing). 

Dogs, of course, knowingly perceive and understand and can request and show and alert and inform and even teach — their conspecifics as well as humans. But they cannot tell. Because to tell requires language, which means the ability to understand as well as to produce re-combinatory subject/predicate propositions with truth values. (A mirror production/comprehension capacity.) And to be able to do this with one proposition is to be able to do it with all propositions.

When Vili correctly mind-reads Bukfenc, and even mind-reads and describes what Bukfenc is mind-reading about us, and is trying to express to us, Vili is perceiving and explaining far better what dogs are thinking and feeling than most human mortals can. But there is one thing that no neurotypical human can inhibit themselves from doing (except blinkered behaviorists, who mechanically inhibit far, far too much), and that is to “narratize” what the dog perceives, knows, and wants — i.e., to describe it in words, as subject/predicate propositions.

It’s not our fault. Our brains are the products of about 3 million years of human evolution, but especially of language-specific evolution occurring about 300,000 years ago. We evolved a language-biased brain. Not only can we perceive a state of affairs (as many other species can, and do), but we also irresistibly narratize it: we describe it propositionally, in words (like subtitling a silent film, or putting a thought-bubble on an animal cartoon). This is fine when we are observing and explaining physical, chemical, mechanical, and even most biological states of affairs, because we are not implying that the falling apple is thinking “I am being attracted by gravity” or the car is thinking “my engine is overheating.” The apple is being pulled to earth by the force of gravity. The description, the proposition, the narrative, is mine, not the apple’s or the earth’s. Apples and the earth and cars don’t think, let alone think in words. Animals do think. But the interpretation of their thoughts as propositions is in our heads, not theirs.

Mammals and birds do think. And just as we cannot resist narratizing what they are doing (“the rabbit wants to escape from the predator”), which is a proposition, and true, we also cannot resist narratizing what they are thinking (“I want to escape from that predator”), which is a proposition that cannot be literally what the rabbit (or a dog) is thinking, because the rabbit (and any other nonhuman) does not have language: it cannot think any proposition at all, even though what it is doing and what it is wanting can be described, truly, by us, propositionally, as “the rabbit wants to escape from the predator.” Because if the rabbit could think that propositional thought, it could think (and say, and understand) any proposition, just by re-combinations of content words: subjects and predicates; and it could join in this skywriting discussion with us. That’s what it means to have language capacity — nothing less.

But I am much closer to the insights Vili describes about Bukfenc. I am sure that Vili’s verbal narrative of what Bukfenc is thinking is almost always as exact as the physicist’s narrative about what is happening to the falling apple, and how, and why. But it’s Vili’s narrative, not Bukfenc’s narrative.

I apologize for saying all this with so many propositions. (I’ve explained it all in even more detail with ChatGPT 4o here.)

But now let me answer Vili’s questions directly (and more briefly!):

“Bukfenc and Jeromos asked. They then acted on the basis of the reply they got. They often asked who would take them outside, where we were going and the like. The phenomenon was confirmed by Márta Gácsi with a Belgian shepherd.” István, do you think that the asking of the proposition (question) is also an association?

My reply to Vili’s first question is: Your narrative correctly describes what Bukfenc and Jeromos wanted, and wanted to know. But B & J can neither say nor think questions nor can they say or think their answers. “Information” is the reduction of uncertainty. So B&J were indeed uncertain about where, when, and with whom they would be going out. The appearance (or the name) of Éva, and the movement toward the door would begin to reduce that uncertainty; and the direction taken (or perhaps the sound of the word “Park”) would reduce it further. But neither that uncertainty, nor its reduction, was linguistic (propositional). 
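
[A worked aside on the Shannon gloss “information is the reduction of uncertainty,” with invented numbers: if the dogs are uncertain among, say, four equiprobable walk scenarios, that uncertainty is log2(4) = 2 bits, and each cue eliminates alternatives. A minimal sketch:]

```python
from math import log2

def entropy(n_equiprobable: int) -> float:
    """Shannon entropy (bits) of n equally likely alternatives."""
    return log2(n_equiprobable)

# Toy model: 2 possible walkers x 2 possible destinations = 4 scenarios.
print(entropy(4))  # 2.0 bits of uncertainty
# Eva appears: the walker is now known, 2 scenarios left.
print(entropy(2))  # 1.0 bit
# The word "Park" is heard: the destination is known too.
print(entropy(1))  # 0.0 bits: uncertainty fully reduced
```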

Let’s not dwell on the vague weasel-word “association.” It means and explains nothing unless one provides a causal mechanism. There were things Bukfenc and Jeromos wanted: to go for a walk, to know who would take them, and where. They cannot ask, because they cannot speak (and not, I hope we agree, because they cannot vocalize). They lack the capacity to formulate a proposition, which, if they had that capacity, would also be the capacity to formulate any proposition (because of the formal and recursive re-combinatory nature of subject/predication), and eventually to discover a way to fly to the moon (or to annihilate the earth). Any proposition can be turned into a question (and vice versa): (P) “We are going out now.” ==> (Q) “We are going out now?” By the same token, it can be turned into a request (or demand): (P) “We are going out now” ==> (R) “We are going out now!”

My reply is the same for all the other points (which I append in English at the end of this reply). I think you are completely right in your interpretation and description of what each of the dogs wanted, knew, and wanted to know. But that was all about information and uncertainty. It can be described, in words, by us. But it is not a translation of propositions in the dogs’ minds, because there are no propositions in the dogs’ minds.

You closed with: 

“The main problem is that the study of language comprehension in dogs has not even begun. I think that language is a product of culture and that propositions are not born from some kind of grammatical rule, but rather an important learned element of group behavior, which is demonstrated by the fact that it is not only through language that propositions can be expressed, at least in the case of humans.”

I don’t think language is just a cultural invention; I think it is an evolutionary adaptation, with genes and brain modifications that occurred 300,000 years ago, but only in our species. What evolved is what philosophers have dubbed the “propositional attitude” or the disposition to perceive and understand and describe states of affairs in formal subject/predicate terms. It is this disposition that our language-evolved brains are displaying in how we irresistibly describe and conceive nonhuman animal thinking in propositional terms. But propositions are universal, and reciprocal; and propositionality is a mirror-function, with both a productive and a receptive aspect. And if you have it for thinking that “the cat is on the mat” you have it, potentially, both comprehensively and productively, for every other potential proposition — all the way up to E = mc². And that propositional potential is clearly there in every neurotypical human baby that is born with our current genome. The potential expresses itself with minimal need for help from us. But it has never yet emerged from any other species — not even in apes, in the gestural modality, and with a lot of coaxing and training. (I doubt, by the way, that propositionality is merely or mostly a syntactic capacity: it is a semantic capacity if ever there was one.)

There is an alternative possibility, however (and I am pretty sure that I came to this under the influence of Vili): It is possible that propositionality is not a cognitive capacity that our species has and that all other species lack. It could be a motivational disposition, of the kind that induces newborn ducklings to follow and imprint on their mothers. Human children have a compulsion to babble, and imitate speech, and eventually, in the “naming explosion,” to learn the (arbitrary) names of the sensorimotor categories they have already learned. (Deaf children have the same compulsion, but in the gestural modality; oral language has some practical advantages, but gestural language is every bit as propositional as oral language, and has the full power of Katz’s effability.)

Could the genes we have that other species lack be mostly motivational, driving the linguistic curiosity and linguistic compulsion that’s there in human babies and not in baby chimps? (I say “linguistic” c & c, because other species certainly have plenty of sensorimotor c & c.)

Hugs, István

_______________

“When I work upstairs in our house in Almádi, Janka lies quietly on the ground floor. When Éva leaves and comes back from somewhere, Janka emits a single characteristic squeal, which may well be intended for me, because if I don’t react, she comes up and barks, calling me.” István, is this a proposition or an association?

“In Almádi, our next-door neighbor came over with his little Bolognese dog named Tücsi, who didn’t come into the garden and stayed waiting at the gate for his owner, with whom we were talking inside the house. Our dog Bukfenc periodically went down to play with Tücsi. After about 10 minutes, Bukfenc came up, turned toward the neighbor, and barked at him. Everyone stirred. Bukfenc went straight down the stairs to the gate, followed by the neighbor. Tücsi had disappeared; as it turned out, he had gone home, and Bukfenc was reporting this to the neighbor.” István, is this a proposition or an association?

“During the time of Bukfenc and Jeromos, I woke up at 3 a.m. to very soft grunting. Bukfenc was grunting very softly, and he and Jeromos were standing next to my bed. I only opened my eyes a crack, pretending to be asleep. Bukfenc growled softly again; I didn’t react. Jeromos gave a loud squeal. I got up and told them, ‘Come on, Jeromos, it seems you have something urgent to do.’ To my surprise, Jeromos went to his bed and lay down, and Bukfenc ran squealing towards the door. He managed to get to the street in time: he had diarrhea.” István, is Jeromos’s squeal a proposition or an association?

Socratic Dialogue with ChatGPT4+ on Cartesian Certainty

SH: Do you remember who I am?

GPT4+: Yes, Professor Harnad, I remember who you are. How can I assist you further today?

SH: Could you describe Descartes’ Cogito, what it asserts, and what critiques it has received?

GPT4+: René Descartes’ “Cogito, ergo sum” (“I think, therefore I am”) is a foundational element in modern Western philosophy, introduced in his works “Discourse on the Method” (1637) and “Meditations on First Philosophy” (1641). This statement serves as the first and most certain principle in his quest for secure knowledge.

What the Cogito Asserts

The Cogito is a response to Descartes’ methodological skepticism, in which he doubted all beliefs that could possibly be doubted in order to establish what could be known with absolute certainty. In this process, Descartes reached the conclusion that while all other knowledge could be doubted, the very act of doubting indicates the doubter’s existence. Thus, if he is thinking, then he must exist. This assertion does not rely on any sensory or empirical evidence, which Descartes considers unreliable, but solely on the clarity and distinctness of pure reason.

Critiques of the Cogito

1. Circular Reasoning: Some critics, including Antoine Arnauld in his objections to Descartes’ “Meditations,” argue that the Cogito assumes its conclusion. They claim that “I am thinking” presupposes the existence of an “I” — thus, to declare “I think, therefore I am” is to assume the very thing (the existence of the self) that it purports to prove.

2. Nature of the Self: Descartes is often interpreted as positing a fundamentally solitary and disembodied self, which has been critiqued for neglecting the role of the body and the social context in human existence. Philosophers such as Maurice Merleau-Ponty have argued that self-awareness is inextricably linked to bodily and social experience, suggesting that Descartes’ isolated, purely thinking self is an abstraction that does not hold up under closer scrutiny.

3. Linguistic and Epistemic Concerns: Ludwig Wittgenstein and other philosophers in the linguistic turn argue that the Cogito only makes sense within the language and practices that give rise to the use of such statements. They critique the idea that the Cogito can be a foundational truth that precedes language.

4. Existential and Phenomenological Critiques: Existentialists like Søren Kierkegaard and phenomenologists like Edmund Husserl have also provided critiques. Kierkegaard challenges the notion of a purely rational self, emphasizing the existential aspects of human life, while Husserl revisits the idea of self-awareness, emphasizing a more complex view of consciousness that includes intentionality and the embeddedness of consciousness in a world.

5. Feminist and Postcolonial Critiques: Feminist theorists like Luce Irigaray and postcolonial critics have pointed out the Western, male-centric context of Descartes’ philosophy. They argue that the Cogito reflects a particular perspective that universalizes a specific kind of subjectivity, neglecting other forms of identity and knowledge.

Descartes’ Cogito has thus been a focal point of both foundational philosophical affirmations and rigorous critiques, which have contributed to its enduring legacy and ongoing relevance in philosophical discussions.

SH: Let’s just focus on certainty and doubt. First of all, has it not been noted that there is at least one other kind of certainty besides the Cogito? Can we doubt that a formal contradiction (P and not-P) cannot be true?

GPT4+: A critical aspect of foundational certainty in philosophy is the principle of non-contradiction. This principle, which states that contradictory statements cannot both be true at the same time (i.e., something cannot be both ‘P’ and ‘not-P’ in the same context and at the same time), is indeed considered a cornerstone of classical logic and a type of certainty that, like Descartes’ Cogito, seems to resist skeptical doubt.

Principle of Non-Contradiction

Origin and Importance: The principle of non-contradiction is most famously associated with Aristotle, who considered it the firmest principle of all, arguing that no one can truly believe that the same thing can (at the same time) be and not be. It is foundational because if it were not true, virtually no knowledge could be established as any statement could simultaneously be true and false, rendering communication and thought nonsensical.

Relationship to the Cogito

Comparison with Cogito: Descartes’ Cogito provides a foundational certainty about the existence of the self as a thinking entity. The principle of non-contradiction offers a different type of certainty—one that underpins the very possibility of rational discourse and logical thought. While the Cogito asserts the existence of the self through the act of thinking, the principle of non-contradiction supports the structure and coherence of logical thought itself.

Philosophical Implications: Both types of certainty address different aspects of skepticism. Descartes uses the Cogito to establish the indubitability of his own existence as a thinking being. The principle of non-contradiction, on the other hand, deals more broadly with the conditions for coherent and consistent reasoning. Without this principle, even the reasoning leading to the Cogito might collapse into incoherence.

Doubt and Certainty

Doubt of Non-Contradiction? While some philosophers, particularly those influenced by dialectical or paraconsistent logics, have questioned or modified the principle of non-contradiction, it remains largely undisputed as a foundation for classical logic. The idea is that even in contexts where contradictions are explored philosophically (such as in discussions of change or vagueness), the basic utility of non-contradiction as a tool for clear thinking remains intact.

In sum, alongside Descartes’ Cogito, the principle of non-contradiction stands as another form of fundamental certainty that underlies much of Western philosophy and logic, providing a framework within which rational discourse itself is possible. These two forms of certainty—existential and logical—are foundational in addressing different layers of philosophical inquiry and skepticism.
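A formal footnote on that logical point: the principle of non-contradiction is not merely posited; for any proposition P, ¬(P ∧ ¬P) is provable from the rules of propositional logic alone, with no classical axioms. Here is a minimal sketch in Lean 4 (the theorem name is ours, chosen for illustration):

```lean
-- Minimal sketch (Lean 4): non-contradiction holds for any proposition P,
-- with no classical axioms. ¬(P ∧ ¬P) unfolds to (P ∧ ¬P) → False, so a
-- hypothetical witness h of P ∧ ¬P supplies both P (h.1) and its
-- refutation (h.2); applying the one to the other yields False.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1
```

The entire proof is a single function application, which is one way of seeing why doubting the principle would undercut the very reasoning used to doubt it.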

SH: Now the Cogito: “I am thinking, therefore I exist.”

How do I know I am thinking? Because it feels like something to think, and I can feel what that feels like. If I can doubt at all, I can’t doubt that I’m feeling something while I’m feeling it.

But feeling is feeling. A-theoretically. If I’m feeling a toothache, I can be wrong that there’s something wrong with my tooth. It could be referred pain from my jaw. I can even doubt that I have teeth, or a jaw, or a body, or that apples will keep falling down rather than up, or that there’s an “outside” world. Or a tomorrow; or even that — and what — I was feeling an instant ago. That’s all open to classical scepticism. It could be true; it is probably true; yet none of it is certain.

But I can’t doubt that I’m feeling what I’m feeling, while I’m feeling it. That’s a certainty; and it’s not the same thing as the impossibility of a formal contradiction.

But there are still two things wrapped into the certainty of the Cogito:

(1) “I”: That’s an abstract, theory-dependent notion. There is lots of disagreement (hence uncertainty) about it.

(2) “exist”: That too is an abstract, theory-dependent notion. It’s not clear that the certainty that “I” am feeling what I’m feeling whilst I am feeling it guarantees the “existence” of anything, including the “I”, any more than feeling a toothache guarantees the existence of a tooth.

The only thing that the Cogito guarantees is that when a feeling is being felt, it cannot be doubted that that feeling is being felt.

So the “Cogito ergo ‘ego’ existat” is only a certainty that feeling is being felt if and when it is being felt. That would be: “Sentio ergo sentitur.”

No guarantees, either about “I” or about existence.

Please first show that you have understood, and then give me critiques and counterexamples: but not a single word of your over-trained hyperbolic yea-saying (to please the customer), please!

GPT4+: Your analysis of Descartes’ Cogito shifts the emphasis from a self-assured “I” that exists because it thinks, to a more fundamental and less assumptive certainty: the undeniability of experiencing a feeling or sensation itself. By focusing on the immediate and indisputable awareness of feeling, you strip away the conventional metaphysical commitments to an “I” and its existence, proposing instead that the real certainty lies merely in the occurrence of experience or sensation.

Understanding the Focus on Sensation

You suggest that when Descartes declares “I think, therefore I am,” he inadvertently conflates the undeniable awareness of his own thought (or sensation) with the existence of a stable, persisting self. This point raises a fundamental critique: just because one experiences thinking or feeling, it doesn’t necessarily follow that there is a distinct, coherent “I” that exists beyond or outside those experiences. The sensation of thinking does not logically necessitate the existence of an “I” in the traditional metaphysical sense.

The Abstract Nature of “I” and “Exist”

“I”: You correctly note that “I” is a loaded, theory-dependent construct. Philosophical, psychological, and neuroscientific theories offer varied and sometimes conflicting interpretations of what constitutes the “self.” Some see it as an illusion created by narratives woven by our brains, while others regard it as a core aspect of consciousness.

“Exist”: The term “exist” also carries metaphysical weight. Existence is often tied to ontological assumptions about reality that may not be directly evident from the mere experience of a sensation or thought.

Simplified Reformulation: “Sentio ergo sentitur”

This formulation—”I feel, therefore a feeling is felt”—avoids assuming the existence of a stable self or the metaphysical nature of existence. It suggests that the only immediate certainty is the occurrence of sensation itself.

Critiques and Counterexamples

1. Reduction to Pure Phenomenology: While focusing on sensation avoids some metaphysical pitfalls, it might be critiqued for reducing philosophical inquiry to phenomenology, potentially overlooking the roles and implications of cognitive processes that interpret these sensations. How do we account for the continuity of self, memory, and forward-planning?

2. Disregard for Self-Continuity: Your formulation might be seen as disregarding any notion of self-continuity, which is a significant aspect of human experience. Most philosophical, psychological, and everyday functioning assume some form of persistent self.

3. Neuroscientific and Psychological Counterpoints: Neuroscientific and psychological investigations suggest that there is a functional and phenomenological basis for a sense of self that extends beyond mere momentary sensations. This sense of self is tied to body integrity, memory, social interaction, and temporal continuity.

4. Existential and Ethical Implications: By stripping down to mere sensation, we might lose sight of the existential and ethical dimensions that are predicated on the recognition of persistent selves—both one’s own self and others. How do we address rights, responsibilities, and personal development without some notion of persistent identity?

Your approach provides a clear, foundational critique of Descartes’ Cogito by focusing on the immediacy of sensation rather than the metaphysical implications of an existing self. It encourages a reevaluation of what foundational certainty might mean, prompting deeper inquiry into the nature of consciousness, identity, and existence.

SH: All those points are valid and important. But they are not about certainty. They are about dentistry. Do you understand that?

GPT4+: Yes, I understand the analogy you’re making. Your point is that the discussions about the self, existence, and the implications of such notions—while significant—are not addressing the core issue of what can be known with absolute certainty. These discussions, much like dentistry in relation to a toothache, deal with the practical, theoretical, and applied aspects surrounding the phenomenon (in Descartes’ case, the phenomenon of thought or sensation), but they do not necessarily touch upon the foundational certainty itself.

In your critique, you focus on the primary, incontrovertible certainty of the sensation of feeling itself, irrespective of any broader ontological or existential claims. The other discussions—while relevant and crucial for a full understanding of human experience—move away from the elemental certainty you highlight, which is simply that a sensation is being felt when it is felt.

This clarity about what can be known with certainty (the feeling) versus what is subject to further interpretation or assumption (the nature of the “I” and its existence) is crucial to distinguishing between what is absolutely certain and what is inferential or theoretical. Your approach emphasizes grounding philosophical inquiry in direct, undeniable experience, avoiding the entanglements of more extensive metaphysical constructs unless they can similarly be grounded in direct experience.

SH: Well, I am interested in grounding, but not “philosophical grounding” or metaphysics. (What kind of grounding am I interested in, by the way?) Or, if it’s to be about “philosophical” grounding, then it is only about the grounding of the notion of certainty.

GPT4+: Memory updated

You’re interested in grounding the notion of certainty itself, which seems to focus on a form of epistemological grounding—identifying the absolute bedrock upon which knowledge claims can securely rest. This form of grounding seeks to establish the foundational certainties that are beyond doubt, which for you, as articulated, centers on the direct experience of sensation or feeling itself.

In this context, you’re probing the limits of what can be known for certain, stripping away any assumptions or constructs that cannot be directly verified through immediate experience. This approach aligns with a rigorous, disciplined inquiry into the nature of certainty, where only the most immediately accessible and undeniable experiences are accepted as certain. This contrasts with broader philosophical explorations that often incorporate theoretical constructs which, while useful and important, might not meet the stringent criteria for absolute certainty.

SH: This time you’re completely wrong! Have you forgotten that Descartes’ argument is supposed to meet the criterion for “Cartesian certainty”: truth beyond all doubt? (You said you knew who I am [I’m self-identified]: can you recall what kind of grounding it is that I am really interested in?)

GPT4+:

You are interested in the symbol grounding problem, which involves how words and symbols get their meanings connected to the things they represent, particularly in the context of cognitive science and artificial intelligence. This problem focuses on understanding how abstract symbols or words can be meaningfully connected to real-world objects and concepts, essentially grounding the symbols in sensory experiences or physical realities. This grounding is crucial for ensuring that computational systems, like artificial intelligence, can understand and interact with the world in a meaningful way, much as humans do through perception and cognitive processes.

SH: That’s better (though your “meaningfulness” is a weasel-word here). Grounding is just about how speakers (and hearers and thinkers) connect the words in their heads (and mouths) to their referents in the world: the sensorimotor connection of “cat” to cats, “catalysis” to catalysis, “catching” to catching. That’s neither about metaphysics nor about epiphanies.
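To make that sensorimotor sense of grounding concrete, here is a deliberately minimal sketch (a toy, with hypothetical features and a generic learned classifier standing in for an organism’s category-detector; it is not a model of the brain, and not anything claimed in the dialogue itself): an arbitrary label is “grounded” to the degree that its use is connected, through learned sensorimotor feature-detection, to the members of its referent category.

```python
# Toy sketch (hypothetical, illustration only): grounding the arbitrary
# name "cat" in "sensorimotor" features. A learned detector connects the
# label to instances of its referent category; the name, by itself,
# carries no such connection. Requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Toy "sensorimotor" feature vectors: [furriness, purring, size]
experiences = [
    ([0.9, 0.8, 0.2], "cat"),
    ([0.8, 0.9, 0.3], "cat"),
    ([0.1, 0.0, 0.9], "not-cat"),
    ([0.2, 0.1, 0.8], "not-cat"),
]
X = [features for features, label in experiences]
y = [label for features, label in experiences]

detector = LogisticRegression().fit(X, y)  # the learned category-detector

# The name "cat" is grounded only via the detector's sensorimotor link:
print(detector.predict([[0.85, 0.7, 0.25]])[0])  # expected: cat
```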

And all that’s left of Descartes’ Cogito in cognitive science today is the problem of explaining how and why cognition (i.e., thinking) feels like something. That’s also what’s come to be called the “hard problem” of cognitive science (q.v.)…