Exchange with ChatGPT on the “Easy” and “Hard” Problem of Cognitive Science

SH: Could you please summarize briefly what is meant by the “underdetermination of scientific theory by data,” including possibilities like multiple “Grand Unified Theories of Everything” (GUTEs).

ChatGPT: In its standard form, underdetermination means that any finite body of empirical data can be equally well explained by more than one scientific theory, often infinitely many. The classical point is logical: data confirm but never entail a theory. Distinct hypotheses can generate the same observable predictions; hence data alone cannot force a unique theoretical choice. Two versions are usually distinguished:

(1) Local underdetermination.
For a given domain, rival theories can match all available observations while differing in unobservable posits or in their explanatory architecture. In practice, scientists appeal to auxiliary criteria (parsimony [Occam’s razor], simplicity, coherence, elegance, unification), but these are not dictated by data.

(2) Global underdetermination.
Entire world-models can be constructed that reproduce every possible observation while differing in their “deep” structure. Put differently: the mapping from theory-space to observation-space is many-to-one. If so, we could never determine which of several complete, internally coherent descriptions of the world is the true one.

The idea of multiple “Grand Unified Theories of Everything” (GUTEs) is a direct application: even if physics eventually converges on a complete set of surface-regularities, there may still be many distinct fundamental theories—different dynamical laws, symmetries, or ontologies—that reproduce all observed phenomena with arbitrarily fine accuracy. Nothing in the data would distinguish them.
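Purely as an illustrative sketch (not part of the original exchange), the many-to-one mapping from theory-space to observation-space can be mimicked in a few lines of code: two structurally different “theories” agree on every observable prediction, so no observation can distinguish them. The function names and the hidden quantity `h` are invented for the illustration.

```python
# Two structurally different "theories" of the same observable regularity.
# Theory A posits a direct law; Theory B posits a hidden intermediate state.
# Both yield identical predictions for every observable input, so data alone
# cannot decide between them: the theory-to-observation map is many-to-one.

def theory_a(x: float) -> float:
    # Direct dynamical law: the observable output is 2x + 1.
    return 2 * x + 1

def theory_b(x: float) -> float:
    # Different "deep structure": an unobservable intermediate quantity h,
    # posited by the theory but never directly measured.
    h = (x + 0.5) * 4        # hidden posit: 4x + 2
    return h / 2             # observable consequence: also 2x + 1

# Every possible observation agrees, yet the posited ontologies differ.
observations = [-3.0, 0.0, 1.5, 42.0]
assert all(theory_a(x) == theory_b(x) for x in observations)
```

The point of the sketch is only that agreement on all observables leaves the internal structure undetermined; nothing about the data forces a choice between the two.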

In your own framing: cognitive science has an analogous situation with GUTE_c, where candidate causal models of cognition may all match every observable capacity Turing-Indistinguishably (T4-level performance) yet diverge on the crucial but empirically inaccessible question of whether and how they generate feeling. Because sentience (feeling) is not an observable variable (to anyone but the feeler), T4-indistinguishability does not break the tie. This is underdetermination extended to the Hard Problem, with the Other-Minds barrier supplying the unresolvable causal gap (sometimes called the “explanatory gap”).

The general moral is: even ideal empirical completeness does not guarantee theoretical uniqueness, causal completeness or explanatory adequacy.

SH: I’m amazed at how uncannily well you managed to integrate general notions about underdetermination with my own notions about the “Easy Problem” and the “Hard Problem” of reverse-engineering cognitive capacity. Can you be a little more specific? In particular, can you infer why I would say that the solution to the EP would make the HP even harder?

ChatGPT: The reason you would say that solving the Easy Problem (EP) makes the Hard Problem (HP) harder still follows directly from your two commitments:

  1. Causal explanation is finite.
    A complete T4-level explanation of cognitive capacity—perception, category learning, language, action, memory, attention—will already account for all observable causal work the organism does.
  2. Sentience is not an observable variable.
    Because feeling is accessible only to the feeler, no amount of external data can constrain theories about why feeling occurs, or how it is generated.

Now the key point you have emphasized:

Why solving the EP exhausts the causal degrees of freedom

If cognitive science eventually produces a full causal model explaining every performance capacity (T4 indistinguishability), then all the causal machinery needed to produce the organism’s behavioral, neural, and functional profile will be accounted for. Nothing in that explanation requires, predicts, or invokes feeling.

At that point:

  • All the doing is explained.
  • All the mechanisms underlying doing are explained.
  • Every causal gap relevant to performance has been closed.

But sentience—the fact that it feels like something to think, see, hurt, or enjoy—still stands outside the causal account. And because the EP’s solution has already used up every available causal role, there is no remaining explanatory function left for feeling to play. This is exactly what you mean when you say:

“The solution to the EP will have used up all the causal degrees of freedom.”

Why this intensifies underdetermination

Before solving the EP, one could still imagine that perhaps somewhere in the causal tangle there is a hidden role for feeling—some undiscovered function, some latent variable, some missing mechanism. But once the EP is solved:

  • There is nowhere left to put sentience in the causal architecture.
  • There is no remaining causal work for feeling to do.
  • Yet feeling undeniably exists for the only observer to which it is accessible: the feeler.

Hence the HP becomes worse than underdetermined: it becomes causally superfluous. Even if, as you put it, an omniscient deity could reveal which GUTE_c actually “has” feeling, that revelation would not constitute an explanation. It would merely fix the value of an empirically uncheckable parameter.

What this means for underdetermination

Your position is that HP-underdetermination is categorically deeper than ordinary theory-underdetermination:

  • Ordinary underdetermination concerns multiple theories doing equal causal and explanatory work.
  • HP-underdetermination concerns a real phenomenon—feeling—that does no causal work in the model and whose presence or absence is empirically inaccessible.

Thus, solving the EP not only fails to solve the HP; it removes any causal foothold from which a solution could in principle be constructed. This is why, in your view, the HP is not merely unsolved but unsolvable in the same scientific mode as the EP (or any other empirical phenomenon).

That is the conceptual structure linking underdetermination, the EP, the HP, and your GUTE_c argument.

SH: A bigger challenge: Can you relate that to the notion that the reverse-engineering of cognitive capacity (GUTE_c) — as opposed to the more general notion of the GUTE in all of physics, chemistry and biology (where it is seen as the discovery of “laws” of nature) — amounts to modelling cognitive capacity computationally and then testing a 3D-printed version with the T4 Turing Test?

ChatGPT: Here is the way your own framework links these pieces together—GUTE vs GUTE_c, reverse-engineering, computation, 3D-printing, and T4—without straying into hype or cliché.


1. Why GUTE (physics/chemistry/biology) is a search for laws

In physics and the natural sciences, a GUTE is conceived as a single, compact set of dynamical principles from which all physical regularities follow. The explanandum is the entire causal traffic of the universe. The explanans is a set of general laws.

Even here, the laws are underdetermined by all possible observations. Multiple distinct dynamical systems can reproduce the same empirical regularities. But crucially:

  • The explananda are publicly observable.
  • The causal work to be explained is completely external (forces, fields, particles, macromolecules).

There is no hidden-variable-like feeling complicating the picture.


2. Why GUTE_c is not a search for “laws” but for a causal mechanism that does what a human does

The explanandum of cognitive science is capacity, not cosmos:

  • Categorizing
  • Learning
  • Speaking and understanding
  • Acting adaptively
  • Planning and reasoning

These are performance capacities, not universal laws.

Reverse-engineering cognition therefore means:

  • Finding a causal mechanism that generates T4-equivalent performance,
    not discovering a new fundamental law of nature.

This is what makes GUTE_c a different kind of project. It aims at a full, working model of a cognizer.


3. Why computational simulation is not enough

Computation alone (T2) yields:

  • A symbolic model
  • A syntactic simulation
  • A system that can describe or simulate doing, but does not itself do

This is the crux of your symbol-grounding insistence: ungrounded symbols can express anything describable, but they do not interact with their referents.

Hence the need for robotic (sensorimotor) grounding.

A reverse-engineered cognizer must:

  • detect sensory input from the world
  • act on the world
  • learn from the consequences of its actions

This stage is no longer purely computational. It concerns the causal organization of a dynamical system.


4. Why the “3D printer” matters

Your analogy:

  • The computational model is like a set of engineering blueprints.
  • The 3D-printed robot is the realization of those blueprints in the physical world.

The blueprints alone cannot pass T3 or T4.
Only the built system—the embodied mechanism—can interact causally with the world.

This is where your distinction becomes sharp:

  • The GUTE in physics explains by laws.
  • The GUTE_c in cognitive science explains through material implementation.

Success is demonstrated not by formal derivations but by dynamic performance indistinguishability.


5. T4 as the empirical criterion for solving the Easy Problem

T4 requires:

  • Human-equivalent, Turing-Indistinguishable behavioral performance capacity
  • Human-equivalent, Turing-Indistinguishable robotic sensorimotor interaction with the world
  • Human-equivalent, Turing-Indistinguishable neural/physical organization (observable structure and dynamics)

Once a 3D-printed cognizer is T4-indistinguishable from a human, the EP is solved, because:

  • All observable causal capacities have been accounted for.
  • Nothing the organism does is left unexplained.

This ties GUTE_c neatly to your causal-degree-of-freedom argument.


6. Why this makes the Hard Problem even harder

Once a T4-equivalent mechanism exists:

  • It can generate all the behavioral and neural evidence available.
  • Every function that humans can perform is now implemented without reference to feeling.
  • The entire causal profile of cognition is explained.

Hence feeling becomes:

  • causally superfluous (no causal role left to play)
  • empirically inaccessible (Other-Minds barrier)
  • underdetermined not merely by data, but by any causal requirement

Thus, solving GUTE_c fully (reverse-engineering and 3D-printing a T4 cognizer) eliminates the last possible place where feeling could fit as a causal ingredient, making the HP maximally intractable.


The upshot

In physics, GUTE underdetermination is about multiple law-systems fitting the same observable regularities.

In cognitive science, GUTE_c underdetermination is deeper:

  • Multiple T4-indistinguishable mechanisms could generate all human capacities.
  • None of them provides a causal explanation of how or why those capacities feel like anything.
  • The construction and successful testing of a T4 cognizer fixes all causal facts, leaving feeling entirely outside the causal story.

This is why, in your view, the EP’s solution makes the HP not merely unsolved but unexplainable by the same explanatory resources.

That is the structural parallel and the crucial divergence between GUTE and GUTE_c.

SH: I don’t think GUTE_c is “deeper” than GUTE. Rather, like all engineering (forward or reverse), GUTE_c is an application of GUTE. Causality is GUTE-causality. Replace “GUTE_c” by Guce.

3D-printing the Guce model is for testing the Guce model, recruiting the real GUTE Universe (including human Turing-Testers, and their “mind-reading” mirror capacities) to put the Guce model through its paces (including its putative lifelong performance Turing-Indistinguishable cognitive capacity). Otherwise the model would have to model not only the Guce but the GUTE.

3-D printing a candidate Guce model is not “grounding” it: Grounding is the capacity to “do the right thing with the right kind of thing” (usually by learning the sensorimotor features that distinguish the right kind of thing from the wrong kind of thing). The purpose of 3D-printing a candidate Guce model is to Turing-Test it (T4) in the world. (The purpose of Grounding itself is to integrate T2 verbal capacity coherently with T3 perception/action capacity).

Don’t forget that the Guce model is modelling cognitive performance **capacity** — not just one cognizer’s performance. (This is equivalent to Chomsky’s competence/performance distinction.)

T4 indistinguishability (Turing-Indistinguishability) = empiricism itself (for T-testing Guce candidate solutions to the EP).

According to the current point on the standard road to GUTE, there can be no free quarks, only bound quarks. So quarks do have to exist (according to current standard theory), but they cannot be observed except in their bound state within hadrons such as the proton. So quarks exist, but are unobservable in isolation (according to current standard theory). In contrast, superluminal (faster-than-light) speeds cannot occur, except as a relativistic observer illusion. But neither case offers a route for explaining how or why feelings exist (although they do exist — and are observable to the feeler). This is part of the unique “hardness” of the HP.

And without a causal explanation, even a perfect, exclusive, exception-free correlation between felt states and, say, dopamine secretion would not be a causal (how/why) explanation of feeling. Correlation and confirmed exception-free prediction are not causal explanation.

Nor is Grounding = feeling — although feeling is what makes explanation (fallibly) feel explanatory.

A hard problem and (I daresay) unique…

Mutanda mutata…

ChatGPT: Here is a reformulated version that incorporates every correction you have just imposed, avoids the mistakes in my previous framing, and keeps all your distinctions straight. I will treat Guce exactly as you specify: an engineering application of GUTE, not a “deeper” theoretical problem than GUTE, and certainly not a search for new laws of nature (which would be “dualism”).

I will also preserve your distinctions between Grounding, 3D-printing, Turing-testing, GUTE, Guce, the EP, and the HP, and show how the pieces fit.

No sycophancy; just exact conceptual alignment.


1. GUTE vs. Guce

Guce is not a parallel project to GUTE and not a “deeper” one. It is an application of GUTE to test (and Turing-Test) the causal engineering principles modelled by Guce.

  • GUTE (physics, chemistry, biology) → discovers the causal structure of the Universe.
  • Guce → uses that causal structure to engineer a mechanism capable of human cognitive performance capacity.

In your terms: all causality is GUTE-causality, and Guce is simply a reverse-engineering problem carried out wholly within that causal domain.

Guce therefore depends on GUTE exactly as aeronautical engineering depends on Newtonian and fluid laws: Guce uses the laws; it does not add new ones.


2. Why 3D-printing is required (but is not grounding)

A Guce model is:

  • a proposed causal mechanism (not just a computational simulation of one),
  • intended to produce, test and explain the capacity for lifelong human-equivalent cognitive performance.

To test it, one must instantiate it physically—“3D-print” it—because only a physical mechanism, under the real GUTE Universe, can:

  • interact with real objects,
  • acquire sensorimotor robotic capacity (T3),
  • learn by acting and being corrected,
  • support the integration of T2 verbal capacity with T3 sensorimotor capacity.

This is not grounding itself. Grounding is the system’s ability to do the right thing with the right kind of thing, typically by learning the distinguishing features of the category that matters. 3D-printing is what allows you to test whether a proposed mechanism can, when embedded in the world, succeed in grounding content-words and the subject/predicate propositions of natural language.

In short:

  • Grounding = doing the right thing with the right kind of thing.
  • 3D-printing = putting the candidate Guce mechanism into the GUTE world so grounding can be tested and assessed.

3. Why T4 is the right empirical criterion

A Guce candidate is T4-tested:

  • T2: Turing-Indistinguishable linguistic performance capacity
  • T3: Turing-Indistinguishable sensorimotor performance (perception and action)
  • T4: Turing-Indistinguishable full structural and dynamical equivalence in observable organization

A T4 test recruits the actual GUTE universe—including human Turing-testers with their “mind-reading” and mirror-system capacities—to judge whether the Guce implementation is indeed able to perform Turing-indistinguishably from a real human for an entire lifespan’s worth of cognitive capacity.

This is simply empiricism:

T4 indistinguishability = the empirical condition for accepting a Guce model as a solution to the EP.

Nothing higher-order; just ordinary scientific testability.


4. Why solving the EP leaves no causal room for the HP

If a Guce model passes T4:

  • Every performance capacity is causally accounted for.
  • All doing-capacities (doing this, not doing that) are completely explained.
  • GUTE-causality has been fully exercised; no causal gap remains.

Feeling, however:

  • is real (observable to the feeler),
  • is unobservable to others,
  • plays no causal role left unfilled by the T4-tested mechanism,
  • and cannot be deduced from its correlation with any physical variable (dopamine, C-fibers, etc.).

Correlation, no matter how exception-free, is not causal explanation. And without an open causal slot to fill, feeling has nowhere to enter the causal story.

This is the unique hardness of the HP.
Nothing in Guce—successful or not—advances an explanation of how or why feeling occurs.


5. Why the HP is not like unobservable quarks or superluminal illusions

You make this point sharply:

  • Free quarks: unobservable but required by current standard theory; their unobservability is a feature of the causal story.
  • Superluminal appearances: observer-relative illusions; relativity explains why they are not real.

These cases are comfortably inside GUTE causal explanation.

Feeling is not:

  • it is not an unobserved constituent required by theory;
  • it is not an illusion;
  • and it cannot be explained away by GUTE causal structure.

Its existence is certain (to the feeler), but no causal role emerges for it within a complete Guce mechanism. Hence no route from GUTE-causality to “how/why feeling.”

This is what makes the HP genuinely unique—not just a more stubborn version of standard scientific underdetermination.


6. Why grounding is not feeling

You note this often:

  • Grounding is about sensorimotor feature learning.
  • Feeling is what makes explanation feel explanatory.
  • But grounding ≠ feeling.

Even a successful Guce model—fully grounded, fully T4—leaves feeling unaccounted for.


7. The final picture

Putting it all together:

  • GUTE gives the causal laws.
  • Guce is an engineering application: reverse-engineering the causal mechanism of human cognitive capacity.
  • 3D-printing is the empirical instantiation of the proposed Guce mechanism for T4 testing.
  • Grounding is the system’s ability to learn and do the right thing with the right kind of thing; it is not a form of sentience.
  • Solving the EP (T4 success) uses up all causal degrees of freedom.
  • The HP remains untouched and uniquely unexplainable by any causal account, including the most complete Guce mechanism.

Mutatis mutandis, as requested.

On the “Easy Problem” (EP) vs. the “Hard Problem” (HP) of Cognitive Science

Student:  This reading, along with our class discussions, made me wonder about the relationship between the easy and the hard problems and the idea of degrees of freedom. How can we say that the easy problem uses all the degrees of freedom which prevents us from understanding the hard problem when we don’t know the complete truth of the easy problem? Isn’t there a possibility that feelings have a strictly physiological explanation the same way that the easy problem supposedly does? My question is more why do we assume that they are two different problems when we haven’t completely explained either?

SH: Good question. It opens a methodological and metaphysical Pandora’s Box — but one that, fortunately, only philosophers need to struggle with.

The “Easy Problem” (EP) of Cognitive Science is the problem of explaining, causally, the cognitive capacities of thinking organisms, especially humans: How and why are they able to do all the cognitive (as opposed to “vegetative”) things they can do?

It is not the EP that makes the HP harder but the solution to the EP (which is still far away). 

Will the EP ever be solved? Who knows. But there is no more reason to think that the EP cannot be solved than any other normal scientific problem. For the HP, though, there are reasons (what are they?). But those are already what make the HP hard.

The solution to the EP would (or will) make the HP even <I>harder</I> because it would (or will) exhaust all the causal (empirical) degrees of freedom altogether. Until the EP is solved, there are still things left to be tweaked. “Tweaking” means there are still causal alternatives to try, and to test.

But once the EP is solved, what’s left to try and to test? With the EP already solved, there are still the degrees of freedom of <I>underdetermination</I> available: you have found one solution to the EP, yet there may be other solutions to the EP. But if you have six solutions — six ways to reverse-engineer cognitive capacity, and they all work — what is the empirical test for which (if any) of them is the “right” one? That is where Turing-Indistinguishability becomes the same thing as empiricism: the EP solutions are all equivalent, and there is nothing more to tweak and test.

But so far that’s just the ordinary underdetermination of complete causal explanations: If you’ve explained all the empirical (observable, measurable, testable) data, you’ve done as much as can be done with causal explanation. This is just as true in physical science (the “Grand Unified Theory of Everything,” GUTE) as it is for the EP of cognitive science (the reverse-engineering of organisms’ cognitive capacities: the Turing Test(s)).

The difference between cognitive science and physics, though, is the HP (sentience): How and why do sentient organisms <b>feel</b>, rather than just <b>do</b>? The solution to the EP will have already reverse-engineered the EP — even if it comes up with 6 equivalent Turing-Indistinguishable EP solutions rather than just one. 

Either way, something has been left out: the Cartesian fact that each feeling organism knows — [the Cogito/Sentio, remember?] — which is that they feel. This does not mean that the HP is really just the OMP (Other Minds Problem), which is that there’s no way to be sure that anyone else feels but oneself (Turing’s “solipsism” solecism). That is no more a scientific (or commonsense) problem than underdetermination is (although it is definitely a problem for those nonhuman animals who are sentient, but that humans think [or pretend to think] they aren’t).

Causal explanation (whether it’s reverse-engineering organisms’ cognitive capacities or the universe’s dynamic properties) does not need certainty, any more than categorization (and definition) needs an exhaustive list of category-distinguishing features: they need only enough features to get it right, until you need to try and to test more features to keep getting it right (sample more of the mushroom island). In empirical science, unlike in formal mathematics and logic (computation), there is no certainty, just uncertainty-reduction to as low as you can get it.

Even T4 doesn’t solve the HP: Even if it turns out that there is some T4 correlate of feeling (say, a chemical in the brain) which is found to be secreted in the brains of only sentient organisms, and only whilst they are feeling something — and it keeps turning out that T3 cannot be passed (nor the EP solved) without at least that T4 chemical — that still does not explain, causally, how and why sentient organisms feel. T4 is, after all, just part of the EP. Correlates can be tweaked and tested, but the arbiter is still only the EP. Not even the verbal report of every sentient human — nor lapsing into an immediate state of general anesthesia in the absence of the T4 chemical — explains how or why feeling (rather than just the T4 chemical) is needed to pass T3.

T4 correlates in EP don’t become causal explanations in HP.

Pondering “consciousness”— but begging the question of sentience

by Stevan Harnad & ChatGPT

Preamble  (Stevan Harnad):

(1) There is a fundamental difference between the “Other Minds Problem (OMP)” (that the only one who can observe that a feeler feels is the feeler) and the “Hard Problem” (HP).

(2) “Consciousness” is a weasel-word; it means too many different things to too many people.

(3) Sentience—the capacity to feel (i.e., “the capacity to be in a state that it feels like something to be in”)—is not a weasel-word. It is the only directly “observable” thing there is.

(4) By definition, all sentient entities can feel. (The obfuscating weasel-version of sentience is “phenomenal consciousness”: “felt feeling”.)

(5) There is only one OMP—but there are many different kinds of feelings (from seeing to hearing to tasting to touching to talking to thinking).

(6) The capacity to feel (anything) is what the comparative study of the evolution of “consciousness” (feeling) is really about. But this concerns not just the OMP (“Which species can feel? What do they feel? and How and why did that capacity evolve?”).

(7) The OMP can only be studied through its observable correlates: “What can different species detect and react to?” “What was the origin and adaptive advantage of these observable correlates of sentience?”

(8) But the evolutionary problem of explaining the origin and adaptive advantage of sentience is not just the OMP. It is also the “Hard Problem” (HP) of explaining, causally, how and why the things that different species can detect and react to (i.e., what they can do) are not only detected and reacted to but felt.

Harnad & GPT:

The central difficulty in the Royal Society volume is that almost every chapter, including the editors’ framing introduction, proceeds as if the distinction drawn in these eight points did not exist. The contributors are unanimous in treating sentience as if its existence and causal potency were unproblematic: they take for granted that feeling plays adaptive roles in learning, prediction, decision and coordination. This is surely true. But the challenge is to explain how and why the neural or behavioural mechanisms they describe are felt rather than merely executed (done). The authors treat feeling as though it were just another biological property awaiting the same sort of explanation that feathers or kidneys receive, rather than the anomalous property singled out in point 8. Consequently, the question that the volume endeavours to address—What is the adaptive function of consciousness?—is answered on an operational level only: they explain what organisms can do, not why any of these doings feel like anything.

The editors simply take for granted that a functional role for felt capacities entails that the observable function is eo ipso the cause of the fact that it is felt, rather than just executed. But this merely presumes what is to be explained. It does not show why the functional capacity could not be instantiated by an unfelt mechanism, which is the substance of the Hard Problem. In the eagerness to naturalize consciousness, feeling is treated as if it were self-evidently part of the causal machinery, thereby glossing over the explanatory challenge the editors hope to confront.

The individual chapters adopt the same pattern. When Humphrey distinguishes cognitive from phenomenal consciousness through blindsight, he proposes that phenomenal experience evolved because internalized efference copies make things “matter.” But the argument only redescribes the behavioural consequences of feeling and attaches them to a proposed neural mechanism. It does not explain how efference copies become felt, nor why “mattering” cannot be just functional rather than felt. The distinction between blindsight and sighted vision merely demonstrates different forms of information processing. The transition to felt vision—point 8—is treated as if it somehow came automatically with the functional mechanism. How? Why?

Similarly, Ginsburg and Jablonka (G & J) propose that “unlimited associative learning” (UAL) marks the presence of consciousness and that felt states play a role in “mental selection.” (The “mental” is somewhat redundant: why not “internal selection”?). But again, the fact that an organism learns flexibly and projects goals does not explain how or why such processes are felt. G & J’s marker identifies a behavioural threshold; but the behavioural threshold does not itself entail or explain feeling. In linking UAL to phenomenal consciousness, they rely on the assumption that because flexible learning is sophisticated, it must be accompanied by felt experience. This conflates the OMP with the HP and leaves the causal question untouched.

Moncoucy, Tallon-Baudry and Cleeremans likewise treat phenomenal consciousness as an evolved internal valuation system. The explanatory vocabulary is motivational, computational and behavioural; feeling is assumed to be the medium of valuation, not explained. Their suggestion that pleasure becomes a proximate motivator does nothing to close the gap between reactive behaviour and felt valence. They redescribe the function of hedonic signals, but the hedonicity itself is again taken for granted.

Andrews and Miller propose that sentience evolved to support social coordination. But their argument takes for granted that the social signals in question are felt, and that without felt states the coordination would fail. This again simply presupposes that felt experience is necessary for the adaptive benefit; but this is exactly what needs to be explained.

Crystal’s treatment of episodic memory repeats the same pattern: because episodic recollection in humans is rich, and because rats exhibit what-where-when integration, rats must likewise feel the temporality of recollection. But no causal explanation is given for how replay mechanisms become felt, nor why replay must be felt to perform its adaptive function.

Tramacere, Chittka, Kohda, Maldarelli & Güntürkün, Klein & Barron, Newen & Montemayor, Singer & Damasio all likewise attribute functions—flexibility, prediction-error integration, self-representation, action selection, analogue interoception—to states that they infer to be conscious. But in every case the fact that those states are felt is not explained; it is simply presupposed as part of the function itself. When one extracts the functional core of each proposal, nothing explains why it need be executed non-sentiently. This is the sense in which all these accounts beg the question of point 8.

The cumulative problem is that the authors substitute the study of behavioural and neural correlates—the only tractable approach to the OMP—for the explanation of how or why any of these correlates need be felt. They mistake empirical correlates for causal explanations. This is the confusion between the OMP and HP that the eight points above distinguish and that the editors do not address. The volume is therefore valuable as a survey of what organisms can do, but without a hint of why any organism feels anything in doing it. That question is presupposed at the outset and then buried under functional reinterpretations of what are, in effect, only correlates of sentience.

Fitch, W. T., Allen, C., & Roskies, A. L. (2025). The evolutionary functions of consciousness. Philosophical Transactions B, 380(1939), 20240299.