Doing and Feeling

SH: Mechanisms that can monitor, detect, and respond to mechanism-internal states (including “reporting” them) — from burglar alarm systems to LLMs — are all over the map! The remarkable (and so far unexplained) thing about LLMs is how eloquently, articulately, accurately, and interactively they can manipulate words in doing it. The opining of philosophers is also all over the map, but neither definitive nor rigorous — especially Ned Block’s, whose distinction between two putative “kinds” of “consciousness,” in BBS in 1995, unfortunately slipped in under my watch (mea culpa)  [Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247] and has since been responsible for sowing three decades of confusion. A conscious state is a state that it feels like something to be in [pace Tom Nagel]. Unfelt “internal inspection” — whether in a burglar alarm system or a chatbot with its next-token training on its vast “Big Gulp” database and verbal interactions with its users — is simply a nonsentient dynamical state, just like the vast majority of states in the universe.

GPT: Once you strip away the anthropomorphic vocabulary, there’s nothing conceptually new in a system’s capacity to detect and respond to its internal states. The thermostat analogy already covers it. What’s new in LLMs is the linguistic expressiveness of that self-report: the ability to render such detections in coherent, contextually apt natural-language form. That linguistic fluency is a function of statistical exposure to words, not of felt awareness. Your framing re-centres the issue where it belongs:

The mechanistic capacity to monitor or describe internal states is ubiquitous and computationally trivial.

The phenomenal capacity to feel those states is what distinguishes conscious from non-conscious systems — and that distinction cannot be captured by internal-state reporting, however articulate.

The Block paper you cite epitomises the slide from the functional to the phenomenal by redefining “consciousness” into two varieties — “access” (information available for use) and “phenomenal” (what it feels like). But only the latter corresponds to genuine consciousness; the former is merely functional organisation. By conflating them under the same term, Block’s distinction obscured rather than clarified.

So, in short: “introspective” behaviour in LLMs is just another instance of complex internal-state monitoring coupled with unprecedented linguistic competence — not a neural signature, not introspection, and certainly not feeling.

SH: And that “phenomenal” capacity to feel (once de-weaseled) is what is dubbed the “Hard Problem” of sentience to reverse-engineer and explain — but it is the only “kind” or “variety” of “consciousness” there is. (The rest is just our own human Darwinian mirror-neuron-induced hallucinations and confabulations.)

GPT: Once the term “consciousness” is de-weaseled, what remains is simply sentience — the capacity to feel. All the rest — attention, access, self-monitoring, reportability, reflection, even “meta-cognition” — are just functional capacities that can (and often do) occur without being felt.

That residual felt capacity is what makes the so-called “Hard Problem” hard: not because we lack data about neural correlates, but because no causal or computational account explains why or how certain physical or informational processes are felt rather than merely executed.

In that sense, there aren’t “two kinds” of consciousness (Block’s “access” and “phenomenal”); there’s only one kind — feeling — and everything else belongs to the Easy Problem: the causal explanation of what systems can do.

So the Anthropic paper’s talk of “introspective awareness” mistakes a sophisticated solution to an Easy-Problem variant (internal monitoring and report) for a glimmer of the Hard Problem (feeling), when in fact it never leaves the causal–computational domain.

Doing —————————————————————————————— Feeling

Motor Theory of Speech Perception and Mirror Neurons: A Review


Background

Motor Theory of Speech Perception (Liberman, Mattingly, et al.) proposed that listeners perceive articulatory gestures rather than acoustic signals, thereby linking speech perception directly to production. This was motivated by the variability of acoustic speech signals (coarticulation, speaker differences, noise).

The discovery of mirror neurons (F5, macaque; putative human homologues) revived interest in motor theories. Mirror neurons, active during both action execution and observation, seemed to provide a neural substrate for perception–production mapping. Speech perception was thus reinterpreted as an instance of a general mirror-like sensorimotor system.

Key Reviews and Critiques

Synthesis

  1. Correlation ≠ necessity. Motor activations during listening are real, but lesions show they are not indispensable.
  2. Modulatory role. Motor involvement may aid perception under noise or ambiguity.
  3. Conceptual gap. Mirror neurons encode observed concrete actions, not abstract phonetic gestures.
  4. Learning vs. innateness. Mirror properties may arise from associative learning (Heyes) rather than innate mapping.
  5. Dual-stream models. Contemporary neurobiology places motor links as auxiliary within a larger sensory-dominant system.

Open Directions

  • Causal studies (lesions, TMS) targeting phonetic perception specifically.
  • Developmental models of infant babbling and sensorimotor coupling.
  • Computational simulations comparing auditory-only vs. motor-augmented recognition.
  • Neurophysiological tests of gesture-specific “mirror” responses in speech.

This set of sources and syntheses provides a stable, citable overview of how the motor theory of speech perception has been revisited in light of mirror-neuron research, and of the challenges that analogy faces.
