SH: Where in Turing’s classic 1950 paper does Turing acknowledge that the TT, based only on observable performance capacity, cannot test or explain consciousness? (I need the exact quote of the passage).
GPT: Here is the exact passage from Turing's 1950 paper "Computing Machinery and Intelligence" in which he acknowledges that the Turing Test, based solely on observable performance, does not address or explain consciousness:
"I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper."
This occurs in the section titled "The Argument from Consciousness," where Turing responds to Geoffrey Jefferson's objection that machines, lacking subjective experience, cannot truly think. Turing explicitly grants that there is a "mystery about consciousness," yet he maintains that solving that mystery is orthogonal to the question of whether machines can exhibit behavior indistinguishable from that of humans.
SH: What Turing explicitly, forthrightly and unapologetically concedes here is that the empirical program for reverse-engineering cognitive performance capacity (i.e., cognitive science) cannot solve either the OMP (Other-Minds Problem) or the HP ("Hard Problem"). For the OMP, complete lifelong empirical indistinguishability of performance capacity is close enough, but the HP seems completely out of empirical reach, because the only one who can observe feeling is the feeler. This is not the OMP but a problem of explaining the causal role of consciousness. Without explaining that, you have only reverse-engineered the correlates of consciousness.
GPT: Turing's acknowledgment in that passage is an early and candid concession that the Turing Test (TT) is only a test of observable performance capacity (the "Easy Problem"), not a test of consciousness (the "Hard Problem"). Turing's programme is explicitly empirical: reverse-engineering the capacities underlying intelligent behavior. But he recognizes that this programme cannot, even in principle, penetrate the barrier posed by consciousness (i.e., feeling).
You point out the necessary distinction between:
The Other-Minds Problem (OMP), an empirical limitation: no amount of behavioral or physiological data from another being can prove that that being feels. Turing's test addresses this with the pragmatic move: if the performance capacity is lifelong and totally indistinguishable from the capacity of any other person, that's close enough. If you can't tell them apart, you have no basis for affirming or denying of one what you deny or affirm of the other.
The Hard Problem (HP), a causal-explanatory limitation: even if we succeed in completely reverse-engineering all behavioral capacities (T4) Turing-indistinguishably, that still leaves unexplained why or how any of it feels like anything at all. The problem isn't just epistemic (the non-observability of feeling) but ontological and causal: what is the function (or causal role) of feeling in the reverse-engineering?
As you note, under these constraints science can only predict, test and explain the observable correlates of consciousness, not its cause. Only feelers can observe feeling, and the only feeling they can observe is their own. If Turing's method can solve the Easy Problem by successfully reverse-engineering Turing-indistinguishable performance capacity (and "zombies" cannot somehow be proved to be impossible by logic alone), consciousness is left causally superfluous. Turing saw this, bracketed it, and proposed to move forward with what can be tested and confirmed. That is why the TT remains relevant, but also why it cannot close the explanatory gap.
Here is a contextualization of Turing's position, particularly the passage on consciousness, with respect to your T2-T4 distinctions and the wider discourse:
Turing's empirical proposal is squarely a T2 test: verbal performance capacity totally indistinguishable from that of a normal human. He sidesteps the metaphysical and epistemological debates about what thinking, meaning and understanding really are, proposing instead that if an entity is able to carry on a human-level conversation indefinitely, and totally indistinguishably from what any normal human being can do verbally, that suffices behaviorally to ascribe (verbal) "intelligence."
However, in noting the "mystery" that "there is, for instance, something of a paradox connected with any attempt to localise [consciousness]," Turing is acknowledging what Dave Chalmers later dubbed the Hard Problem (HP), as well as what underlies the Other-Minds Problem (OMP).
The OMP arises at T2 [see SH's reply below for the definition of T2-T4]: if a system passes the verbal TT, we still cannot know whether it feels. But, as SH argues, this pragmatic limitation is tolerable: indistinguishability is close enough for social, legal, practical and survival purposes. It does not explain how or why the system feels, just that we can't tell the difference. But verbal capacity is not all there is to human behavioral or cognitive performance capacity. Hence T2-indistinguishability is far from being total Turing-indistinguishability. People can do much more than just talk, and their verbal capacity is inextricably intertwined with and dependent on the rest. The HP arises at T3 and T4:
Suppose cognitive science successfully reverse-engineers a robot (T3), or even a neurologically and biochemically indistinguishable system (T4), that performs all sensorimotor and verbal behaviors indistinguishably from a human. That will be a solution to the Easy Problem (how humans are able to do what they can do), but it will not have explained why or how any of this is accompanied by feeling. The HP is not just "Why can't we detect feeling in others?" (OMP) but "Why would there be feeling at all, if it plays no causal role in solving the EP?" Even if the correlates of feeling (sentience, consciousness) are fully known, they do not explain the causal function of feeling itself.
Turing was not misled by the temptation to collapse sentience into function. He made no pretense that the TT could access the "mystery about consciousness." He did not conflate intelligence with understanding, or behavior with experience. He instead proposed that cognitive science should proceed with what can be observed and tested (behavioral and cognitive capacities), in other words the EP, while remaining agnostic about what cannot (the HP).
Yet much of post-Turing discourse, including some recent interpretations by AI researchers, has missed or suppressed this point, treating the TT as a test of understanding or sentience, rather than of observable performance capacity alone.
Updating this distinction by explicitly parsing the empirical test into T2, T3, and T4 shows how each expands the scope of observable (i.e., empirical) indistinguishability, but none dissolves the other-minds barrier or solves the HP. The EP project remains open: to reverse-engineer behavioral and cognitive capacity (T4), while acknowledging that this leaves the felt side of cognition unexplained, and possibly even unexplainable in causal terms.
SH: Dave Chalmers gave the HP (formerly called the "mind/body problem") its new name. Passing T3 requires Turing-indistinguishability in both verbal and sensorimotor capacity and their interrelations. Passing T4 requires Turing-indistinguishability in verbal and sensorimotor capacity, but also in their observable brain-functional correlates: physiological, biochemical and biophysical. Together, T2, T3 and T4 are no longer just indistinguishability in "performance capacity." They amount to total empirical indistinguishability. Turing recognized and understood all that. The full gamut of Turing Test(s) is simply empiricism. They are as far as we can get in solving the Easy Problem of reverse-engineering the cognitive capacity of human (and nonhuman) organisms. The Hard Problem of explaining consciousness causally is an empirical problem too, but not just a problem of reverse-engineering performance capacity.
Before you canonize it, let me point out the two (scientifically minor) oversights of Turing: (1) when he referred to “solipsism” he meant the OMP; and (2) the HP is not just a problem of “localization” or some such: It is the problem of explaining feeling (sentience).
Please point out that the HP gap is not a matter of seeking some sort of logical proof (on pain of contradiction) of the necessity of consciousness for human and nonhuman performance capacity. Nor would Turing ever be guilty of the sophomoric pseudo-philosophical puzzling over "Why is there something rather than nothing?", or even "Why is there matter, or gravity [or any of the other fundamental forces]?" Each sentient knows with Cartesian certainty that (and what) it feels. Sentience (consciousness) exists. That's not the HP. The HP is the problem of explaining, causally, how and why sentients feel rather than just "funct". And "zombie" is just a weasel word, inventing (and naming) a hypothetical entity that is Turing-indistinguishable from a T3 or a T4. That is just idle thought-metaphysics, while there exists a real empirical problem, unsolved. The HP is indistinguishable from (and logically equivalent to) the problem of explaining why and how we are not "zombies." Well, duh!
The Turing Programme is the programme of "reverse-engineering" cognition; it is concerned with reverse-engineering cognitive performance capacity, not with modelling a particular person's lifelong performance. Reverse-engineering is not philosophy but empirical science. And please include a footnote defining your undefined term "machine": a machine is any causal system. No need to get into the metaphysics of causality or autonomy.
This is a test (though not a TT!) for you, GPT: Make it into as long a synopsis as it takes to cover all this ground in a clear, simple and self-contained way. If it does the trick, I’ll use it in this year’s edition (q.v.) of https://www.blogger.com/blog/posts/4995014831077048734
GPT: Here is a long-form, self-contained synopsis that integrates and clarifies the distinction between the Turing Test (TT), the Other-Minds Problem (OMP), and the Hard Problem (HP), grounded in your framing:
Alan Turing, the Turing Test, and the Hard Problem
In his 1950 paper "Computing Machinery and Intelligence", Alan Turing proposed a bold and pragmatic approach to the age-old question "Can machines think?" Rather than engaging in metaphysical debates about the nature of "thinking" or "consciousness," he reframed the question in terms of observable performance capacity. Suppose a machine¹ can be successfully reverse-engineered that has the capacity to carry on a back-and-forth verbal conversation, indefinitely long, with any normal human, completely indistinguishably, to any normal human, from any normal human. (This is what we now refer to as passing the Turing Test, or T2.) Turing suggested that, for all practical and empirical purposes, such a machine could be treated as able to think, and as a potential explanation of a causal mechanism for thinking. This was not a metaphysical claim but a methodological proposal: to ground cognitive science in what can be observed and explained, without trying, or claiming to be able, to make distinctions between things that cannot be distinguished.
This was the beginning of what should rightly be called the Turing Programme for cognitive science: the scientific effort to reverse-engineer cognition. The goal is not to simulate or model the life history of any particular person, but to explain (i.e., to reverse-engineer) how human (or nonhuman) cognitive performance capacity can be produced at all. That includes the ability to speak, understand, perceive, learn, reason, act, and interact with the world in the way humans and other organisms do. This is a programme in empirical science, not philosophy.
Turing's approach was entirely grounded in empiricism. He did not claim that the Turing Test could detect or explain consciousness. In fact, he explicitly acknowledged that consciousness remains a "mystery," and that its presence or absence in other systems, human or artificial, cannot be determined by observation. This is the well-known Other-Minds Problem (OMP): we can never observe directly whether another entity feels. No matter how complete our data on another person's behavior, physiology, or even biochemistry, we cannot observe or measure whether they feel. That is a constraint of empiricism, not a shortcoming of any specific method. Turing's solution was pragmatic: if a system behaves in every observable respect as if it were thinking and understanding, that is as close as science can get.
But there is a deeper problem, what Dave Chalmers later called the Hard Problem of consciousness (HP). Unlike the OMP, the HP is not a problem of detecting feeling in others; it is the problem of causally explaining (i.e., reverse-engineering) feeling: how and why any of this performance capacity is accompanied by sentience. Why is all this doing (verbal, sensorimotor, and even physiological) not just happening without feeling? Why does it feel like something to see, think, or act?
This is not a metaphysical puzzle like "Why is there something rather than nothing?", a question Turing would have rightly dismissed as idle. Nor is it a logical paradox or an ontological speculation. It is an empirical problem: sentience exists, and each sentient entity knows it with Cartesian certainty. That's not the problem. The problem is that science has no explanation of how and why feeling occurs, or of its causal role in the mechanisms that produce the capacity to do all the things that thinking organisms (but especially humans) can do.
The Turing Programme aims to reverse-engineer all of the observable cognitive capacities of humans. These capacities can be modeled and tested at increasing levels of empirical completeness:
T2: Verbal capacity. Can the system converse indistinguishably from a human?
T3: Verbal and sensorimotor capacity. Can the system not only talk but also act and interact with the world, verbally and nonverbally, indistinguishably from a human, to a human (including the verbal-sensorimotor interrelations and interactions of "words and the world": e.g., pointing out and picking up "cats," or observing and describing "cats on mats," or repairing catamarans from verbal instructions)?
T4: All of T3, plus internal indistinguishability. The system's observable neurophysiological, biochemical, and biophysical structure and dynamics are indistinguishable from those of a real human.
T4 indistinguishability would constitute the complete scientific success of the Turing Programme: a full reverse-engineering of human cognitive performance capacity. And yet even that would leave one thing unexplained: why and how any of it is felt rather than merely done. The Easy Problem (EP), reverse-engineering and explaining the mechanism that produces the capacity to do what humans can do, could be solved completely, yet we would still not have explained how or why any of it is accompanied by feeling. That is the Hard Problem.
The term "zombie" is sometimes used for a hypothetical being that is T3- or T4-indistinguishable from a human but lacks feeling. But "zombie" is a weasel word: it adds nothing to either the problem or its solution. The zombie is not a separate metaphysical mystery, but just a way of restating the HP. The question is not "Are zombies possible?" but "How and why does sentience exist at all?" Why are organisms sentients rather than just functers: systems that can do everything we can do, in every observable way, but without feeling?
Turing anticipated all of this. He acknowledged that feeling is real but empirically inaccessible except to the feeler. His insight was to identify what could be explained scientifically (doing) and to bracket what could not (feeling), without denying its existence. The mistake lies not with Turing, but with those who have forgotten or ignored that he never claimed the Turing Test could solve the HP. His was and is a fully empirical programme, and the HP remains fully within the domain of empirical science, but still completely unsolved, and perhaps unsolvable in the way other scientific problems are solved.
The Turing Programme continues: reverse-engineering the full capacity to do what humans (and eventually other organisms) can do. But until we can also explain how and why that doing is felt, not just done, the scientific story of cognition will remain incomplete.
¹ "Machine" refers to any causal system. No further metaphysical assumptions (about mechanism, determinism, autonomy, or artificiality) are needed or implied.