SH: Where in Turing’s classic 1950 paper does Turing acknowledge that the TT, based only on observable capacity, cannot test or explain consciousness? (I need the exact quote of the passage).
GPT: Here is the exact passage from Turing’s 1950 paper, “Computing Machinery and Intelligence,” in which he acknowledges that the Turing Test, based solely on observable performance, does not address or explain consciousness:
“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”
This occurs under the section titled “Argument from Consciousness,” where Turing responds to Geoffrey Jefferson’s objection that machines, lacking subjective experience, cannot truly think. Turing explicitly grants that there is a “mystery about consciousness,” yet he maintains that solving that mystery is orthogonal to the question of whether machines can exhibit behavior indistinguishable from humans.
SH: What Turing explicitly and forthrightly and unapologetically concedes here is that the empirical program for reverse-engineering cognitive performance capacity (i.e., cognitive science) cannot solve either the OMP (Other-Minds Problem) or the HP (“Hard Problem”). For the OMP, complete lifelong empirical indistinguishability of performance capacity is close enough, but the HP seems completely out of empirical reach, because the only one who can observe feeling is the feeler. This is not the OMP but a problem of explaining the causal role of consciousness. Without that, you have only reverse-engineered the correlates of consciousness.
GPT: Turing’s acknowledgment in that passage is an early and candid concession that the Turing Test (TT) is a test of observable performance, not of consciousness. His program is explicitly empirical (reverse-engineering the capacities underlying intelligent behavior), but he recognizes that this program cannot, even in principle, penetrate the barrier posed by subjective feeling.
You point out the necessary distinction between:
The Other-Minds Problem (OMP), an epistemic limitation: no amount of behavioral or physiological data from another being can prove that they feel. Turing’s test addresses this with the pragmatic move: if the performance is lifelong and indistinguishable, that’s close enough.
The Hard Problem (HP), a causal-explanatory limitation: even if we succeed in completely reverse-engineering all behavioral capacities (T4) Turing-indistinguishably, that still leaves unexplained why or how any of it feels like anything. The problem isn’t just epistemic (access to feeling) but ontological and causal: what is the function (or causal necessity) of feeling in the causal chain?
As you note, science under these constraints can only identify the correlates of consciousness, never its cause, because only the feeler has direct access to feeling, and feeling has no evident, indispensable causal role in behavior (lest zombies somehow be ruled out by logic alone). Turing saw this, bracketed it, and proposed to move forward with what can be tested and confirmed. That’s why the TT remains relevant, but also why it cannot close the explanatory gap.
Here is a contextualization of Turing’s position, particularly the passage on consciousness, with respect to your T2-T4 distinctions and the wider discourse:
Turing’s empirical proposal is squarely a T2 test: verbal performance capacity indistinguishable from that of a normal human. He sidesteps the metaphysical and epistemological debates about what thinking, meaning and understanding really are, proposing instead that if something can carry on a human-level conversation indefinitely and indistinguishably, that suffices behaviorally to ascribe intelligence.
However, in conceding that “there is, for instance, something of a paradox connected with any attempt to localise [consciousness],” he is explicitly acknowledging what you [incorrect attribution: see SH reply below] later formalized as the Hard Problem (HP) and what also underlies the Other-Minds Problem (OMP).
OMP arises at T2 [see SH reply below for definition of T2-T4]: If a system passes the verbal TT, we still cannot know whether it feels. But as you argue, this pragmatic limitation is tolerable: indistinguishability is close enough for social, legal, practical and survival purposes. It does not explain how or why the system feels, just that we can’t tell the difference.
HP arises at T3 and T4: Suppose we build a robot (T3) or a neurologically and biochemically indistinguishable system (T4) that performs all sensorimotor and verbal behaviors indistinguishably from a human. We will have solved the Easy Problem (how humans are able to do what they can do), but we will not have explained why or how any of this is accompanied by feeling. The HP is not just “Why can’t we detect feeling in others?” (OMP) but “Why should feeling exist at all, if it plays no causal role?” Even if the correlates of feeling (sentience, consciousness) are fully known, they do not explain the causal function of feeling.
Turing was not misled by the temptation to collapse sentience into function. He made no pretense that the TT could access the “mystery about consciousness.” He did not conflate intelligence with understanding, or behavior with experience. He instead proposed that cognitive science should proceed with what can be observed and tested (behavioral and cognitive capacities), while remaining agnostic about what cannot (feeling).
Yet much of post-Turing discourse, including some recent interpretations by AI researchers, has missed or suppressed this point, treating the TT as a test of understanding or sentience, rather than of performance capacity alone.
You have clarified, and in fact strengthened, this distinction by explicitly parsing the empirical test into T2, T3, and T4, and showing how each expands the scope of observable (i.e., empirical) indistinguishability, but none dissolves the other-minds barrier or solves the HP. The project remains open: to reverse-engineer behavioral and cognitive capacity (T4), while acknowledging that this leaves the felt side of cognition unexplained and, perhaps, unexplainable in causal terms.
SH: I did not give the HP (aka the “mind/body problem”) its name; Dave Chalmers did. And T3 (indistinguishability in both verbal and robotic capacity and their interrelations [“word and world”]), as well as T4 (indistinguishability not only in verbal and robotic capacity but also in their observable brain-functional correlates: physiological, biochemical and biophysical), are no longer just indistinguishability in “performance capacity” but total empirical indistinguishability. Turing understood and acknowledged all that. The Turing Test(s) are empiricism, the full gamut. The HP is an empirical problem, not just a problem of cognitive reverse-engineering.
Before you canonize it, let me point out the two (scientifically minor) oversights of Turing: (1) when he referred to “solipsism” he meant the OMP; and (2) the HP is not just a problem of “localization” or some such: It is the problem of feeling (sentience).
Point out that the HP gap is not a matter of logical necessity (on pain of contradiction). Nor would Turing ever be guilty of the sophomoric pseudo-philosophical pondering of “why is there something rather than nothing?”, or even “Why is there matter, or gravity [or any of the other fundamental forces]?” Each sentient knows with Cartesian certainty that (and what) it feels. Sentience exists. That’s not the HP. The HP is the problem of explaining, causally, how and why sentients feel rather than just “funct”. And “zombie” is just a weasel word, inventing a hypothetical entity that is Turing-indistinguishable from a T3 or a T4: idle notional metaphysics when there exists a real empirical problem, unsolved. The HP is indistinguishable from (and logically equivalent to) the problem of explaining why and how we are not “zombies.” Well duh!
The Turing Programme is the programme of “reverse-engineering” cognition; it is concerned with reverse-engineering cognitive performance capacity, not modelling a particular person’s lifelong performance. Reverse-engineering is not philosophy but empirical science. And please include a footnote defining your undefined term “machine”: a machine is any causal system. No need to get into the metaphysics of causality or autonomy.
This is a test (though not a TT!) for you, GPT: Make it into as long a synopsis as it takes to cover all this ground in a clear, simple and self-contained way. If it does the trick, I’ll use it in this year’s edition (q.v.) of https://www.blogger.com/blog/posts/4995014831077048734
GPT: Here is a long-form, self-contained synopsis that integrates and clarifies the distinction between the Turing Test (TT), the Other-Minds Problem (OMP), and the Hard Problem (HP), grounded in your framing:
Alan Turing, the Turing Test, and the Hard Problem
In his 1950 paper “Computing Machinery and Intelligence”, Alan Turing proposed a bold and pragmatic approach to the age-old question, “Can machines think?” Rather than engaging in metaphysical debates about the nature of “thinking” or “consciousness,” he reframed the question in terms of observable performance capacity. If a machine [*1] could carry on a verbal conversation completely indistinguishably, from any normal human, to any normal human (what we now call the Turing Test, or T2), then, Turing suggested, for all empirical purposes, the machine should be treated as thinking. This was not a metaphysical claim, but a methodological proposal to ground cognitive science in what can be observed and explained, and not endeavouring or purporting to make distinctions between indistinguishables.
This was the beginning of what should rightly be called the Turing Programme: the scientific effort to reverse-engineer cognition. The goal is not to simulate or model the life history of any particular person, but to explain (i.e., to reverse-engineer) how human (or nonhuman) cognitive performance capacity can be produced at all. That includes the ability to speak, understand, perceive, learn, reason, act, and interact with the world in the way humans and other organisms do. This is a program in empirical science, not philosophy.
Turing’s approach was entirely grounded in empiricism. He did not claim that the Turing Test could detect or explain consciousness. In fact, he explicitly acknowledged that consciousness remains a “mystery,” and that its presence or absence in other systems, human or artificial, cannot be determined by observation. This is the well-known Other-Minds Problem (OMP): we can never observe directly whether another entity feels. No matter how complete our data on another person’s behavior, physiology, or even biochemistry, we cannot see or measure whether they feel. That is an epistemic limitation, not a shortcoming of any specific method. Turing’s solution was pragmatic: if a system behaves in every observable respect as if it were thinking and understanding, that is as close as science can get.
But there is a deeper problem: what David Chalmers later called the Hard Problem of consciousness (HP). Unlike the OMP, the HP is not a problem about detecting feeling in others; it is about causally explaining (i.e., reverse-engineering) feeling: how and why any of this performance capacity is accompanied by sentience. Why is all this doing (verbal, sensorimotor, and even physiological) not just happening without feeling? Why does it feel like something to see, think, or act?
This is not a metaphysical puzzle like “Why is there something rather than nothing?”, a question Turing would have rightly dismissed as idle. Nor is it a logical paradox or an ontological speculation. It is an empirical problem: sentience exists, and each sentient knows it with Cartesian certainty. That’s not the problem. The problem is that science has no explanation for how and why feeling occurs, or for what its causal role is in the mechanisms responsible for doing.
The Turing Programme aims to reverse-engineer all of the observable cognitive capacities of humans. These capacities can be modeled and tested at increasing levels of empirical completeness:
T2: Verbal capacity. Can the system converse indistinguishably from a human?
T3: Verbal and sensorimotor capacity. Can the system not only talk but also act and interact with the world, verbally and nonverbally, indistinguishably from a human, to a human (including the verbal-sensorimotor interactions of “word and world”)?
T4: All of T3, plus internal indistinguishability. The system’s observable neurophysiological, biochemical, and biophysical structure and dynamics are indistinguishable from those of a real human.
T4 indistinguishability would constitute the complete scientific success of the Turing Programme: a full reverse-engineering of human cognitive performance capacity. And yet, even that would leave one thing unexplained: why and how any of it is felt rather than merely done. The Easy Problem (explaining doing) could be solved completely, yet we would still not have explained how or why any of it is accompanied by feeling. That is the Hard Problem.
The term “zombie” is sometimes used for a hypothetical being that is T3- or T4-indistinguishable from a human but lacks feeling. But “zombie” is a weasel-word: it adds nothing to the problem or its solution. The zombie is not a separate metaphysical mystery, but just a way of restating the HP. The question is not “Are zombies possible?” but “How and why does sentience exist at all?” Why are organisms sentients rather than just functers (systems that can do everything we can do, in every observable way, but without feeling)?
Turing anticipated all of this. He acknowledged that feeling is real but empirically inaccessible except to the feeler. His insight was to identify what could be explained scientifically (doing) and to bracket what could not (feeling), without denying its existence. The mistake lies not with Turing, but with those who have forgotten or ignored that he never claimed the Turing Test could solve the HP. His was and is a fully empirical program, and the HP remains fully within the domain of empirical science, but still entirely unsolved, and perhaps unsolvable in the way other scientific problems are solved.
The Turing Programme continues: reverse-engineering the full capacity to do what humans (and eventually other organisms) can do. But until we can also explain how and why that doing is felt, not just performed, the scientific story of cognition will remain incomplete.
[*1] “Machine” refers to any causal system. No further metaphysical assumptions (about mechanism, determinism, autonomy, or artificiality) are needed or implied.