{"id":2477,"date":"2025-07-11T11:37:01","date_gmt":"2025-07-11T15:37:01","guid":{"rendered":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/?p=2477"},"modified":"2025-07-13T09:02:00","modified_gmt":"2025-07-13T13:02:00","slug":"to-come","status":"publish","type":"post","link":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2025\/07\/11\/to-come\/","title":{"rendered":"The Turing Test at 75 (with ChatGPT)"},"content":{"rendered":"\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:<\/strong>&nbsp;Where in&nbsp;<a href=\"https:\/\/ebiquity.umbc.edu\/_file_directory_\/papers\/1389.pdf\"><strong>Turing&#8217;s classic 1950 paper<\/strong><\/a>&nbsp;does Turing acknowledge that the TT, based only on observable performance capacity, cannot test or explain consciousness? (I need the exact quote of the passage).<\/p>\n\n\n\n<p><strong>GPT:<\/strong>&nbsp;Here is the exact passage from Turing\u2019s 1950 paper&nbsp;<strong>\u201cComputing Machinery and Intelligence\u201d<\/strong>, in which he acknowledges that the Turing Test\u2014based solely on observable performance\u2014does not address or explain consciousness:<\/p>\n\n\n\n<p class=\"has-text-align-right has-small-font-size\">\u201c<em>I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.<\/em>\u201d<\/p>\n\n\n\n<p>This occurs under the section titled \u201cArgument from Consciousness,\u201d where Turing responds to Geoffrey Jefferson\u2019s objection that machines, lacking subjective experience, cannot truly think. 
Turing explicitly grants that there is a \u201cmystery about consciousness,\u201d yet he maintains that solving that mystery is orthogonal to the question of whether machines can exhibit behavior indistinguishable from humans.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:<\/strong>&nbsp;What Turing explicitly and forthrightly and unapologetically concedes here is that the empirical program for reverse-engineering cognitive performance capacity (i.e., cognitive science) cannot solve either the OMP (Other Minds Problem) or the HP (\u201cHard Problem\u201d). For the OMP, complete lifelong empirical indistinguishability of performance capacity is close enough, but the HP seems completely out of&nbsp;empirical reach because the only one who can observe feeling is the feeler. This is not the OMP but a problem with explaining the&nbsp;<strong>causal role<\/strong>&nbsp;of consciousness. Without explaining that, you have only reverse-engineered the&nbsp;<strong>correlates<\/strong> of consciousness.<\/p>\n\n\n\n<p><strong>GPT:\u00a0<\/strong>Turing\u2019s acknowledgment in that passage is an early and candid concession that the Turing Test (TT) is only a\u00a0<strong>test of observable performance capacity<\/strong> (the &#8220;Easy Problem&#8221;), not a test of\u00a0<strong>consciousness<\/strong> (the &#8220;Hard Problem&#8221;). Turing&#8217;s programme is explicitly empirical\u2014reverse-engineering the\u00a0<strong>capacities<\/strong>\u00a0underlying intelligent behavior\u2014but he recognizes that this programme cannot, even in principle, penetrate the barrier posed by <strong>consciousness<\/strong> (i.e., feeling).<\/p>\n\n\n\n<p>You point out the necessary distinction between:<\/p>\n\n\n\n<p><strong>The Other-Minds Problem (OMP)<\/strong>\u00a0\u2014 an empirical limitation: no amount of behavioral or physiological data from another being can prove that that being feels. 
Turing\u2019s test addresses this with the pragmatic move: if the performance capacity is lifelong and totally indistinguishable from the capacity of any other person, that\u2019s close enough. If you can&#8217;t tell them apart, you have no basis for affirming or denying of one what you deny or affirm of the other.<\/p>\n\n\n\n<p><strong>The Hard Problem (HP)<\/strong>\u00a0\u2014 a causal-explanatory limitation: even if we succeed in completely reverse-engineering\u00a0<strong>all<\/strong>\u00a0behavioral capacities (T4) Turing-indistinguishably, that still leaves unexplained\u00a0<em>why or how<\/em> any of it <em>feels like anything at all<\/em>. The problem isn\u2019t just epistemic (the non-observability of feeling) but\u00a0<strong>ontological<\/strong>\u00a0and\u00a0<strong>causal<\/strong>: what is the\u00a0<strong>function<\/strong>\u00a0(or causal role) of feeling in the reverse-engineering?<\/p>\n\n\n\n<p>As you note, under these constraints science can only predict, test and explain the observable\u00a0<strong>correlates <\/strong>of consciousness\u2014not its\u00a0<strong>cause<\/strong>. Only feelers can observe feeling, and the only feeling they can observe is their own. If Turing&#8217;s method can solve the Easy Problem by successfully reverse-engineering Turing-indistinguishable performance capacity (and &#8220;zombies&#8221; cannot somehow be proved to be impossible by logic alone), consciousness is left causally superfluous. Turing saw this, bracketed it, and proposed to move forward with what\u00a0<em>can<\/em>\u00a0be tested and confirmed. 
That\u2019s why the TT remains relevant\u2014but also why it cannot close the explanatory gap.<\/p>\n\n\n\n<p>Here is a contextualization of Turing\u2019s position\u2014particularly the passage on consciousness\u2014with respect to your T2\u2013T4 distinctions and the wider discourse:<\/p>\n\n\n\n<p>Turing\u2019s empirical proposal is squarely a\u00a0<strong>T2<\/strong>\u00a0test: verbal performance capacity totally indistinguishable from that of a normal human. He sidesteps the metaphysical and epistemological debates about\u00a0<strong>what thinking, meaning and understanding really are<\/strong>, proposing instead that if an entity is able to carry on a human-level conversation indefinitely, and totally indistinguishably from what any normal human being can do verbally, that suffices\u00a0<em>behaviorally<\/em>\u00a0to ascribe (verbal) &#8220;intelligence.&#8221;<\/p>\n\n\n\n<p>However, in noting the &#8220;mystery&#8221; that \u201cthere is, for instance, something of a paradox connected with any attempt to localise [consciousness],\u201d Turing is  acknowledging what <a href=\"https:\/\/web.archive.org\/web\/20060509123618id_\/http:\/consc.net:80\/papers\/facing.pdf\"><strong>Dave Chalmers<\/strong><\/a> later dubbed the\u00a0<strong>Hard Problem (HP)<\/strong>\u00a0as well as what  underlies the\u00a0<strong>Other-Minds Problem (OMP)<\/strong>.<\/p>\n\n\n\n<p><strong>OMP<\/strong>\u00a0arises at T2<em>\u00a0[see SH&#8217;s reply below for definition of T2-T4]<\/em>: If a system passes the verbal TT, we still cannot know whether it feels. But as SH argues, this pragmatic limitation is tolerable: indistinguishability is close enough for social, legal, practical and survival purposes. It does not explain\u00a0<em>how<\/em>\u00a0or\u00a0<em>why<\/em>\u00a0the system feels\u2014just that we can\u2019t tell the difference. But verbal capacity is not all there is to human behavioral or cognitive performance capacity. 
Hence T2-indistinguishability is far from being total <strong>Turing-indistinguishability<\/strong>. People can do much more than just talk, and their verbal capacity is inextricably intertwined with and dependent on the rest: The <strong>HP<\/strong>\u00a0arises at <strong>T3<\/strong> and <strong>T4<\/strong>.<\/p>\n\n\n\n<p>Suppose cognitive science successfully reverse-engineers a robot (T3) or even a neurologically and biochemically indistinguishable system (T4) that performs all sensorimotor and verbal behaviors indistinguishably from a human. That will be a solution to the\u00a0<strong>Easy Problem<\/strong>\u2014how humans are able to\u00a0<em>do<\/em>\u00a0what they can do\u2014but it will not have explained\u00a0<em>why<\/em>\u00a0or\u00a0<em>how<\/em>\u00a0any of this is accompanied by\u00a0<strong>feeling<\/strong>. The HP is not just \u201cWhy can\u2019t we detect feeling in others?\u201d (OMP) but \u201cWhy would there be feeling at all, if it plays no causal role in solving the EP?\u201d Even if the correlates of feeling (sentience, consciousness) are fully known, they do not explain the causal function of feeling itself.<\/p>\n\n\n\n<p>Turing was not misled by the temptation to collapse sentience into function. He made no pretense that the TT could access the \u201cmystery about consciousness.\u201d He did not conflate intelligence with understanding, or behavior with experience. 
He instead proposed that cognitive science should proceed with what\u00a0<em>can<\/em>\u00a0be observed and tested (behavioral and cognitive capacities)\u2014in other words, the EP\u2014while remaining agnostic about what cannot (the HP).<\/p>\n\n\n\n<p>Yet much of post-Turing discourse\u2014including some recent interpretations by AI researchers\u2014has missed or suppressed this point, treating the TT as a test of understanding or sentience, rather than of observable\u00a0<strong>performance capacity alone<\/strong>.<\/p>\n\n\n\n<p>Updating this distinction by explicitly\u00a0<a href=\"https:\/\/eprints.soton.ac.uk\/262954\/1\/turing.html\"><strong>parsing the empirical test<\/strong><\/a>\u00a0into\u00a0<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-94-010-0105-2_14\"><strong>T2, T3, and T4<\/strong><\/a> shows how each expands the scope of observable (i.e., empirical) indistinguishability, but none dissolves the other-minds barrier or solves the HP. The EP project remains open: to reverse-engineer behavioral and cognitive capacity (T4), while acknowledging that this leaves the felt side of cognition unexplained\u2014and possibly even unexplainable in causal terms.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\"><strong>SH:<\/strong>\u00a0\u00a0<a href=\"https:\/\/web.archive.org\/web\/20060509123618id_\/http:\/consc.net:80\/papers\/facing.pdf\"><strong>Dave Chalmers<\/strong><\/a> gave the <strong>HP<\/strong> (formerly called the \u201cmind\/body problem&#8221;) its new name. Passing\u00a0<strong>T3<\/strong> requires Turing-indistinguishability in both verbal  and sensorimotor capacity and their interrelations. Passing\u00a0<strong>T4<\/strong> requires Turing-indistinguishability in verbal and sensorimotor capacity, but also their observable brain functional correlates\u2014physiological, biochemical and biophysical. 
Together, T2, T3  and T4 are no longer just indistinguishability in &#8220;performance capacity.&#8221; They amount to\u00a0<strong>total empirical indistinguishability<\/strong>. Turing recognized and understood all that. The full gamut of Turing Test(s) is simply empiricism. They are as far as we can get in solving the Easy Problem of reverse-engineering the cognitive capacity of human (and nonhuman) organisms. The Hard Problem of explaining consciousness causally is an empirical problem too\u2014but not just a problem of reverse-engineering of performance capacity.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\">Before you canonize it, let me point out the two (scientifically minor) oversights of Turing: (1) when he referred to &#8220;solipsism&#8221; he meant the OMP; and (2) the HP is not just a problem of &#8220;localization&#8221; or some such: It is the problem of explaining feeling (sentience).<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\">Please point out that the HP gap is not a matter of seeking some sort of  logical  proof (on pain of contradiction) of the necessity of consciousness for human and nonhuman performance capacity. Nor would Turing ever be guilty of the sophomoric pseudo-philosophical puzzling over &#8220;why is there something rather than nothing?&#8221;, or even &#8220;Why is there matter, or gravity [or any of the other fundamental forces]?&#8221; Each sentient knows with Cartesian certainty that (and what) it feels. Sentience (consciousness) exists. That&#8217;s not the HP. The HP is the problem of explaining, causally, how and why sentients feel rather than just \u201cfunct\u201d.  And &#8220;zombie&#8221; is just a weasel word, inventing (and naming) a hypothetical entity that is Turing-Indistinguishable from a T3 or a T4. That&#8217;s just idle thought-metaphysics\u2014while there exists a real empirical problem, unsolved. 
The HP is indistinguishable from (and logically equivalent to) the problem of\u00a0<strong>explaining\u00a0<\/strong><a href=\"https:\/\/core.ac.uk\/download\/pdf\/1499956.pdf\"><strong>why and how we are not &#8220;zombies.<\/strong><\/a>&#8221; Well duh!<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\">The Turing Programme is the programme of &#8220;reverse-engineering&#8221; cognition; it is concerned with reverse-engineering cognitive performance&nbsp;<strong>capacity<\/strong>, not modelling a particular person&#8217;s lifelong&nbsp;<strong>performance<\/strong>. Reverse-engineering is not philosophy but empirical science. And please include a footnote defining your undefined term &#8220;machine&#8221;: a machine is any causal system. No need to get into the metaphysics of causality or autonomy.<\/p>\n\n\n\n<p class=\"has-cyan-bluish-gray-background-color has-background\">This is a test (though not a TT!) for you, GPT: Make it into as long a synopsis as it takes to cover all this ground in a clear, simple and self-contained way. If it does the trick, I&#8217;ll use it in this year&#8217;s edition (q.v.) 
of&nbsp;<a href=\"https:\/\/www.blogger.com\/blog\/posts\/4995014831077048734\"><strong>https:\/\/www.blogger.com\/blog\/posts\/4995014831077048734<\/strong><\/a><strong><\/strong><\/p>\n\n\n\n<p class=\"has-white-background-color has-background\"><strong>GPT:<\/strong>&nbsp;Here is a long-form, self-contained synopsis that integrates and clarifies the distinction between the Turing Test (TT), the Other-Minds Problem (OMP), and the Hard Problem (HP), grounded in your framing:<\/p>\n\n\n\n<p class=\"has-text-align-center has-medium-font-size\"><strong>Alan Turing, the Turing Test, and the Hard Problem<\/strong><\/p>\n\n\n\n<p>In his 1950 paper\u00a0<em>\u201cComputing Machinery and Intelligence\u201d<\/em>, Alan Turing proposed a bold and pragmatic approach to the age-old question, \u201cCan machines think?\u201d Rather than engaging in metaphysical debates about the nature of \u201cthinking\u201d or \u201cconsciousness,\u201d he reframed the question in terms of\u00a0<strong>observable performance capacity<\/strong>. Suppose a machine<strong><sup data-fn=\"74c0a04b-0860-46c7-a5aa-683623e5fed7\" class=\"fn\"><a href=\"#74c0a04b-0860-46c7-a5aa-683623e5fed7\" id=\"74c0a04b-0860-46c7-a5aa-683623e5fed7-link\">1<\/a><\/sup><\/strong>\u202f can be successfully reverse-engineered that has the capacity to carry on a back and forth verbal conversation, indefinitely long, <strong>with<\/strong>\u00a0any normal human, completely indistinguishably, <strong>to<\/strong>\u00a0any normal human, <strong>from<\/strong>\u00a0any normal human. (This is what we now refer to as passing the\u00a0<strong>Turing Test<\/strong>\u00a0or\u00a0<strong>T2<\/strong>). Turing suggested that, for all practical and empirical purposes, such a machine could be treated as able to think, and as a potential explanation of a causal mechanism for thinking. 
This was not a metaphysical claim, but a methodological proposal to ground cognitive science in what can be\u00a0<strong>observed and explained<\/strong>\u2014without trying, or claiming, to be able to make distinctions between things that cannot be distinguished.<\/p>\n\n\n\n<p>This was the beginning of what should rightly be called the\u00a0<strong>Turing Programme<\/strong> for cognitive science: the scientific effort to\u00a0<strong>reverse-engineer<\/strong>\u00a0cognition. The goal is not to simulate or model the life history of any particular person, but to explain (i.e., to reverse-engineer)\u00a0<strong>how human (<\/strong><a href=\"https:\/\/royalsocietypublishing.org\/doi\/pdf\/10.1098\/rstb.2024.0118\"><strong>or nonhuman<\/strong><\/a><strong>) cognitive performance capacity can be produced\u00a0at all<\/strong>. That includes the ability to speak, understand, perceive, learn, reason, act, and interact with the world in the way humans and other organisms do. This is a programme in\u00a0<strong>empirical science<\/strong>, not philosophy.<\/p>\n\n\n\n<p>Turing\u2019s approach was entirely grounded in empiricism. He did not claim that the Turing Test could detect or explain\u00a0<strong>consciousness<\/strong>. In fact, he explicitly acknowledged that consciousness remains a \u201cmystery,\u201d and that its\u00a0<strong>presence or absence<\/strong>\u00a0in other systems\u2014human or artificial\u2014cannot be determined by observation. This is the well-known\u00a0<strong>Other-Minds Problem (OMP)<\/strong>: we can never observe directly whether another entity feels. No matter how complete our data on another person\u2019s behavior, physiology, or even biochemistry, we cannot observe or measure whether they feel. That is a\u00a0<strong>constraint of empiricism<\/strong>, not a shortcoming of any specific method. 
Turing\u2019s solution was pragmatic: if a system behaves in every observable respect as if it were thinking and understanding, that is\u00a0<strong>as close as science can get<\/strong>.<\/p>\n\n\n\n<p>But there is a deeper problem\u2014what  \u00a0<a href=\"https:\/\/web.archive.org\/web\/20060509123618id_\/http:\/consc.net:80\/papers\/facing.pdf\"><strong>Dave Chalmers<\/strong><\/a> later called the\u00a0<strong>Hard Problem of consciousness (HP)<\/strong>. Unlike the OMP, the HP is not a problem about\u00a0<em>detecting<\/em>\u00a0feeling in others; it is about causally\u00a0<strong>explaining<\/strong>\u00a0(i.e., reverse-engineering) feeling\u2014how and why\u00a0<strong>any<\/strong>\u00a0of this performance capacity is\u00a0<strong>accompanied by sentience<\/strong>. Why is all this doing\u2014verbal, sensorimotor, and even physiological\u2014not just happening\u00a0<em>without<\/em>\u00a0feeling? Why does it feel like something to see, think, or act?<\/p>\n\n\n\n<p>This is not a metaphysical puzzle like \u201cWhy is there something rather than nothing?\u201d\u2014a question Turing would have rightly dismissed as idle. Nor is it a logical paradox or an ontological speculation. It is an\u00a0<strong>empirical problem<\/strong>: sentience\u00a0<strong>exists<\/strong>, and each sentient entity knows it with\u00a0<strong>Cartesian certainty<\/strong>. That\u2019s not the problem. The problem is that science has no explanation for\u00a0<strong>how and why feeling occurs<\/strong>\u2014what its\u00a0<strong>causal role<\/strong>\u00a0is in the mechanisms that produce the capacity to <strong>do<\/strong> all the things that thinking organisms (but especially humans) can <strong>do<\/strong>.<\/p>\n\n\n\n<p>The Turing Programme aims to reverse-engineer&nbsp;<strong>all of the observable cognitive capacities<\/strong>&nbsp;of humans. 
These capacities can be modeled and tested at increasing levels of empirical completeness:<\/p>\n\n\n\n<p><strong>T2<\/strong>: Verbal capacity\u2014can the system converse indistinguishably from a human?<\/p>\n\n\n\n<p><strong>T3<\/strong>: Verbal and sensorimotor capacity\u2014can the system not only talk but also act and interact with the world, verbally and nonverbally, indistinguishably from a human to a human (including the verbal-sensorimotor interrelations and interactions of \u201cwords and the world\u201d: e.g., pointing out and picking up &#8220;cats&#8221; or observing and describing &#8220;cats on mats&#8221; or repairing catamarans from verbal instructions)?<\/p>\n\n\n\n<p><strong>T4<\/strong>: All of T3, plus&nbsp;<strong>internal indistinguishability<\/strong>\u2014the system\u2019s observable neurophysiological, biochemical, and biophysical structure and dynamics are indistinguishable from those of a real human.<\/p>\n\n\n\n<p><strong>T4 indistinguishability<\/strong>\u00a0would constitute the complete scientific success of the Turing Programme: a full reverse-engineering of human cognitive performance capacity. And yet, even that would leave\u00a0<strong>one thing unexplained<\/strong>: why and how any of it is\u00a0<strong>felt<\/strong>\u00a0rather than merely\u00a0<strong>done<\/strong>. The <strong>Easy Problem (EP)<\/strong>\u2014reverse-engineering and explaining the mechanism that produces the capacity to <strong>do<\/strong> what humans can <strong>do<\/strong>\u2014could be solved completely, yet we would still not have explained how or why any of it is accompanied by feeling. That is the\u00a0<strong>Hard Problem<\/strong>.<\/p>\n\n\n\n<p>The term\u00a0<strong>\u201czombie\u201d<\/strong>\u00a0is sometimes used for a hypothetical being that is T3- or T4-indistinguishable from a human but lacks feeling. 
But \u201czombie\u201d is a\u00a0<strong>weasel-word<\/strong>\u2014it adds nothing to either the problem or its solution. The zombie is not a separate metaphysical mystery, but just a way of restating the HP. The question is not \u201cAre zombies possible?\u201d but \u201cHow and why does sentience exist at all?\u201d Why are organisms\u00a0<strong>sentients<\/strong>\u00a0rather than just\u00a0<strong>functers<\/strong>\u2014systems that can do everything we can do, in every observable way, but without feeling?<\/p>\n\n\n\n<p>Turing anticipated all of this. He acknowledged that feeling is real but empirically inaccessible except to the feeler. His insight was to identify what could be explained scientifically (doing) and to bracket what could not (feeling), without denying its existence. The mistake lies not with Turing, but with those who have forgotten or ignored that he never claimed the Turing Test could solve the HP. His was and is a\u00a0<strong>fully empirical programme<\/strong>, and the HP remains fully within the domain of\u00a0<strong>empirical science<\/strong>\u2014but still\u00a0<strong>completely unsolved<\/strong>, and perhaps unsolvable in the way other scientific problems are solved.<\/p>\n\n\n\n<p>The Turing Programme continues: reverse-engineering the full capacity to do what humans (and eventually other organisms) can do. But until we can also explain\u00a0<strong>how and why<\/strong>\u00a0that doing is\u00a0<strong>felt<\/strong>, not just\u00a0<strong>done<\/strong>, the scientific story of cognition will remain incomplete.<\/p>\n\n\n<ol class=\"wp-block-footnotes\"><li id=\"74c0a04b-0860-46c7-a5aa-683623e5fed7\"><em><em>\u201cMachine\u201d refers to any\u00a0<strong>causal system<\/strong>. 
No further metaphysical assumptions\u2014about mechanism, determinism, autonomy, or artificiality\u2014are needed or implied.<\/em><\/em> <a href=\"#74c0a04b-0860-46c7-a5aa-683623e5fed7-link\" aria-label=\"Jump to footnote reference 1\">\u21a9\ufe0e<\/a><\/li><\/ol>","protected":false},"excerpt":{"rendered":"<p>SH:&nbsp;Where in&nbsp;Turing&#8217;s classic 1950 paper&nbsp;does Turing acknowledge that the TT, based only on observable performance capacity, cannot test or explain consciousness? (I need the exact quote of the passage). GPT:&nbsp;Here is the exact passage from Turing\u2019s 1950 paper&nbsp;\u201cComputing Machinery and Intelligence\u201d, in which he acknowledges that the Turing Test\u2014based solely on observable performance\u2014does not address &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/2025\/07\/11\/to-come\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;The Turing Test at 75 (with ChatGPT)&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3074,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":"[{\"id\":\"74c0a04b-0860-46c7-a5aa-683623e5fed7\",\"content\":\"<em><em>\\u201cMachine,\\u201d refers to any\\u00a0<strong>causal system<\\\/strong>. 
No further metaphysical assumptions\\u2014about mechanism, determinism, autonomy, or artificiality\\u2014are needed or implied.<\\\/em><\\\/em>\"}]"},"categories":[152,6,146,110,111,164,3,119,153,147],"tags":[212],"class_list":["post-2477","post","type-post","status-publish","format-standard","hentry","category-hard-problem","category-hard-problem-consciousness","category-chatgpt","category-cognitive-science","category-computation","category-llms","category-other-minds-problem","category-sentience","category-symbol-grounding","category-turing-test","tag-alan-turing"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2477","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/users\/3074"}],"replies":[{"embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/comments?post=2477"}],"version-history":[{"count":17,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2477\/revisions"}],"predecessor-version":[{"id":2501,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/posts\/2477\/revisions\/2501"}],"wp:attachment":[{"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/media?parent=2477"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/categories?post=2477"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/generic.wordpress.soton.ac.uk\/skywritings\/wp-json\/wp\/v2\/tags?post=2477"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}