Horgan's essay in Scientific American is a singularly superficial (and glib) misconstrual of every one of the points he discusses: (1) the Turing Test, (2) computation, (3) Searle's Chinese Room Argument, (4) the other-minds problem, (5) zombies, (6) understanding QM, (7) understanding, and (8) consciousness.
(1) The Turing Test is not about question-asking/answering but about lifelong verbal capacity and performance indistinguishable from that of any normal human being.
(2) Computation (and mathematics) is just symbol-manipulation, algorithms, syntax, without meaning or understanding. (Mathematicians understand, but computation is just formal syntax. A proof or a procedure is not based on what a mathematician understands (or thinks he understands); it is based on what his algorithm can do, formally, syntactically, not on its interpretation by us, although we are not interested in uninterpretable algorithms.) But language, though it includes formal mathematics, is not just interpretable syntax. Neither is thought (cognition); nor understanding.
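To make "just formal syntax" concrete, here is a minimal sketch (a toy in Python; the tokens and rewrite rules are arbitrary placeholders of my own, not anything from Horgan or Turing) of symbol manipulation that can be executed perfectly well without any interpretation at all:

```python
# Illustration only: a purely syntactic rewrite system. The tokens and rules
# are arbitrary placeholders; nothing in the procedure depends on what, if
# anything, the symbols "mean".

RULES = {
    ("Q", "X"): ["X", "Q"],   # rules apply solely on the basis of symbol shape
    ("X", "X"): ["Z"],
}

def rewrite(symbols):
    """Apply the first rule that matches a leftmost pair; if none matches, halt."""
    for i in range(len(symbols) - 1):
        pair = (symbols[i], symbols[i + 1])
        if pair in RULES:
            return symbols[:i] + RULES[pair] + symbols[i + 2:]
    return symbols

tape = ["Q", "X", "X"]
for _ in range(5):
    tape = rewrite(tape)
print(tape)  # the "computation" runs correctly with no interpretation in sight
```

Any "meaning" the output has is read into it by us, the interpreters; the procedure itself trades only in shapes.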
(3) Searle's Chinese Room Argument simply shows that neither seeing nor executing the syntax will generate understanding. Searle would no more understand Chinese from just the algorithm for manipulating Chinese symbols than he would understand that he was playing chess from executing the algorithm for chess, coded in binary. Words have referents in the world: they don't just have links to other words and algorithms for manipulating them. But Searle only refutes the verbal version of the TT, not the robotic version, which is not purely symbolic.
Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.
(4) The other-minds problem is not "solipsism." (Both Horgan and Turing himself are wrong on this.) Solipsism is skepticism about the "outside world" (all of it).
(5) A "Zombie" would be a TT-passer that was insentient (unconscious, nobody home), if that were possible.
(6) Quantum Mechanics is not "understood" by physicists; they know how to execute the algorithm and predict the experimental outcomes, but their interpretations are far-fetched sci-fi conjectures. (Horgan is right about this.)
(7) To understand is to have a (correct) causal explanation, grounded in real-world observations and measurements, PLUS what it feels like to understand something. (But that feeling, as Russell (?) pointed out, can also be mistaken.)
(8) Consciousness (sentience) is a state that it feels like something to be in.
Computation itself is hardware-independent (though it has to be executed by some hardware or other, whether Samsung or Searle).
What is executed is the algorithm, which is a set of rules for manipulating symbols on the basis of their shapes.
That's what Searle does. The execution of the algorithm on the input data (Chinese messages) may include saving some data and updating some rules.
Searle should not have described the Chinese room as static symbols on a wall, but as an algorithm, to be applied to Chinese input symbols. He memorizes the initial algorithm and does the data storage and algorithm updates in his head, as needed and as dictated by the (current) algorithm. (It is irrelevant [for this thought experiment] whether he has enough time or brain capacity to actually do all that. Do it with hexadecimal tic-tac-toe and the punchline is the same: he would be executing code without a clue as to what it meant.)
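A toy version of that "hexadecimal tic-tac-toe" point (an illustration only; the encoding and the rule are my own arbitrary assumptions, not part of the thought experiment): the executor below scans hex digits and emits a hex digit by rote, and nothing in the procedure tells him that a game is being played.

```python
# Illustration only (not from the original argument): a rote "hexadecimal
# tic-tac-toe" procedure. The executor scans hex digits and reports a hex
# digit back; nothing in the rule reveals that a game is being played.

def respond(board_hex: str) -> str:
    """board_hex: 9 hex digits, read by shape alone ('0', '1', '2').
    Rule: output the hexadecimal index of the first '0'; if there is
    no '0', output 'f'.  (Interpreted by us: claim the first vacant
    square; 'f' signals a full board.)"""
    for i, ch in enumerate(board_hex):
        if ch == "0":
            return format(i, "x")
    return "f"

print(respond("210020100"))  # -> '2'; meaningless to whoever merely executes it
```

Executed by rote in one's head, this would be perfectly lawful (if weak) tic-tac-toe play, with no inkling of boards, squares, or games.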
But the deepest shortcoming of computationalism, the theory that cognition is just computation (aka "Strong AI"), is that in cognition symbols have referents: physical things in the world to which they refer. Computation alone is just the manipulation of symbols based on their (arbitrary) shapes (0/1). There is nothing in the domain of arbitrary symbols that can connect with real-world referents: there are just symbols that can be interpreted (by the user) as referring to real-world objects. And that's despite the distributional patterns and shadows cast by huge bodies of interpretable symbols (e.g., GPT-3, or any form of unsupervised or supervised learning, shallow or deep, small or big), as long as it's all within the circle of symbols. Sensorimotor function, in contrast, is not symbolic, nor can computationally simulated sensorimotor function take its place (although the Strong Church/Turing Thesis remains true: computation can simulate (i.e. model) just about anything, from atoms to galaxies, and that includes modeling rockets, robots, rabbits and rhinencephalons, if you come up with the right algorithms…).
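The "circle of symbols" can be illustrated with a toy dictionary (the entries are my own placeholders, purely for illustration): every symbol is defined only in terms of more symbols, so chasing definitions never bottoms out in a referent.

```python
# Illustration only: a toy "dictionary" in which every symbol is defined
# solely in terms of other symbols (the entries are arbitrary placeholders).
# Chasing definitions yields more symbols, never a real-world referent.

definitions = {
    "zebra":   ["striped", "horse"],
    "striped": ["having", "stripes"],
    "horse":   ["large", "hoofed", "animal"],
    "stripes": ["bands", "of", "contrasting", "colour"],
    # ... "bands", "animal", etc. would in turn be defined in yet more words
}

def lookup_chain(word, depth=3):
    """Follow definitions a few levels down; all we ever get back is symbols."""
    if depth == 0 or word not in definitions:
        return word
    return [lookup_chain(w, depth - 1) for w in definitions[word]]

print(lookup_chain("zebra"))  # nested lists of words, and never a zebra
```

That is the grounding problem in miniature: more symbols, however many, are still just symbols.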
(That's why my own conclusion is that computation alone cannot pass the (verbal) Turing Test; only a robot, with sensorimotor grounding in the world of referents of its symbols, could pass the TT.)
Searle is also wrong that his Argument proves that cognition cannot be computation at all (and hence we should all go off and study the brain): all he has shown is that cognition could not be all computation.
Turing said that the question of whether machines can think is "too meaningless to deserve discussion." But there is a rational construal we can ascribe to this dismissiveness:
It feels like something to think. We have that feeling. And, of course, we are machines (i.e. causal systems).
So the question is not whether machines can think (of course they can, because we can) but which machines can think, i.e. which ones are capable of being in those states that it feels like something to be in, and how?
And Turing's methodological proposal amounts to this:
If and when you can build a machine (so that you know how it can do what it does) that can do anything that the (normal) human thinker can do, to the point where you can't tell them apart at all based solely on what they can do (their lifelong cognitive performance capacities, foremost among them being language), then it is meaningless to try to distinguish them.
The premise, of course (because of the other-minds problem [which is not "solipsism": even Turing gets that wrong, as I said], which prevents us from being able to observe whether anyone other than me feels), is that the only thing we have to go by is whether the candidate can do what a thinking human can do. That refers largely to the exercise of what we call the "cognitive" capacities (perceiving objects, reasoning, learning, remembering, communicating verbally) that (human) thinkers have.
One could insist on a bit more: the brain correlates of felt states.
On the one hand, what our brain "does" with its physiological and biochemical activity is definitely part of what we can do (in fact the brain is the causal mechanism that is producing it).
But the intuitive test perhaps worth reflecting on is this: if one of the people we had known and communicated with for 40 years turned out to have had a synthetic device in his head all along, instead of a biological brain, and hence he lacked the activity correlated with felt states, would we conclude, on the basis of that evidence, that he does not feel after all?
I think Turing would consider that pretty arbitrary too: for what did we actually learn from the lack of the brain correlations?
(The notion of some sort of transition or singularity along the causal chain from the head of the thinker to the things in the world his thoughts are about is a misplaced hope, I think. Conscious states are just felt states. And both the feeling and what generates the feeling are skin-and-in…)