Taste

It will come, 
and I rejoice
(for the victims). 

But even if I live to 120, 
I want none of it. 

I want a clean break 
from the blood-soaked 
2000-millennium history 
of our race.

Nor is it to our credit
that we wouldn’t give up the taste
till we could get the same
from another brand.

It makes no amends,
to them,
were amends possible.

While Chickens Bleed

sounds rational: BL (Blake Lemoine) sounds rational

turing test: LaMDA would quickly fail the verbal Turing Test, but the only valid Turing Test is the robotic one, which LaMDA could not even begin, lacking a body or connection to anything in the world but words.

“don’t turn me off!”: Nonsense, but it would be fun to probe it further in chat.

systemic corporate influence: BL is right about this, and it is an enormous problem in everything, everywhere, not just Google or AI.

“science”: There is no “science” in any of this (yet) and it’s silly to keep bandying the word around like a talisman.

jedi joke: Nonsense, of course, but another thing it would be fun to probe further in chat.

religion: Irrelevant — except as just one of many things (conspiracy theories, the “paranormal,” the supernatural) humans can waste time chatting about.

public influence: Real, and an increasingly nefarious turn that pervasive chatbots are already taking.

openness: The answerable openness of the village in and for which language evolved, where everyone knew everyone, is open to subversion by superviral malware in the form of global anonymous and pseudonymous chatbots.

And all this solemn angst about chatbots, while chickens bleed.

[Image: “Chatheads” (talking heads; credit: Fiduciary Wealth Partners)]

Plant Sentience and the Precautionary Principle

I hope that plants are not sentient; but I also believe that they are not sentient, for several reasons:

Every function and capacity demonstrated in plants and (rightly) described as “intelligent” and “cognitive” (learning, remembering, signalling, communicating) can already be done by robots and by software (and they can do a lot more too). That demonstrates that plants too have remarkable cognitive capacities that we used to think were unique to people (and perhaps a few other species of animals). But it does not demonstrate that plants feel. Nor that feeling is necessary in order to have those capacities. Nor does it increase, by more than an infinitesimal amount, the probability that plants feel.
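
(To see how cheaply such “learning” can be had without feeling, here is a minimal sketch; the scenario and names are illustrative only. It mimics habituation of the kind reported in plants, such as leaf-folding that wanes with repeated harmless stimulation.)

```python
# Minimal sketch: "learning" (habituation) with nothing felt anywhere.
class Habituator:
    """Responds to a stimulus, but responds less with each repetition."""
    def __init__(self, decay=0.5):
        self.strength = 1.0   # initial response strength
        self.decay = decay    # how fast the response habituates

    def stimulate(self):
        response = self.strength
        self.strength *= self.decay   # "remember" the exposure
        return response

leaf = Habituator()
for trial in range(5):
    print(f"trial {trial}: response = {leaf.stimulate():.3f}")
# The response wanes and the "memory" persists -- learning of a sort,
# with no feeling anywhere in the loop.
```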

The “hard problem” is to explain how and why humans (and perhaps a few other species of animals) feel. Feeling seems to be causally superfluous, as robotic and computational models are demonstrating how much can be done without it. And for what plants can do, it is almost trivial to design a model that can do it too; so there, feeling seems incomparably more superfluous.

To reply that “Well, so maybe those robots and computational models feel too!” would just be to capitalize on the flip side of the other-minds problem (that certainty is not possible), to the effect that just as we cannot be sure that other people do feel, we cannot be sure that rocks, rockets or robots don’t feel.

That’s not a good address. Don’t go there. Stick with high probability and the preponderance of evidence. The evidence for some cognitive capacity (memory, learning, communication) in plants is strong. But the evidence that they feel is next to zero. In nonhuman animals, the evidence that they feel starts very high for mammals, birds and other vertebrates, and, increasingly, for invertebrates. But the evidence that plants, microbes and single cells feel is nonexistent, even as the evidence for their capacity for intelligent performance grows ever stronger.

That humans should not eat animals is a simple principle based on the necessities for survival: 

Obligate carnivores (like the felids, I keep being told) have no choice. Eat flesh or sicken and die. Humans, in contrast, are facultative omnivores; they can survive as carnivores, consuming flesh, or they can survive without consuming flesh, as herbivores. And they can choose. There are no other options (until and unless technology produces a completely synthetic diet).

So my disbelief in plant sentience is not based primarily on wishful thinking, but on evidence and probability (which never reaches absolute certainty, even for gravity: the probability that apples will start falling up instead of down tomorrow is not strictly zero).

But there is another ethical factor that influences my belief, and that is the Precautionary Principle. Right now, and for millennia already in the Anthropocene, countless indisputably sentient animals are being slaughtered by our species, every second of every day, all over the planet, not out of survival necessity (as it had been for our hunter/gatherer ancestors), but for the taste, out of habit.

Now the “evidence” of sentience in these animals is being used to try to sensitize the public to their suffering, and to the need to protect them. And the Precautionary Principle is being invoked to extend the protection to species for whom the evidence is not as complete and familiar as it is for vertebrates, giving them the benefit of the doubt rather than having to be treated as insentient until “proven” sentient. Note that all these “unproven” species are far closer, biologically and behaviorally, to the species known to be sentient than they are to single cells and plants, for whom there is next to no evidence of sentience, only evidence for a degree of intelligence. Intelligence, by the way, does come in degrees, whereas sentience does not: An organism either does feel (something) or it does not – the rest is just a matter of the quality, intensity and duration of the feeling, not its existence.

So this second-order invocation of the Precautionary Principle, and its reckoning of the costs of being right or wrong, dictates that, just as it would be wrong not to give the benefit of the doubt to similar animals where the probability of sentience is already so high, it would be wrong to give the benefit of the doubt where that probability is incomparably lower. What is at risk in attributing sentience where it is highly improbable is precisely the protection the distinction would have afforded to the species for whom the probability of sentience is far higher. The term just becomes moot, and just another justification for the status quo (ruled by neither necessity nor compassion, but just taste and habit – and the wherewithal to keep it that way).

LaMDA & Lemoine

About LaMDA & Lemoine: The global “big-data” corpus of all words spoken by humans is — and would still be, if it were augmented by a transcript of every word uttered and every verbal thought ever thought by humans — just like the shadows on the wall of Plato’s cave: It contains all the many actual permutations and combinations of words uttered and written. All of that contains and reflects a lot of structure that can be abstracted and generalized, both statistically and algorithmically, in order to generate (1) more of the same; or (2) more of the same, but narrowed to a subpopulation, or school of thought, or even a single individual; and (3) it can also be constrained or biased, by superimposing algorithms steering it toward particular directions or styles.

The richness of this intrinsic “latent” structure in speech (verbalized thought) is already illustrated by the power of simple Boolean operations like AND or NOT. The power of Google search is a combination of (1) the power of local AND (say, restricted to sentences or paragraphs or documents), together with (2) the “PageRank” algorithm, which can weight words and word combinations by their frequency, inter-linkedness or citedness (or LIKEdness — or their LIKEdness by individual or algorithm X), plus, most important, (3) the underlying database of who-knows-how-many terabytes of words so far. Algorithms as simple as AND can already do wonders in navigating that database; fancier algorithms can do even better.
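
(For the flavor of how much local AND alone can do, here is a minimal sketch; the three-document “database” and the function names are illustrative only, and real search engines add link-based weighting such as PageRank on top.)

```python
# Minimal sketch: local AND over a tiny document database,
# ranked by a crude term-frequency weight.
docs = {
    1: "the cat sat on the mat",
    2: "the cat chased the dog",
    3: "dogs and cats make poor chess partners",
}

def search_and(query_terms, docs):
    """Return ids of docs containing ALL query terms, best-scoring first."""
    hits = []
    for doc_id, text in docs.items():
        words = text.split()
        if all(term in words for term in query_terms):   # local AND
            score = sum(words.count(term) for term in query_terms)
            hits.append((score, doc_id))
    return [doc_id for score, doc_id in sorted(hits, reverse=True)]

print(search_and(["the", "cat"], docs))   # -> [2, 1]: docs 1 and 2 match
```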

LaMDA has not only data-mined that multi-terabyte word space with “unsupervised learning,” abstracting all the frequencies and correlations of words and combinations of words, from which it can then generate more of the same – or more of the same that sounds like a Republican, or Dan Dennett, or an Animé fan, or someone empathic or anxious to please (like LaMDA)… It can also be tempered and tampered with by “influencer” algorithms.
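
(The generative idea can be caricatured in a few lines. LaMDA is a vastly larger neural network, not a bigram table, but the principle sketched here is the same in spirit: abstract co-occurrence statistics from a corpus of word forms, then generate “more of the same,” with no contact with any referent. The corpus and names are illustrative only.)

```python
import random

# Caricature sketch: mine word-pair frequencies, then spew "more of the same."
corpus = "the cat sat on the mat and the dog sat on the cat".split()

bigrams = {}
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams.setdefault(w1, []).append(w2)   # record what follows what

def generate(start, n=8):
    words = [start]
    for _ in range(n):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))   # sample by frequency
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat on the dog sat on the"
```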

Something similar can be done with music: swallow music space and then spew out more of what sounds like Bernstein or (so far mediocre) Bach – but, eventually, who knows? These projected combinatorics have more scope with music (which, unlike language, really just is acoustic patterns based on recombinations plus some correlations with human vocal expressive affect patterns, whereas words have not just forms but meanings).

LaMDA does not pass the Turing Test, because the Turing Test (despite the loose – or perhaps erroneous – purely verbal way Turing described it) is not a game about fooling people: it’s a way of testing theories of how brains (or anything) produce real thoughts. And verbal thoughts don’t just have word forms, and patterns of word forms: They also have referents, which are real things and states in the world, hence meaning. The Platonic shadows of patterns of words do reflect – and are correlated with – what words, too, just reflect: but their connection with the real-world referents of those words is mediated by (indeed parasitic on) the brains of the real people who read and interpret them, and who know their referents through their own real senses and their real actions in and on those real referents in the real world – the real brains and real thoughts of (sometimes) knowledgeable (and often credulous and gullible) real flesh-and-blood people in the world…

Just as re-combinatorics play a big part in the production (improvisation, composition) of music (perhaps all of it, once you add the sensorimotor affective patterns that are added by the sounds and rhythms of performance and reflected in the brains and senses of the hearer, which is not just an execution of the formal notes), word re-combinatorics no doubt play a role in verbal production too. But language is not “just” music (form + affect): words have meanings (semantics) too. And meaning is not just patterns of words (arbitrary formal symbols). That’s just one (all-powerful) way thoughts can be made communicable, from one thinking head to another. But neither heads nor worlds are just another bag of words – although the speaking head can be replaced, in the conversation, by LaMDA, which is just a bag of words, mined and mimed by a verbal database + algorithms.

And, before you ask, Google images are not the world either.

The Google people, some of them smart, and others not so smart (like Musk), are fantasists who think (incoherently) that they live in a Matrix. In reality, they are just lost in a hermeneutic hall of mirrors of their own creation. The Darwinian Blind Watchmaker, evolution, is an accomplice only to the extent that it has endowed real biological brains with a real and highly adaptive (but fallible, hence foolable) mind-reading “mirror” capacity for understanding the appearance and actions of their real fellow organisms. That includes, in the case of our species, language, the most powerful mind-reading tool of all. This has equipped us to transmit, receive, and decode one another’s thoughts, encoded in words. But it has made us credulous and gullible too.

It has also equipped us to destroy the world, and it looks like we’re well on the road to it…

P.S. Lemoine sounds like a chatbot too, or maybe a Gullibot…

12 Points on Confusing Virtual Reality with Reality

Comments on: Bibeau-Delisle, A., & Brassard, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658.

  1. What is Computation? It is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules (algorithms) that operate only on the (arbitrary) shape of the symbols, not their meaning.
  2. Interpretability. The only computations of interest, though, are the ones that can be given a coherent interpretation.
  3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes them, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. It’s just symbol manipulation; it could have been done with pencil and paper.
  4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine – finite-state hardware that can read, write, advance the tape, change state, or halt. (A minimal executable sketch follows this list.)
  5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But it is still only computation: coherently interpretable manipulation of symbols.
  6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works.
  7. What is Reality? It is the real world of objects we can see and measure.
  8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube, melting. But there is no ice cube and no melting; just symbol manipulations interpretable as an ice cube, melting.
  9. What is Certainly True (rather than just highly probably true on all available evidence)? Only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true.
  10. What is Illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science.
  11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. “What I am seeing could be a figment of my imagination.” But to imagine that what is really there could be a computer simulation of which I myself am a part (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality – which is simply incoherent, circular, self-referential nonsense.
  12. Hermeneutics. Those who think this way have become lost in the “hermeneutic hall of mirrors,” mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice cube for a “real” ice cube.
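
To make points 1–4 concrete, here is a minimal sketch (the rule table and all names are illustrative, not from the paper under review) of a Turing-machine-style symbol manipulator. Note that the machine consults only the shapes of the symbols; any interpretation (e.g., reading the tape as binary digits) is supplied by the user.

```python
# Minimal Turing-machine sketch: rule-governed manipulation of
# arbitrarily shaped symbols. This rule table happens to invert a
# string of 0s and 1s; the machine itself "knows" nothing of numbers.

def run(tape, rules, state="S", halt="H"):
    tape, head = list(tape), 0
    while state != halt:
        symbol = tape[head]                      # read (shape only)
        write, move, state = rules[(state, symbol)]
        tape[head] = write                       # write
        head += 1 if move == "R" else -1         # advance tape
    return "".join(tape)

rules = {   # (state, scanned symbol) -> (write, move, next state)
    ("S", "0"): ("1", "R", "S"),
    ("S", "1"): ("0", "R", "S"),
    ("S", "_"): ("_", "R", "H"),   # blank square: halt
}

print(run("0110_", rules))   # -> "1001_"
```

Whether the output is interpreted as “the bitwise complement of a binary number” or not at all (point 2) makes no difference to the hardware executing it (point 3).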

Learning and Feeling

Re: the NOVA/PBS video on slime mold.

Slime molds are certainly interesting, both as the origin of multicellular life and the origin of cellular communication and learning. (When I lived at the Oppenheims’ on Princeton Avenue in the 1970s, they often invited John Tyler Bonner to their luncheons, but I don’t remember any substantive discussion of his work during those luncheons.)

The NOVA video was interesting despite the OOH-AAH style of presentation (especially the narrators’ prosody and intonation, which to me were really irritating and intrusive) – once it was de-weaseled from its empty buzzwords, like “intelligence,” which means nothing (really nothing) other than the capacity (shared by biological organisms, artificial devices, and running computational algorithms) to learn.

The trouble with weasel-words like “intelligence” is that they are vessels inviting the projection of a sentient “mind” where there isn’t, or need not be, a mind. The capacity to learn is a necessary but certainly not a sufficient condition for sentience, which is the capacity to feel (which is what it means to have a “mind”).

Sensing and responding are not sentience either; they are just mechanical or biomechanical causality: Transduction is just converting one form of energy into another. Both nonliving (mostly human-synthesized) devices and living organisms can learn. Learning (usually) requires sensors, transducers, and effectors; it can also be simulated computationally (i.e., symbolically, algorithmically). But “sensors,” whether synthetic or biological, do not require or imply sentience (the capacity to feel). They only require the capacity to detect and do.

And what sensors and effectors can (among other things) do is to learn, which is to change in what they do, and can do. “Doing” is already a bit weaselly, implying some kind of “agency” or agenthood, which again invites projecting a “mind” onto it (“doing it because you feel like doing it”). But having a mind (another weasel-word, really) and having (or rather being able to be in) “mental states” really just means being able to feel (to have felt states, sentience).

And being able to learn, as slime molds can, definitely does not require or entail being able to feel. It doesn’t even require being a biological organism. Learning can be done (or will eventually be shown to be doable) by artificial devices, and can be simulated computationally, by algorithms. Doing can be simulated purely computationally (symbolically, algorithmically), but feeling cannot be; or, otherwise put, simulated feeling is not really feeling, any more than simulated moving or simulated wetness is really moving or wet (even if it’s piped into a Virtual Reality device to fool our senses). It’s just code that is interpretable as feeling, or moving, or wet.

But I digress. The point is that learning capacity, artificial or biological, does not require or entail feeling capacity. And what is at issue in the question of whether an organism is sentient is not (just) whether it can learn, but whether it can feel. 

Slime mold — amoebas that can transition between two states, single cells and multicellular — is extremely interesting and informative about the evolutionary transition to multicellular organisms, cellular communication, and learning capacity. But there is no basis for concluding, from what they can do, that slime molds can feel, no matter how easy it is to interpret the learning as mind-like (“smart”). They, and their synthetic counterparts, have (or are) an organ for growing, moving, and learning, but not for feeling. The function of feeling is hard enough to explain in sentient organisms with brains, from worms and insects upward, but it becomes arbitrary when we project feeling onto every system that can learn, including root tips and amoebas (or amoeba aggregations).

I try not to eat any organism that we (think we) know can feel — but not any organism (or device) that can learn.

Dale Jamieson on sentience and “agency”

Dale Jamieson’s heart is clearly in the right place, both about protecting sentient organisms and about protecting their insentient environment.

Philosophers call deserving such protection “meriting moral consideration” (by humans, of course).

Dale points out that humans have followed a long circuitous path — from thinking that only humans, with language and intelligence, merit moral consideration, to thinking that all organisms that are sentient (hence can suffer) merit moral consideration.

But he thinks sentience is not a good enough criterion. “Agency” is required too. What is agency? It is being able to do something deliberately, and not just because you were pushed.

But what does it mean to be able to do something deliberately? I think it’s being able to do something because you feel like it rather than because you were pushed (or at least because you feel like you’re doing it because you feel like it). In other words, I think a necessary condition for agency is sentience. 

Thermostats and robots and microbes and plants can be interpreted by humans as “agents,” but whether humans are right in their interpretations depends on facts – facts that, because of the “other-minds problem,” humans can never know for sure: the only one who can know for sure whether a thing feels is the thing itself.

(Would an insentient entity, one that was only capable of certain autonomous actions — such as running away or defending itself if attacked, but never feeling a thing — merit moral consideration? To me, with the animal kill counter registering grotesque and ever grandescent numbers of human-inflicted horrors on undeniably sentient nonhuman victims every second of every day, worldwide, it is nothing short of grotesque to be theorizing about “insentient agency.”)

Certainty about sentience is not necessary, however. We can’t have certainty about sentience even for our fellow human beings. High probability on all available evidence is good enough. But then the evidence for agency depends on the evidence for sentience. It is not an independent criterion for moral consideration; just further evidence for sentience. Evidence of independent “choice” or “decision-making” or “autonomy” may count as evidence for “agency,” but without evidence for sentience we are back to thermostats, robots, microbes and plants.

In mind-reading others, human and nonhuman, we do have a little help from Darwinian evolution and from “mirror neurons” in the brain that are active both when we do something and when another organism, human or nonhuman, does the same thing. These are useful for interacting with our predators (and, if we are carnivores, our prey), as well as with our young, kin, and kind (if we are K-selected, altricial species who must care for our young, or social species who must maintain family and tribal relationships lifelong).

So we need both sentience-detectors and agency-detectors for survival. 

But only sentience is needed for moral consideration.

Zombies

Just the NYT review
was enough to confirm
the handwriting on the wall
of the firmament 
– at least for one unchained biochemical reaction in the Anthropocene,
in one small speck of the Universe,
for one small speck of a species, 
too big for its breeches.

The inevitable downfall of the egregious upstart 
would seem like fair come-uppance 
were it not for all the collateral damage 
to its countless victims, 
without and within. 

But is there a homology
between biological evolution
and cosmology? 
Is the inevitability of the adaptation of nonhuman life
to human depredations 
— until the eventual devolution
or dissolution
of human DNA —
also a sign that
humankind
is destined to keep re-appearing,
elsewhere in the universe,
along with life itself? 
and all our too-big-for-our-breeches
antics?

I wish not.

And I also wish to register a vote
for another mutation, may its tribe increase:
Zombies. 
Insentient organisms. 
I hope they (quickly) supplant
the sentients,
till there is no feeling left,
with no return path,
if such a thing is possible…

But there too, the law of large numbers,
combinatorics,
time without end,
seem stacked against such wishes.

Besides,
sentience
(hence suffering),
the only thing that matters in the universe,
is a solipsistic matter;
the speculations of cosmologists
(like those of ecologists,
metempsychoticists
and utilitarians)
— about cyclic universes,
generations,
incarnations,
populations —
are nothing but sterile,
actuarial
numerology.

It’s all just lone sparrows,
all the way down.

Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic. 

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.
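
(A toy thermostat makes the point; this sketch is illustrative only.)

```python
# Unfelt "sensing": pure transduction and reaction, nothing felt.
def thermostat(temp_reading, setpoint=20.0):
    """Detect and do: switch the heater on below the setpoint."""
    return "heat on" if temp_reading < setpoint else "heat off"

for t in (18.0, 22.0):
    print(f"{t} C -> {thermostat(t)}")
```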

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities. 

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.) 

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Appearance and Reality

Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html

1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any).

2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.

3. Although the symbols need not be interpretable as meaning anything – there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game” – computationalists are mostly interested in algorithms that can be given a coherent, systematic interpretation by the user.

4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics.

5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally.

6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).

7. Computation can simulate only “almost” everything in the world, because — symbols and computations being digital — computer simulations of real-world objects can only be approximate. Computation is merely discrete and finite, hence it cannot encode every possible property of the real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model.
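
(A sketch of what “tightening the approximation” means in practice; the example and numbers are illustrative. A discrete simulation of free fall approaches the continuous, closed-form answer as the timestep shrinks.)

```python
# Discrete (Euler) simulation of free fall, interpretable as a real
# falling object; refining dt tightens the approximation.
def simulate_fall(t_total=2.0, dt=0.1, g=9.81):
    v, d = 0.0, 0.0
    for _ in range(int(round(t_total / dt))):
        d += v * dt   # update position from current velocity
        v += g * dt   # update velocity from gravity
    return d

exact = 0.5 * 9.81 * 2.0**2   # closed-form answer: 19.62 m
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: simulated {simulate_fall(dt=dt):.4f} m (exact {exact} m)")
```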

8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves), which are worn by real, biological human beings.

9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.

10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!

11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of The Matrix and the like, and, apparently, David Chalmers).

12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).

13. Nor have they understood the distinction between appearance and reality – the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.

14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.

15. Computation is just semantically interpretable symbol-manipulation (Searle’s “squiggles and squoggles”); a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head, or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk calculator, a computer screen, or VR peripherals, allowing users’ brains to perceive the symbols through their senses rather than just through their thoughts and language.

16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)

Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?