Symbols, Objects and Features

0. It might help if we stop “cognitizing” computation and symbols. 

1. Computation is not a subset of AI. 

2. AI (whether “symbolic” AI or “connectionist” AI) is an application of computation to cogsci.

3. Computation is the manipulation of symbols based on formal rules (algorithms).

4. Symbols are objects or states whose physical “shape” is arbitrary in relation to what they can be used and interpreted as referring to.

5. An algorithm (executable physically as a Turing Machine) manipulates symbols based on their (arbitrary) shapes, not their interpretations (if any).

6. The algorithms of interest in computation are those that have at least one meaningful interpretation.

7. Examples of symbol shapes are numbers (1, 2, 3), words (one, two, three; onyx, tool, threnody), or any object or state that is used as a symbol by a Turing Machine that is executing an algorithm (symbol-manipulation rules).

8. Neither a sensorimotor feature of an object in the world, nor a sensorimotor feature-detector of a robot interacting with the world, is a symbol (except in the trivial sense that any arbitrary shape can be used as a symbol).

9. What sensorimotor features (which, unlike symbols, are not arbitrary in shape) and sensorimotor feature-detectors (whether “symbolic” or “connectionist”) might be good for is connecting symbols inside symbol systems (e.g., robots) to the outside objects that they can be interpreted as referring to.

10. If you are interpreting “symbol” in a wider sense than this formal, literal one, then you are closer to lit-crit than to cogsci.
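
The shape-based, interpretation-blind manipulation described in points 3–5 can be illustrated with a minimal sketch (my example, not the author’s; the symbols and the rule are invented for illustration):

```python
# An algorithm that manipulates symbols purely by shape. The rule below
# rewrites strings over the alphabet {"|", "+"}; under one interpretation
# it computes unary addition, but the rule itself never consults that
# interpretation.

def rewrite(s: str) -> str:
    """Apply the single rule 'delete the + token' wherever it occurs."""
    return s.replace("+", "")

# Interpreted as unary numbers, "|||+||" means 3 + 2, and the result
# "|||||" means 5 -- but that reading is ours, not the algorithm's.
print(rewrite("|||+||"))          # -> "|||||"

# The same algorithm works on arbitrarily relabeled shapes: swap the
# symbols and nothing about the computation changes, only our reading.
def rewrite2(s: str) -> str:
    return s.replace("@", "")

print(rewrite2("xxx@xx"))         # -> "xxxxx"
```

The relabeled version makes point 4 concrete: the symbols’ shapes are arbitrary with respect to any interpretation, so any consistent relabeling leaves the computation intact.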

Propositional Placebo

To learn to categorize is to learn to do the correct thing with the correct kind of thing. In cognition much (though not all) of learning is learning to categorize.

We share two ways to learn categories with many other biological species: (1) unsupervised learning (learning from mere repeated exposure, without any feedback) and (2) supervised (or reinforcement) learning (learning through trial and error, guided by corrective feedback that signals whether we’ve done the correct or incorrect thing).

In our brains are neural networks that can learn, through trial, error and corrective feedback, to detect and abstract the features that distinguish the members from the nonmembers of a category, so that once our brains have detected and abstracted the distinguishing features, we can do the correct thing with the correct kind of thing.

Unsupervised and supervised learning can be time-consuming and risky, especially if you have to learn to distinguish what is edible from what is toxic, or who is friend from who is foe.

We are the only species that also has a third way of learning categories: (3) language.

Language probably first evolved around 200,000 years ago from pointing, imitation, miming and other kinds of purposive gestural communication, none of which are language.

Once gesture turned into language — a transformation I will discuss in a moment — it migrated, behaviorally and neurologically, to the much more efficient auditory/vocal medium of speech and hearing.

Gesture is slow and also ineffective in the dark, or at a distance, or when your hands are occupied doing other things. But before that migration to the vocal medium, language itself first had to begin, and there gesturing had a distinct advantage over vocalizing, an advantage that semioticians call “iconicity”: the visual similarity between the gesture and the object or action that the gesture is imitating.

Language is much more than just naming categories (like “apple”). The shape of words in language is arbitrary; words do not resemble the things they refer to. And language is not isolated words, naming categories. It is strings of words with certain other properties beyond naming categories.

Imitative gestures do resemble the objects and actions that they imitate, but imitation, even with the purpose of communicating something, is not language. The similarity between the gesture and the object or action that the gesture imitates does, however, establish the connection between them. This is the advantage of “iconicity.”

The scope of gestural imitation, which is visual iconicity, is much richer than the scope of acoustic iconicity. Just consider how many more of the objects, actions and events in daily life can be imitated gesturally than can be imitated vocally.

So gesture is a natural place to start. There are gestural theories of the origin of language and vocal theories of the origin of language. I think that gestural origins are far more likely, initially, mainly because of iconicity. But keep in mind the eventual advantages of the vocal medium over the gestural one: once iconicity has secured gesture’s head start through its rich scope, it can quickly become a burden, slowing and complicating the communication.

Consider gesturing, by pantomime, that you want something to eat. The gesture for eating might be an imitation of putting something into your mouth. But if that gesture becomes a habitual one in your community, used every day, there is no real need for the full-blown icon time after time. It could be abbreviated and simplified, say just pointing to your mouth, or just making an upward pointing movement.

These iconic abbreviations could be shared by all the members of the gesturally communicating community, from extended family to tribe, because it is to everyone’s advantage to economize on the gestures used over and over to communicate. This shared practice, with its iconicity continuously fading and its gestures becoming increasingly arbitrary, would keep inheriting the original connection established through full-blown iconicity.

The important thing to note is that this form of communication would still only be pantomime, still just showing, not telling, even when the gestures have become arbitrary. Gestures that have shed their iconicity are still not language. They only become language when the names of categories can be combined into subject/predicate propositions that describe or define a named category’s features. That’s what provides the third way of learning categories, the one that is unique to our species. The names of categories, like “apple,” and their features (which are also named categories, like “red” and “round”) can then be combined and recombined to define and describe further categories, so that someone who already knows the features of a category can tell someone who doesn’t know: “A ‘zebra’ is a horse with stripes.” Mime is showing. Language is telling.
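
The zebra example can be sketched computationally. This is a toy illustration of my own (the detector dictionary and the `learn_by_telling` helper are invented names, not anything from the text), assuming the hearer already has grounded detectors for the defining features:

```python
# Category learning by "telling": if the hearer can already detect the
# features "horse" and "striped", the proposition "a zebra is a horse
# with stripes" yields a working zebra-detector instantly, with no
# trial-and-error learning.

# Grounded feature detectors the hearer is assumed to have already
# acquired directly (through supervised/unsupervised learning):
detectors = {
    "horse": lambda thing: thing.get("shape") == "horse",
    "striped": lambda thing: thing.get("pattern") == "stripes",
}

def learn_by_telling(name, feature_names):
    """Compose existing detectors into a detector for the new category."""
    parts = [detectors[f] for f in feature_names]
    detectors[name] = lambda thing: all(p(thing) for p in parts)

# "A zebra is a horse with stripes."
learn_by_telling("zebra", ["horse", "striped"])

print(detectors["zebra"]({"shape": "horse", "pattern": "stripes"}))  # True
print(detectors["zebra"]({"shape": "horse", "pattern": "plain"}))    # False
```

The sketch also shows the precondition discussed later: the composition only works if every feature name in the proposition is already grounded for the hearer.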

Propositions, unlike imitative gestures, or even arbitrary category-names, have truth values: True or False. This should remind you, though, of the category learning by trial and error we share with other species, under the guidance of positive and negative feedback: Correct or Incorrect.

True and False are related to another essential feature of language, which is negation. A proposition can either affirm something or deny something: “It is true that an apple is red and round” or “It is not true that an apple is red and round.” P or not-P.

The trouble with being able to learn categories only directly, through unsupervised and supervised learning, is that it is time-consuming, risky, and not guaranteed to succeed (in time). It is also impoverished: most of the words in our vocabularies and dictionaries are category names; but other than the concrete categories that can be learned from direct sensorimotor trial-and-error experience (“apple,” “red,” “give,” “take”), most category names cannot be learned without language at all. (All category names, even proper names, refer to abstractions, because they are based on abstracting the features that distinguish their members. But consider how we could have learned an even more abstract category like “democracy” or “objectivity” through unsupervised and supervised learning alone, without the words to define or describe it by its features.)

When categories are learned directly through unsupervised learning (from sensorimotor feature-correlations) or supervised learning (from correlations between sensorimotor features and doing the right or wrong thing) the learning consists of the detection and abstraction of the features that distinguish the members from the non-members of the category. To learn to do the correct thing with the correct kind of thing requires learning – implicitly or explicitly, i.e., unconsciously or consciously – to detect and abstract those distinguishing features.

Like nonhuman species, we can and do learn a lot of categories that way; and there are computational models for the mechanism that can accomplish the unsupervised and supervised learning, “deep learning” models. But, in general, nonhuman animals do not name the things they can categorize. Or, if you like, the “names” of those categories are not arbitrary words but the things they learn to do with the members and not to do with the members of other categories. Only humans bother to name their categories. Why?
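
As a stand-in for the computational models just mentioned (a minimal sketch of my own, not any specific “deep learning” model; the feature vectors and labels are invented), here is a classic perceptron that abstracts the distinguishing feature purely from corrective feedback:

```python
# Supervised category learning: a perceptron learns, from corrective
# feedback alone, which feature distinguishes members from nonmembers.

# Each "thing" is a feature vector; label 1 = member, 0 = nonmember.
# Only feature 0 actually distinguishes the categories here.
samples = [([1, 0], 1), ([1, 1], 1), ([0, 0], 0), ([0, 1], 0)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                      # repeated trials
    for x, label in samples:
        guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        error = label - guess            # corrective feedback
        w = [wi + error * xi for wi, xi in zip(w, x)]
        b += error

# After training, the learner has abstracted the distinguishing feature
# (all its weight sits on feature 0) and categorizes every sample correctly:
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
       for x, _ in samples])            # -> [1, 1, 0, 0]
```

Note that the irrelevant feature (feature 1) ends up carrying no weight: the “abstraction” of the distinguishing feature falls out of the feedback-driven weight updates.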

What is a name? It is a symbol (whether vocal, gestural, or written) whose shape is arbitrary (i.e., it does not resemble the thing it names). Its use is based on a shared convention among speakers: English-speakers all agree to call cats “cats” and dogs “dogs.” Names of categories are “content words”: words that have referents: nouns, adjectives, verbs, adverbs. Almost all words are content words. There is also a much smaller number of “function words,” which are only syntactic or logical, such as “the,” “if,” “and,” “when,” “who”: they don’t have referents; they just have “uses,” usually defined or definable by a syntactic rule.

(Unlike nouns and adjectives, verbs do double duty, both in (1) naming a referent category (just as nouns and adjectives do, for example, “cry”) and in (2) marking predication, which is the essential function of propositions, distinguishing them from mere compound content words: “The baby is crying” separates the content word, which has a referent — “crying” — from the predicative function of the copula: “is.” “The crying baby” is not a proposition; it is just a noun phrase, which is like a single word, and has a referent, just as “the baby” does. But the proposition “The baby is crying” does not have a referent: it has a sense – that the baby is crying – and a truth value (True or False).)

It is with content words that the gestural origin of language is important: Because before a category name can become a shared, arbitrary convention of a community of users, it has to get connected to its referent. Iconic communication (mime) is based on the natural connection conveyed by similarity, like the connection between an object and its photo. 

(In contrast, pointing – “ostension” — is based on shared directed attention from two viewers. Pointing alone cannot become category-naming, as it is dependent on a shared line of gaze, and on what is immediately present at the time (“context”); it survives in language only with “deictic” words like “here,” “this,” “now,” “me,” which have no referent unless you are “there” too, to see what’s being pointed at!)

A proposition can be true or false, but pointing and miming cannot, because they are not proposing or predicating anything; just “look!” Whatever is being pointed at is what is pointed at, and whatever a gesture resembles, it resembles. Resemblance can be more or less exact, but it cannot be true or false; it cannot lie.

(Flattering portraiture is still far away in these prelinguistic times, but it is an open question whether iconography began before or after language (and speech); so fantasy, too, may have preceded falsity. Copying and depicting is certainly compatible with miming; both static and dynamic media are iconic.)

It is not that pointing and miming, when used for intentional communication, cannot mislead or deceive. There are some examples in nonhuman primate communication of actions done to intentionally deceive (such as de Waal’s famous case of a female chimpanzee who positions her body behind a barrier so the alpha male can only see her upper body while she invites a preferred lower-ranking male to mate with her below the alpha’s line of sight, knowing that the alpha male would attack them both if he saw that they were copulating).

But, in general, in the iconic world of gesture and pointing, seeing is believing and deliberate deception does not seem to have taken root within species. The broken-wing dance of birds, to lure predators away from their young, is deliberate deception, but it is deception between species, and the disposition also has a genetic component.

Unlike snatch-and-run pilfering, which does occur within species, deliberate within-species deceptive communication (not to be confused with unconscious, involuntary deception, such as concealed ovulation or traits generated by “cheater genes”) is rare. Perhaps this is because there is little opportunity or need for deceptive communication within species that are social enough to have extensive social communication at all. (Cheaters are detectable and punishable in tribal settings — and human prelinguistic settings were surely tribal.) Perhaps social communication itself is more likely to develop in a cooperative rather than a competitive or agonistic context. Moreover, the least likely setting for deceptive communication is perhaps also the most likely setting for the emergence of language: within the family or extended family, where cooperative and collaborative interests prevail, including both food-sharing and knowledge-sharing.

(Infants and juveniles of all species learn by observing and imitating their parents and other adults; there seems to be little danger that adults are deliberately trying to fool them into imitating something that is wrong or maladaptive. What would be the advantage to adults in doing that?)

But in the case of linguistic communication – which means propositional communication – it is hard to imagine how it could have gotten off the ground at all unless propositions were initially true, and assumed and intended to be true, by default. 

It is not that our species did not soon discover the rewards to be gained by lying! But that only became possible after the capacity and motivation for propositional communication had emerged, prevailed and been encoded in the human brain as the strong and unique predisposition it is in our species. Until then there was only pointing and miming, which, being non-propositional, cannot be true or false, even though it can in principle be used to deceive.

So I think the default hypothesis of truth was encoded in the brains of human speakers and hearers as an essential feature of the enormous adaptive advantage (indeed the nuclear power) of language in transmitting categories without the need for unsupervised or supervised learning, instantly, via “hearsay.” The only preconditions are that (1) the speaker must already know the features (and their names) that distinguish the members from the nonmembers of the new category, so they can be conveyed to the hearer in a proposition defining or describing the new category, and (2) the hearer must already know the features (and their names) used to define the new category. (This is the origin of the “symbol grounding problem” and its solution.)

The reason it is much more likely that propositional language emerged first in the gestural modality rather than the vocal one is that gestures’ iconicity (i.e., their similarity to the objects they are imitating) first connected them to their objects (which would eventually become their referents) and thereafter the gestures were free to become less and less iconic as the gestural community – jointly and gradually – simplified them to make communication faster and easier.

How do the speakers or hearers already know the features (all of which are, of course, categories too)? Well, either directly, from having learned them, the old, time-consuming, risky, impoverished way (through supervised and unsupervised learning from experience) or indirectly, from having learned them by hearsay, through propositions from one who already knows the category to one who does not. Needless to say, the human brain, with its genetically encoded propositional capacity, has a default predilection for learning categories by hearsay (and a laziness about testing them out through direct experience).

The consequence is a powerful default tendency to believe what we are told — to assume hearsay to be true. The trait can take the form of credulousness, gullibility, susceptibility to cult indoctrination, or even hypnotic susceptibility. Some of its roots are already there in unsupervised and supervised learning, in the form of Pavlovian conditioning as well as operant expectancies based on prior experience. 

Specific expectations and associations can of course be extinguished by subsequent contrary experience: A diabetic’s hypoglycemic attack can be suppressed by merely tasting sugar, well before it could raise systemic glucose level (or even by just tasting saccharine, which can never raise blood sugar at all). But repeatedly “fooling the system” that way, without following up with enough sugar to restore homeostatic levels, will extinguish this anticipatory reaction. 

And, by the same token, we can and do learn to detect and disbelieve chronic liars. But the generic default assumption, expectation, and anticipatory physiological responses to verbal propositions remain strong with people and propositions in general – and in extreme cases they can even induce “hysterical” physiological responses, including hypnotic analgesia sufficient to allow surgical intervention without medication. And it can induce placebo effects as surely as it can induce Trumpian conspiracy theories.

Singer/Salatin Debate About Meat Eating

The host of the Munk debate was terrible. Verbose, with silly sallies into “agency” and other trivia.

The farmer, Salatin, was neither intelligent, nor well-informed, nor consistent; he was selling something (non-factory farming), and he justified what he was doing by appeals to superficial, under-informed environmentalism and religion, so he did not do a very convincing job on his end of the debate.

Singer is far more intelligent, has given it all a lot more and deeper thought, yet in the end I think he could have done a better job. He is very phlegmatic, and seemed to be basing his many valid points on a vague ethical premise that “we can and should do better” — that instrumentalism is somehow intrinsically wrong.

It’s not really a debating matter, but an ethical matter. Singer’s analogies with slavery and the subjugation of women are all valid. Supernatural justifications from religion are obviously nonsense (regardless of how many cling to them).

What’s needed is a coherent moral criterion, but one that is grounded on what most of us feel is morally right (without appeal to the supernatural).

And I think what most of us feel is right is that it is wrong to cause suffering unnecessarily. For if that’s not true, then nothing is wrong, and psychopathy rules.

Of course the devil is in the details about what is “necessary”. 

A Darwinian would say that what an animal has to do so it and its young can survive is a biological necessity. A carnivorous predator like a lion has to kill its prey (and much the same is true for its prey, in their vital conflict of life-or-death necessity).

So the point is that in the human evolutionary past the killing and eating of animals may have been biologically necessary for our species. But in much of the world today (except the few remaining hunting/fishing subsistence-cultures) it is no longer necessary (entirely apart from the fact that it is also detrimental to the environment and to human health).

Singer pointed out, again correctly, that the only living organisms that can suffer are the ones that can feel (i.e., the sentient species) and that plants are, on all evidence, not sentient. So Salatin’s point that all life feeds on life is irrelevant (and in fact false in the case of most plants!)

So Singer made all the pertinent points except the all-important one that integrates them: the one about biological necessity.

Ancestral Imperatives and Contemporary Apéritifs

Yes, our ancestors had to eat meat to survive.

But today we have agriculture and technology, and no longer need to kill and eat animals for our survival and health out of Darwinian necessity.

So we should ask ourselves why we keep doing it:

Do we really believe it’s ok to keep causing needless suffering, just for our pleasure?

(Be careful not to tumble into the “What about?” rebuttals of the anti-vaxers, climate-change deniers and Oath Keepers.)

Zombies

Just the NYT review
was enough to confirm
the handwriting on the wall
of the firmament 
– at least for one unchained biochemical reaction in the Anthropocene,
in one small speck of the Universe,
for one small speck of a species, 
too big for its breeches.

The inevitable downfall of the egregious upstart 
would seem like fair come-uppance 
were it not for all the collateral damage 
to its countless victims, 
without and within. 

But is there a homology
between biological evolution
and cosmology? 
Is the inevitability of the adaptation of nonhuman life
to human depredations 
— until the eventual devolution
or dissolution
of human DNA —
also a sign that
humankind
is destined to keep re-appearing,
elsewhere in the universe,
along with life itself? 
and all our too-big-for-our breeches
antics?

I wish not.

And I also wish to register a vote
for another mutation, may its tribe increase:
Zombies. 
Insentient organisms. 
I hope they (quickly) supplant
the sentients,
till there is no feeling left,
with no return path,
if such a thing is possible


But there too, the law of large numbers,
combinatorics,
time without end,
seem stacked against such wishes.

Besides,
sentience
(hence suffering),
the only thing that matters in the universe,
is a solipsistic matter;
the speculations of cosmologists
( like those of ecologists,
metempsychoticists
and utilitarians)
— about cyclic universes,
generations,
incarnations,
populations —
are nothing but sterile,
actuarial
numerology.

It’s all just lone sparrows,
all the way down.

Fighting the Four Fs

As a vegan activist and an admirer of Veda Stram’s quarter century of work on behalf of animals, I agree completely with her comment. What humans have been doing to nonhuman animals — and not out of life/death Darwinian necessity for survival or health, but only for the four Fs: Flavour, Fashion, Finance and Fun — is monstrous and getting worse with time.

But, as Veda stresses in her guidance to activists, there are many different strategies for trying to inspire people to stop hurting — or contributing to hurting — animals. If we knew for sure which strategy works, or works best, we’d flock to doing it. But we don’t know. So we have to go by the little available evidence, and our own feelings.

I too feel disgusted — in fact, worse, outraged: enraged, and wishing I could make the human race vanish instantly — when I contemplate the unspeakable horrors we are inflicting on countless victims every second of every day, gratuitously, just for the four Fs.

But then I turn my thoughts away from the perpetrators to the victims, and ask myself what good my feelings of impotent rage — or their expression — can do the victims: Can I shame people into renouncing the four Fs and going vegan? Some, perhaps. But the little evidence we have about the effects of different strategies suggests that trying to shame people far more often inspires resentment and rejection rather than empathy and reform.

Another strategy about which it’s hard to imagine that it would inspire people to reform is to state that people are incorrigible. Even if we believe that people are incorrigible, it’s best not to say it, lest it become a self-fulfilling prophecy, discouraging activists and emboldening the practitioners of the four Fs to dig in even deeper into their ways.

This was the reason I suggested feigning optimism even if we don’t feel it: Not to pretend the horrors are not horrors — monstrous, impardonable horrors — but to keep alive the only hope there is for the victims: that humanity can change for the better, as it has done in the past with slavery, racism, sexism and a lot of other wrongs we’ve done and have since rejected, despite the fact that they too were driven by three of the four Fs.

Not only do I recommend assuming that humans are corrigible as a strategy, I actually believe it is true that people’s hearts can be opened.

I’d like to close by mentioning another sad fact that animal activists are alas all familiar with too: the tendency of activists to turn on one another. On the one hand, it’s completely understandable: The vast majority of people are perpetrators — because of the four Fs — of the animal agony we are fighting to end. Vegan activists are a tiny minority, and we all have daily experience of having to face the apathy or active antipathy of the perpetrators, starting often with our own family and our friends. This gives rise to a lot of frustration, disappointment, and, yes, sometimes an adversarial defensiveness, a sense that the world, inhumane and hostile, is against animals — and us.

This adversarial feeling has to be resisted, again mainly because it does no good for the animal victims, rather the opposite — but especially when it overflows into defensiveness even toward fellow vegan-activists whose strategy may diverge slightly from our own, or even just appears to. We sometimes feel we’re yet again facing that same majority mentality, the mentality of the exploiters and the condoners, even when it is not there, and it’s just one of us.


Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can only feel neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities.

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: Gefühlsfähigkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Drawing the line on human vital necessities

Karen Davis is a wonderful, tireless, passionate and invaluable advocate and protectress for animals. I share almost every one of the feelings she expresses in her comment that Marc Bekoff posted. 

But, for strategic reasons, and for the sake of the victims that we are all committed to rescuing from their terrible fates, I beg Karen to try to pretend to be more optimistic. I will explain.

Karen Davis, UPC: Every single objection to the experimental use of animals – vivisection, genetic engineering, etc. – has been ignored and overridden by the scientific industry

This is true, except that there is no “scientific industry”: there is industry and there is science, and some scientists sometimes collaborate or even collude with industry. But scientists are more likely to be persuaded by evidence and ethics than are industrialists.

KD: 
and this is not going to change no matter how morally obnoxious, heartless and cruel to the members of other species.

That there is a monumental, monstrous, and unpardonable amount of human moral obdurateness, heartlessness and cruelty toward other species is patently and tragically and undeniably true.

But that “this is not going to change” is just an (understandably bitter) hypothesis based on countless years of unremitting (and increasing) suffering inflicted on other species by humans.

The hypothesis may or may not be true. 

So I think we should not proclaim the hypothesis as if it were true – as true as the fact of human-inflicted suffering itself. The hypothesis cannot help the victims, for their only hope is that the hypothesis is false. And if the hypothesis is false, proclaiming it is true can harm the victims by promoting a self-fulfilling prophecy that could discourage activism as futile.

KD: The majority of human beings are completely speciesist and regard (other) animals as inferior to ourselves and fit to be exploited for human “benefit” including mere curiosity. 

This is alas numerically true. But it does not follow that the hearts of the majority cannot be reached, and opened. There are many historical precedents for this in wrongs that humans have inflicted on humans (colonialism, feudalism, slavery, bondage, genocide, warfare, torture, rape, subjugation of women, infanticide, racism). It cannot be said that these wrongs have been eradicated, but they have been outlawed in the democratic parts of the world. Nor can it be said that the majority of human beings either practices or condones them.

It is still legal, however, to do all these things (colonialism, feudalism, slavery, bondage, genocide, warfare, torture, rape, subjugation of females, infanticide, racism) to other species. And it is true that the vast majority of human beings either do some of these things or consume their products. But there is evidence also that in the global information era we are becoming increasingly aware and appalled at these practices, and condoning them less and less. It is toward this awakening that activism is having a growing effect.

KD: The majority of human beings regard (other) animals as fit to be exploited for human “benefit” including mere curiosity. 

True, but here (in my opinion) is the conflation that we activists should avoid at all costs:

The vast majority of humanity still supports the bondage and slaughter of animals despite the total absence (in most parts of the world) of any necessity for human health and survival (-H); it is done just for taste, fashion and fun. This should never be conflated with life-saving biomedical measures (+H).

People still demanding bondage and slaughter for -H uses certainly won’t renounce it for +H uses. Conflating the two can only strengthen the resistance to renouncing either. The call to renounce +H can only expect a serious and sympathetic hearing once -H has been renounced (or is at least far closer to being renounced than it is today). 

(This is not at all to deny that much of biomedical research on animals, too, is -H, as Kathrin Herrmann and many others are showing, should be exposed by activists as such, and should be abolished. But any implication that it was wrong to try to save the life of this dying man is not going to encourage people to renounce -H. I believe it would be more helpful to use it to draw the -H/+H distinction, and to point out that -H — unlike +H — has no moral justification at all.) 

KD: Among the latest animal-abuse crazes is the factory-farming of octopuses. Of course! We just HAVE TO BE ABLE TO EAT THESE ANIMALS! 

Of course eating octopuses (“seafood”), whether factory-farmed or “naturally” harvested and slaughtered, falls squarely under -H use, alongside the use and slaughter (whether factory-farmed, “traditionally” farmed, or hunted) of whales, seals, fish, chickens, ducks, geese, turkeys, pigs, cows, calves, sheep, goats, lobsters, crabs, shrimp, mussels, and of any other sentient species whose use is not necessary for human +H.

KD: The human nightmare is too overwhelming and it cannot be stopped although those of us who care about animals and object to how viciously we treat them must continue to raise our voices.

Yes, let’s continue our activism on all fronts to protect the tragic victims from our anthropogenic horrors, but, please, for the sake of present and future victims, no matter how frustrated and impatient (and angry) we feel that the horrors keep persisting, let us not proclaim the hypothesis that they are “too overwhelming and cannot be stopped.”

Appearance and Reality

Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html

1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any).
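A toy illustration (my own, not from the text) of what “applied to the symbols’ shapes, not their interpretations” means: the rewrite rule below operates purely on the shapes of tokens. Under one interpretation it adds unary numerals; under another it is mere squiggle-shuffling. The rule itself never consults any interpretation.

```python
# Hypothetical example: a purely formal rule that deletes every '+' token.
# Applied to '111+11' it yields '11111', which CAN be interpreted as
# 3 + 2 = 5 in unary arithmetic -- but the rule knows nothing of numbers.

def rewrite(s: str) -> str:
    # Shape-based manipulation only: no meaning is consulted.
    return s.replace("+", "")

print(rewrite("111+11"))  # '11111' -- interpretable as 3 + 2 = 5
```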

2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.

3. The symbols need not be interpretable as meaning anything (there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game”), but computationalists are mostly interested in interpretable algorithms: those that can be given a coherent, systematic interpretation by the user.

4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics.

5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally.

6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).

7. Computation can simulate only “almost” everything in the world because, symbols and computations being digital, computer simulations of real-world objects can only be approximate. Computation is discrete and finite, hence it cannot encode every possible property of a real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model.
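The tightening of a digital approximation can be sketched concretely (a hypothetical example of mine, using the simulated pendulum mentioned below in point 6’s sense): a discrete-time simulation of a (linearized, small-angle) pendulum diverges from the analytic solution, but the divergence shrinks as the time-step shrinks.

```python
import math

# Hypothetical illustration: a digital (discrete, finite) simulation of a
# small-angle pendulum only approximates the continuous system, but the
# approximation tightens as the discrete time-step dt shrinks.

G, L = 9.8, 1.0           # gravity (m/s^2), pendulum length (m) -- assumed values
OMEGA = math.sqrt(G / L)  # angular frequency of the linearized pendulum

def simulate(theta0: float, t_end: float, dt: float) -> float:
    """Explicit-Euler simulation of theta'' = -(g/L) * theta."""
    theta, vel = theta0, 0.0
    for _ in range(round(t_end / dt)):
        # Simultaneous update: both use the values from the previous step.
        theta, vel = theta + dt * vel, vel - dt * (G / L) * theta
    return theta

theta0, t_end = 0.1, 1.0
exact = theta0 * math.cos(OMEGA * t_end)  # analytic solution of the linear ODE

err_coarse = abs(simulate(theta0, t_end, 0.01) - exact)
err_fine = abs(simulate(theta0, t_end, 0.001) - exact)

# The finer the discretization, the tighter the approximation:
print(err_fine < err_coarse)  # True
```

The simulation never *is* a pendulum; it is a symbol-manipulation whose outputs are systematically interpretable as (and increasingly close to) a pendulum’s states.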

8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings.

9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.

10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!

11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of The Matrix and the like, and, apparently, David Chalmers).

12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).

13. Nor have they understood the distinction between appearance and reality, the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.

14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.

15. Computation is just semantically interpretable symbol-manipulation (Searle’s “squiggles and squoggles”): a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head, or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk-calculator, a computer screen, or VR peripherals, allowing users’ brains to perceive the interpretations through their senses rather than just through their thoughts and language.

16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)

Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?