Research on animals

Q: As I go deeper into veganism I consistently struggle with animal experimentation. We’re not yet at the point where we can progress human medical science without animal models, and yet so much of animal research feels completely unnecessary. Any thoughts on how to approach veganism vs scientific research as it relates to animals?

A: There is an inescapable and undeniable tragedy in Darwinian reality: Life feeds upon itself; life is a conflict of life-or-death necessities.

Opportunists, cynics and the naïve have long taken this for granted, as a “law of nature,” and hence a carte blanche for doing whatever they will. Psychopaths are not even troubled by questions of right and wrong.

There would be no tragedy if life were just insentient matter; but it’s a fact that many living kinds do feel, and suffer.

In your question you’ve already identified the moral dividing line: necessity. Not convenience or expediency: life-or-death necessity.

An obligate carnivore like a lion or a killer whale has no choice but to hunt and kill. And their prey have no choice but self-defence, including violent, lethal self-defence.

This applies to our own species too, both as predator and as prey. There are still today some subsistence cultures that can only survive by hunting or fishing. There were phases in our evolutionary past when this was true for most of our species.  

But it’s no longer true for most of our species, especially in the prosperous nations. Consuming sentient animals is no longer a vital necessity for most of us. 

So being vegan is right because hurting or killing sentient beings without vital (life-or-death) necessity is wrong. No decent human being can deny this. 

But the criterion is vital necessity, for life or health. (We can still kill in self-defence, and exterminate bed-bugs.) Is biomedical research on sentient animals a life-or-death necessity for humans?

I think you have also identified the answer: “much of animal research feels completely unnecessary.”

Much research – like much of what humans do – is not done because it is vitally necessary for our survival or health. A lot of research – in all fields, not just animal research – is driven by curiosity, careerism, fads & bandwagons, funding, profit, habit, and, frankly, also ignorance and incompetence. These human foibles are perhaps tolerable or at least understandable where they don’t involve living beings. But where they entail hurting or killing sentient animals, the question must be asked: Is it vitally necessary? Does it save (human) life and health?

It is undeniable, today, that some biomedical research does save lives and health. Covid research is already an example.

So the first part of the answer about biomedical research on animals is that much of it is unnecessary, hence unjustified, but not all of it.

And it has to be added that a call for the immediate abolition of all animal research – just like a call for the immediate abolition of all human consumption of animals – is both unrealistic and unjust. It is not kindness to call for sacrificing sick humans any more than it is kindness to call for the starvation of subsistence cultures (or of obligate nonhuman carnivores).

But it would be sophistical to cite these prominent exceptional cases as justifications for continuing to allow and support massacring animals for food regardless of whether it is vitally necessary. Or for continuing to allow and support biomedical research without far, far more conscientiously limiting it to what is likely to save lives. (Sophists and corporate interests and their lawyers will of course always try to play on the slippery slope of “likelihood,” but, again, many if not most cases are transparent enough so decent people will see that likelihood is not seriously at issue.)

It might even work against the urgent needs of the tragic number of animal beings who are suffering and dying every minute, everywhere, at human hands, needlessly, to insist that veganism means renouncing the life-saving benefits of medicine too.  To wrap together a call to stop causing needless animal suffering with a call to give up potential medical help can only add to the resistance to renouncing either of them. 

So my approach would be to stress the need to abolish that vast proportion of biomedical research on animals that is unnecessary or incompetent, while focussing on ending the monstrous amount of suffering that humans inflict on animals gratuitously for food, fashion, finance or fun, without the slightest connection to health or survival needs.

Haggling about the price

Anonymous: Trying again with Weinberg and “Third Thoughts”. Into chapter on so-called Symmetry, and on Higgs. For the life of me I can’t believe they are on about anything real — anything that has thingness, rather than just counters in a language with arcane rules and words whose meanings are dependent only on the rules of their use and relationships with each other. Yet I accept that through some long chain of reasoning within that language, at some point there are “observations” which, by a long chain of derivations and dependencies are of something presumably real. And I accept that these people are way smarter than I and are not trying to fool people. This does little to shove a stable reality under their ideas, but just leaves me indifferent to particle physics. Did people feel like that about Newton  or Galileo?

Individuals (my apple, “Charlie”) are categories that are grounded directly in my sensorimotor experience (though it requires an act of inductive faith, bolstered by the biologically inbuilt feeling I have that “Charlie” is the same “thing” across time – which is already an abstraction). 

There’s your thingness; as concrete as things ever get.

“Charlie” is red, which, too, is still a direct sensorimotor category, but already more of an abstraction from my direct experience, more of a leap of faith. “Colored” and “color” (and other “universals”) still more so.

The moon and all of its properties too.

“True” and “truth” are likewise way out there, no longer directly sensorimotor, but a verbal combination of properties (likewise named categories) ultimately grounded in sensorimotor ones.

Begins to feel like the kind of faith we feel for the theorems we prove in maths and algebra, far from the axioms we began with, but based on a faith (though not much immediate memory) in the rules of derivation that we learned, that make sense locally but become a blur when they become a long chain we hardly remember.

So aren’t electrons, or quarks, or superstrings, or chirality or superposition just still more of the usual leaps of faith that all categorization and abstraction entail? Far from the inbuilt sense of “things” that Darwin helpfully underwrites in our perception – but no different from most of the other things we feel we know and understand across time.

So in the end it boils down not to the reality of things but, as usual, the “hard problem” of why anything feels like anything at all…

(Or just the usual (Shavian?) quip, about having established our profession, just haggling about the price…)

Understanding Understanding

Horgan’s essay in Scientific American is a singularly superficial (and glib) misconstrual of every one of the points he discusses: (1) the Turing Test, (2) computation, (3) Searle’s Chinese Room Argument, (4) the other-minds problem, (5) zombies, (6) understanding QM, (7) understanding and (8) consciousness.

(1) The Turing Test is not about question-asking/answering but about lifelong verbal capacity and performance indistinguishable from that of any normal human being.

(2) Computation (and mathematics) is just symbol-manipulation, algorithms, syntax, without meaning or understanding. (Mathematicians understand, but computation is just formal syntax. A proof or a procedure is not based on what a mathematician understands (or thinks he understands); it is based on what his algorithm can do, formally, syntactically, not on its interpretation by us – although we are not interested in uninterpretable algorithms.) But language, though it includes formal mathematics, is not just interpretable syntax. Neither is thought (cognition); nor understanding.

(3) Searle’s Chinese Room Argument simply shows that neither seeing nor executing the syntax will generate understanding. Searle would no more understand Chinese from just the algorithm for manipulating Chinese symbols than he would understand he was playing Chess from executing the algorithm for chess, coded in binary. Words have referents in the world: they don’t just have links to other words and algorithms for manipulating them. But Searle only refutes the verbal version of TT, not the robotic version, which is not purely symbolic.

Harnad, S. (2001) What’s Wrong and Right About Searle’s Chinese Room Argument?
In: M. Bishop & J. Preston (eds.) Essays on Searle’s Chinese Room Argument. Oxford University Press

(4) The other-minds problem is not “solipsism.” (Both Horgan and Turing himself are wrong on this.) Solipsism is skepticism about the “outside world” (all of it).

(5) A “Zombie” would be a TT-passer that was insentient (unconscious, nobody home) – if that were possible.

(6) Quantum Mechanics is not “understood” by Physicists; they know how to execute the algorithm and predict the experimental outcomes, but their interpretations are far-fetched sci-fi conjectures. (Horgan is right about this.)

(7) To understand is to have a (correct) causal explanation, grounded in real-world observations and measurement PLUS what it feels like to understand something. (But that feeling, as Russell (?) has pointed out, can also be mistaken).

(8) Consciousness (sentience) is a state that it feels like something to be in.

Computation itself is hardware-independent (though it has to be executed by some hardware or other, whether Samsung or Searle).

What is executed is the algorithm, which is rules for manipulating symbols on the basis of their shapes.

That’s what Searle does. The execution of the algorithm on the input data (Chinese messages) may include saving some data and updating some rules.

Searle should not have described the Chinese room as static symbols on a wall, but as an algorithm, to be applied to Chinese input symbols. He memorizes the initial algorithm and does the data storage and algorithm updates in his head, as needed, and as dictated by the (current) algorithm. (It is irrelevant [for this thought experiment] whether he has enough time or brain capacity to actually do all that. Do it with hexadecimal tic-tac-toe; the punchline is the same: he would be executing code without a clue as to what it meant.)
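To make the shape-based nature of such execution concrete, here is a minimal illustrative sketch (mine, not Searle’s or Turing’s, and the rule table is invented for the example) of what executing an algorithm of this kind amounts to: matching and rewriting symbol shapes, with nothing anywhere in the execution that depends on what, if anything, the symbols mean.

```python
# A toy illustration of purely formal symbol manipulation: rewrite rules
# applied to strings solely on the basis of their shapes. The executor
# (human or machine) needs no idea what, if anything, the symbols mean.

RULES = {          # hypothetical rule table, invented for this sketch
    "@#": "#@@",
    "#@@": "@@@",
    "@@@": "@#",
}

def step(s: str) -> str:
    """Apply the matching rewrite rule to the string, if there is one."""
    return RULES.get(s, s)

def run(s: str, n_steps: int) -> str:
    """Execute n_steps of the algorithm on the input symbols."""
    for _ in range(n_steps):
        s = step(s)
    return s

print(run("@#", 5))  # prints "@@@" -- flawless execution, zero understanding
```

The punchline is the same as with hexadecimal tic-tac-toe: the code can be executed perfectly without the executor having any clue as to what, if anything, it is about.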

But the deepest shortcoming of computationalism – the theory that cognition is just computation (aka “Strong AI”) – is that in cognition, symbols have referents: physical things in the world to which they refer. Computation alone is just the manipulation of symbols based on their (arbitrary) shapes (0/1). There is nothing in the domain of arbitrary symbols that can connect with real-world referents: there are just symbols that can be interpreted (by the user) as referring to real-world objects. And that’s despite the distributional patterns and shadows cast by huge bodies of interpretable symbols (e.g., GPT-3 – or any form of unsupervised or supervised learning, shallow or deep, small or big, as long as it’s all within the circle of symbols). Sensorimotor function, in contrast, is not symbolic, nor can computationally simulated sensorimotor function take its place (although the Strong Church/Turing Thesis remains true: computation can simulate (i.e., model) just about anything, from atoms to galaxies, including rockets, robots, rabbits and rhinencephalons, if you come up with the right algorithms…).

(That’s why my own conclusion is that computation alone cannot pass the (verbal) Turing Test; only a robot, with sensorimotor grounding in the world of referents of its symbols, could pass the TT.)

Searle is also wrong that his Argument proves that cognition cannot be computation at all (and hence we should all go off and study the brain): all he has shown is that cognition could not be all computation.

Turing said that the question of whether machines can think is “too meaningless to deserve discussion.” But there is a rational construal we can ascribe to this dismissiveness:

It feels like something to think. We have that feeling. And, of course, we are machines (i.e. causal systems).

So the question is not whether machines can think (of course they can, because we can) but which machines can think — i.e. which ones are capable of being in those states that it feels like something to be in, and how?

And Turing’s methodological proposal amounts to this:

If and when you can build a machine (so you know how it can do what it does) that can do anything that the (normal) human thinker can do, to the point where you can’t tell them apart at all based solely on what they can do (their lifelong cognitive performance capacities, foremost among them being language), then it is meaningless to try to distinguish them.

The premise, of course (because of the other-minds problem [which is not “solipsism”: even Turing gets that wrong, as I said] which prevents us from being able to observe whether anyone other than me feels) is that the only thing we have to go by is whether the candidate can do what a thinking human can do. That refers largely to the exercise of what we call the “cognitive” capacities (perceiving objects, reasoning, learning, remembering, communicating verbally) that (human) thinkers have.

One could insist on a bit more: the brain correlates of felt states.

On the one hand, what our brain “does” with its physiological and biochemical activity is definitely part of what we can do (in fact the brain is the causal mechanism that is producing it). 

But the intuitive test perhaps worth reflecting on is this: if one of the people we had known and communicated with for 40 years turned out to have had a synthetic device in his head all along, instead of a biological brain, and hence he lacked the activity correlated with felt states, would we conclude – on the basis of that evidence – that he does not feel after all?

I think Turing would consider that pretty arbitrary too: for what did we actually learn from the lack of the brain correlations? 

(The notion of some sort of transition or singularity along the causal chain from the head of the thinker to the things in the world his thoughts are about is a misplaced hope, I think. Conscious states are just felt states. And both the feeling and what generates the feeling, is skin-and-in…)

Taste

Re: Delicious: The Evolution of Flavor and How It Made Us Human (R Dunn & M Sanchez)

My (shamefully late) moral awakening has made me unable to read most of what I used to consider our classical literature. The flagrant and unquestioned abuse of animals is everywhere. 

I feel the same way about the historical and biological past of human (gustatory) taste. In broad strokes, I know the story, but the details are not titillating. 

In fact I think that in a much broader sense I have renounced most “taste” of  all sorts, both gustatory and aesthetic/cultural. It’s so imbued with pleasure at the expense of the suffering of others, human and animal.

Of course there’s no rejecting Darwinian facts – but there’s no pleasure in rehearsing, replicating or revering them.

Yes, this does superficially resemble some sort of ascetic, killjoy puritanical cult. But although it would take a while to explain it, I don’t think it’s that at all — and in some fundamental ways the opposite: the enemy is not pleasure itself, but pleasure at the expense of the suffering of others.

Conscience

Causing less killing and suffering is always preferable to causing more. But Jacob Maxwell (“Boycotting Factory Farming is Not the Only Solution to Animal Suffering”)  has not faced the deeper ethical question underlying it, which is about causing *needless* killing and suffering at all.

An obligate carnivore like a lion needs to kill and eat its prey to feed itself and its young; and its prey needs to fight back or flee to save themselves or their young.

Humans — facultative omnivores — have been both predators and prey throughout their evolutionary history. In the few remaining human subsistence-culture environments of fishing and hunting it is still not a matter of choice. 

But modern agriculture and laboratory science have made it a matter of choice for the more prosperous majority of the human population. We can choose to hurt and kill, needlessly, or not.

Jacob counsels us to cut back on needless hurting and killing, but to “limit guilt.”

Why? He needs to look much, much more deeply at whether this is any different from counselling to cut back on the beating and killing of our slaves.

The needless harming and killing of feeling beings (human or nonhuman) is surely the wrong of wrongs. It’s what all morality is about. Is it right to counsel limiting our feeling that it is wrong, to leave us feeling free to keep doing it — and leave our victims to keep suffering and dying for it?

Cephalopods, Split-Brains and Siamese Twins

Anon: “Godfrey-Smith worries about the feeling of having separate streams of mind, due to trying to imagine what being an octopus is like.”

“Have” is a weasel-word. The only “haver” is the feeler — unless there are multiple co-habiting feelers, in which case the “haver” is the (shared) body, and that geographic “having” is not a mental state (i.e., not a felt state) but just a geographically proximal (potentially even simultaneous) pair of distinct states, co-colonizing the same somatic substrate.

I think the crucial intuition is that a feeler cannot feel another feeler’s feeling. It can have a similar feeling, in response to the same external input; it can have it simultaneously (or successively); the feeling can feel as if it were another feeler’s feeling. But it is not, and cannot be, another’s feeling.

And that is part of the nature of feeling (hence of having a mind): Feeling is a state (generated by a neural substrate). A state that it feels like something to be in. A felt state is always “dual” in that there is the feeler and the feeling. (This is vaguely and insufficiently analogous to moving: there is the thing that is moving, and there is the moving itself.) There cannot (pace Freud) be an “unfelt feeling” any more than there can be an “unmoved moving.” 

But feeling is not contemplation by a Cartesian ego. That’s a cognitive capacity that some feelers (e.g., humans, verbally, and probably many other vertebrates and perhaps some invertebrates, nonverbally) have; and other feelers (e.g. amphioxus, or annelids) don’t. But the duality (some prefer to call it, unhelpfully, “relationality”) is inherent in the nature of feeling itself.

“Co-habitation” and geographic overlap are certainly possible, but that has to do with the causal substrate of the feeling(s): the causal mechanism that is generating the feeling(s). If ever there was a category error, it’s that of conflating (1) the neural substrate that is generating the feeling with (2) the feeler of the feeling. The feeler is a part of the feeling generated. And, absent “telepathy,” a feeler cannot literally feel a feeling that is generated by another neural substrate — whether a nearby or even a partly overlapping one: Siamese twins who share part of their brain so that they both feel it when their conjoined arm is touched are not feeling the same feeling, just an otherwise almost identical one — only almost, because the twins are not spatially identical, otherwise we would be deeply into the metaphysics of indiscernibles!

Anon: “He uses trying to imagine split-brain people but – for him and for me – that is not entirely satisfactory… he has driven his car safely over a familiar route and really can’t recall any memory of having done so: I mean he knows he did; it’s not a surprise, but really it was absent his conscious attention.”

We can do things without feeling we are doing it, or without remembering that we felt we were doing it, or even without remembering or otherwise knowing we did it. And I suspect that that can be true of us simultaneously (especially for our vegetative functions, like breathing, which we can do both deliberately and feelingly, and automatically without feeling it). In that sense, we are all octopus-like time-sharing multi-processors, simultaneous and successive (like the split-brain).

Doing, Feeling and Weaselling

Anon: “I agree with your take on mental states as conscious or felt states (seemingly unpopular in philosophy).”

Unfortunately, that one sentence already illustrates that it could not be that you agree with me — or even understand what I mean!

What I mean is that “weasel words” like “mental” and “conscious” are really just synonyms of “felt” — but they are being used as if they meant something else or something more. Your sentence, to my ears, reads as follows:

— “I agree with your take on felt states as felt or felt states (seemingly unpopular in philosophy)” —

What seems to be popular in (some) philosophy is the kind of vacuous redundancy that the above transcription unmasks.

Anon:  “however, I was surprised to find that you do not accept that plants and bacteria are cognizing.”

“Cognize” is not a weasel-word but a very vague place-holder for what is going on inside an organism to generate the capacity to do (some of) the kinds of things that (some) organisms can do.

Cognitive science is the field of research that is trying to find out (reverse-engineer) what those internal goings-on (structures, processes) are, that generate (cause) those doing-capacities.

This would just be a very strained way of stating the obvious if it were not for the fact that some of those internal goings-on (in some organisms, sometimes) are felt states (or, perhaps more modestly, they are states that generate not only doing but feeling).

There is another distinction we make, within the category of “doings.” We distinguish “cognitive” doings from (what we could call) “vegetative” doings. 

The cognitive/vegetative distinction is clear on both ends of what might (or might not) be a continuum. When organisms are doing things that we think involve “thinking,” we tend to call those doings (or the internal goings-on that generate them) “cognitive.” And when they are doing things (like thermoregulation, digestion, immunosuppression, etc.) that we think do not involve “thinking,” we call those doings (or the internal goings-on that generate them) “vegetative.”

This “cognitive/vegetative” distinction would be completely vacuous if it were not for the existence of feeling. For cognitive doings also tend to be felt, whereas vegetative doings do not. And many (but not all) cognitive doings tend to be felt as deliberate or voluntary. They feel as if “we” (not just our bodies) were causing them.

So the “cognitive/vegetative” distinction is correlated with the felt/unfelt (as well as the voluntary/involuntary) distinction.

That’s why the question of whether we should call the internal goings-on that generate the doings of plants and bacteria “cognizing” is not just a terminological question: it depends on a matter of fact: Do plants and bacteria feel?*
—*I have no problem with the word “sentient,” which is not yet another weasel-word. It simply means “capable of feeling” when it is said of an organism, and is homologous with “felt” if it is said of an internal state. It is useful for making distinctions among the nominal, verbal and adjectival senses of the notion and phenomenon of “feeling” — in English. Both English and German could, awkwardly, express “sentient state” as well as “sentient organism” with “felt state” (“gefühlter Zustand”) and “feeling-capable organism” (“gefühlfähig Organismus”), but it would sound even more awkward in English than in the already more agglutinative German (though even German changes the root in its Latin-free version of “sentient organism”: “empfindungsfähig Organismus”). In Romance languages like French, the Latin root, from “sentire” (“to feel”), is evident in « sentir » “feel,” « état ressenti » “felt state” and « organisme sentient » “sentient organism.” In a pinch, English could make do without “feel” or “feeling,” relying only on “sense,” “sensing,” “sensation” (whether sensing a surface, a sound, a sorrow, or a significance), “sentient” (for both species and states) and “sentience.” — This is all just a trivial linguistic matter, but weaselling is not.

Anon: “Those who seek to argue for bacteria or plant sentience often argue that they are cognitive agents and hence have minds – with some going further and saying they are conscious.”

Those who seek to argue for bacteria or plant feeling often argue that they are feeling-capable doers and hence have feelings – with some going further and saying they feel.

We know that bacteria and plants can do things. The question is whether they can feel. (This is also the only biological domain where the other-minds problem is factually and morally nontrivial; it would also be nontrivial if we had Turing robots: synthetic but totally indistinguishable from us in their doing-capacity.)

Anon: “Wouldn’t it be more natural to argue that mental states are cognitive states such as memory or ‘knowledge’ in cells?”

De-weaselled and disambiguated:  “Wouldn’t it be more natural to (argue) that internal states are cognitive states such as memory or ‘knowledge‘ in cells?”

Once it’s de-weaselled of “mental” this just becomes a question about whether or not an organism has felt states, and if so, which ones are felt (or somehow accompanied by feeling). And this may again involve the fuzziness of our notion of the boundary between vegetative and cognitive functions (doings).

Anon: “Wouldn’t it be more natural to argue that our empirical findings of sensory-based motor control extend our concept of cognition, but by doing to disassociate it from the mind?”

De-weaselled and disambiguated:    “Wouldn’t it be more natural to argue that our empirical findings of sensory-based motor control extend our concept of cognition? (Wouldn’t arguing that it is just) “doing” disassociate it from feeling?”

No, it would not be more natural to conflate doings with feelings. Nor would it be true that they are the same thing.

Internal states can be cognitive or vegetative. They are cognitive only in organisms that can feel (when they are feeling).

Anon: “If you want to describe all of these phenomena as merely ‘doings’ it would seem that we lose useful scientific vocabulary to describe what is happening.” See: https://aeon.co/essays/how-to-understand-cells-tissues-and-organisms-as-agents-with-agendas

That article describes remarkable doing-capacities of organisms. But the question of whether (and why!) any of them are felt is begged.

“Scientific vocabulary” is not at risk. It is just over-interpreting (if it attributes feelings where there aren’t any) — or underestimating (if it implies that explaining doing-capacity is all there is to explaining feeling — or, worse, that all there is is doings and doing-capacity). 

No, robots are not mid-way between doing-capacity and feeling-capacity. They either just do, or they also feel. The same is true of living organisms, from single cells to mammals: They either just do, or they also (sometimes) feel. 

To anthropomorphize is to attribute feeling where there is none. And where there is none, none is needed. 

The hard problem is explaining how and why those organisms that do feel, feel.

Needless to say, “agent” is a weasel-word. There are just unfeeling doers (like rocks, rockets, and, I think, rhizoids, rotifers and rhododendra) and feeling doers (like mammals and mollusks), and the boundary (for feeling, not doing) is all-or-none.

The rest is just explaining how and why they can do what they can do — and, for the intrepid, also how and why those that feel can feel. Other than that, the “cognitive/vegetative” continuum is arbitrary, and nothing is at stake.



Cultural Pluralism, Biases and Vulnerability

“Natural Science” is intrinsically asocial. The only exception at the fuzzy borderline is anthropology (which includes ethnology and, according to some, linguistics). “Science” is also ambiguous; in some languages (e.g., French and Hungarian) it means, jointly, natural science, social science and humanities scholarship.

Natural scientists, whether theoreticians or experimentalists, think of their work as answerable only to objective “empirical” testing and logic — the so-called “scientific method.” As a consequence, many “natural scientists” are implicitly or overtly sceptical of the “social sciences” (or « sciences humaines », as the French call them, or “Geisteswissenschaften,” as the Germans call them), dismissing them as pseudoscience.

Sometimes the “natural scientists” are right. But far more often they don’t understand, or misunderstand. This has occasionally led to internecine squabbles and even enmity in the Academy. (See Mathematician André Weil vs. Sociologist Robert Bellah at the Institute for Advanced Study and Mathematician Serge Lang vs. Political Scientist Samuel Huntington at the National Academy of Sciences.)

That the valiant defenders of the sacrosanct frontiers of “science” are often mathematicians, whose purely formal discipline lacks one of the two pillars of the “scientific method” (which has sometimes cast doubt on the scientific status of their own discipline), is ironic, but perhaps instructive as to why Professor Lovász was so helpless in the face of Orban’s depredations. Not understanding the social sciences, he was hardly in a position to defend them.

Joseph Palinkás, in contrast, an atomic physicist and an Orban fellow-traveller, failed to defend the non-natural sciences in the “philosopher affair,” and has only lately discovered his conscience, now that it’s much too late.

There is an eerie and unpleasant familiarity in the tepid defence (and even active derogation) of their non-natural foster-sister sciences by some “natural scientists,” such as neuroscientist Tamás Freund — playing into Orban’s hands while still feeling immune from his depredations. Social scientists and humanists would have a lesson to teach them, if they would listen.

My own vote (if I had not quit the Academy in 2016 in disgust over its passive submissiveness) would be for the linguist in the demilitarized zone, Csaba Pléh.

Madonna e progenie

the source of all love, 

kindness, empathy,

mercy

that, despite itself,

psychopathic DNA

was impelled to instill

in all species

that depend on begetter’s nurturance

to propagate their seed

Covid Cavity

An undeniable sense of emptiness in covid confinement

in the distancing and zooming

something is missing, we think

till we realize

that’s pretty much all it’s been all along

and always will be.

Vaguely like sex through a condom