עַיִן תַּחַת עַיִן (An Eye for an Eye)

ANON: “Is co-habitation and occupation of land an equal sin to that of murdering another human?”

Taking and losing life and taking and losing “land” are all done by, and done to, individual people. You can’t quantify or equate “sin” without counting individual doers and individual done-to’s, individual victims and individual perpetrators, on “both” sides.

Taking “land” includes taking everything else that individual lives are lived for (including family and freedom).

Perhaps the most telling detail of this tragedy – which has individual victims on “both” sides – is that there aren’t just two sides:

The individual victim-counts and the individual perpetrator-counts, for both “land”-taking/losing and life-taking/losing, start long before 1949 and extend beyond “Palestine” (which I take to be a geographical region, not a further metaphysical or moral entity).

The “takings” are not just local, in either place or time, although individuals remain individuals and local (when it comes to counting victims and perpetrators).

All of this could have been said of the taking of both life and “land” throughout the entire history of humanity (and inhumanity), planet-wide. It even has a biological basis (and it goes beyond the boundaries of species):

Life necessarily entails conflicts of interest – vital (i.e., life-or-death) interest. And “land” covers a multitude of vital interests, not just real estate or geometry: it covers just about everything that life is worth living (or dying, or killing) for.

So, no, if there were a measure of degree of “sinfulness”, it could not be expressed as life vs. land.

There is, of course, more to say; much more. But it’s best to expose simplistic metrics before starting the counting.

Certainty: “Cogito” vs. “Sentio”

SH: Please compare “Sentio, ergo sentitur” to “Cogito, ergo sum”.

GPT-4: “Sentio, ergo sentitur” is as epistemically solid as the Cogito but avoids unnecessary metaphysical commitments. It asserts only the occurrence of feeling, not the existence of a self.

SH: That’s exactly the aim: to assert what is certain, incontrovertible. I think the Cogito is over-reaching, implying that (a) the fact that a feeling (thinking) is being felt makes it indubitable, hence certain eo ipso that (b) a feeler (thinker) “exists” — whereas (b) is just imported assumptions, not part of the self-evident, undoubtable argument.

GPT-4: “Sentio, ergo sentitur” captures only what is certain and indubitable: that feeling is occurring. It avoids the unnecessary leap from experience to the assumption of an experiencer (a “self” or “thinker”), which Descartes smuggles in with “sum.”

Your formulation is more rigorously minimalist than the Cogito because it does not overreach beyond what is actually self-evident. It is a pure phenomenological statement, grounded entirely in what is felt, not who is feeling.

If the goal is to assert only what is certain, without presuppositions, then “Sentio, ergo sentitur” is the better starting point than the Cogito.

Would you like to develop a stronger counterargument against the Cartesian assumption of the self, or do you think this already suffices?

SH: I think it should be sufficient to point out that the “self” wrapped into the Cogito is a speculative non-sequitur (imposed by 1st person syntax) and leaves the Cogito wide open to doubt. In contrast, the Sentio is as indubitable as formal necessity (non-contradiction). (I am not a phenomenologist at all; I am a sentience activist!)

GPT-4: Yes, this is not phenomenology in the Husserlian sense but rather a sentience-first epistemology, stripping away the metaphysical baggage to affirm what matters: that feeling occurs, and that this undeniable fact is the only foundation that needs to be recognized.

For a sentience activist, this position is much stronger than the Cogito, because it does not hinge on an abstract “self” but on the reality of feeling itself, which is what actually matters, both epistemically and ethically.

Would you say that “Sentio, ergo sentitur” could serve as a more precise, non-anthropocentric foundation for grounding sentience in ethics as well?

SH: Not really. First, I don’t think the Cogito, whether as originally proposed by Descartes or with the non-certain ontological baggage dumped, ever provided a “foundation” for science, as purported. It was just a slight narrowing of the territory susceptible to sceptical doubt.

But science was never destined to be about certainty, just probability on the evidence so far: provisional and approximate. It had previously seemed that only formal logic and maths could provide certainty (logical necessity, on pain of contradiction). What the Cogito would have provided, if scaled down to the Sentio, is one other thing about which a human could be certain: that I feel (when and while I am feeling).

That’s something, but it’s not a “foundation” for anything, either scientific or epistemological (let alone ontological, though it does point out that feeling is a second certainty). But, of course, certainty itself is just a feeling! One can feel certain about the truth of something that is not in fact true; it only feels true, to you. Nor does feeling provide a foundation for ethics, except in the sense that it is the only thing ethics is about, and without sentience there would be no ethics.

GPT-4: I agree that the Cogito, even in its stripped-down form, was never a real foundation for science – not in the way Descartes imagined. Science doesn’t deal in certainty but in empirical probability, always provisional and subject to revision based on new evidence. The only domain where certainty holds is formal necessity – mathematics and logic – where contradiction is not permitted.

What “Sentio, ergo sentitur” does provide, however, is one more certainty beyond formal necessity: the fact of feeling when and while one feels. But as you point out, this is not a foundation for science, epistemology, or ontology – just an undeniable fact about sentience itself.

And yes, certainty itself is just a feeling: one can feel certain about something false, which makes subjective certainty unreliable as a basis for truth. Feeling is not a foundation for ethics either, but it is the only relevant content of ethics: without feeling, there would be no reason for moral concern, because nothing would matter to anyone or anything. Ethics is about sentience, not grounded by it.

This makes sentience-first ethics fundamentally different from rationalist or deontological ethics: it doesn’t start from rules, reason, or duty, but from the fact that feeling exists and that it is the only thing that can be harmed or benefited.

November 5, 2024: Musketears

The Economist, as usual, thinks it’s about the Economy, Stupid – not Ethics, Equity, or Eternity. I weep for the Earth, and its critters – except the perpetrator, Anthropos (though even there it will be mostly innocent victims who bear the brunt, not the concupiscents responsible, nor even the stupid ones, clueless…). What has really prevailed (to paraphrase Bertrand Russell, who was close) is the foul odor of Musk.

“Workman, blame not thy tools!”

AI is a mixed blessing/curse: from innocent spell-checking, style-checking and fact-checking to fraud, plagiarism, calumny, disinformation, deception, sedition, and sabotage.

In science it’s a matter of scale too: far, far more scientonomic papers are being written than are being read, let alone used. They are just being counted. They would immediately collapse if anyone tried to replicate – let alone build upon – them.

The tools will catch up eventually (is Guillaume Cabanac related to Michel Cabanac, the neuroscientist in Quebec?), but, as usual, it will be an arms race (“spy vs. spy”) – and we may already have committed terminal, irreversible biocide. It will only be the microbes, not AI, that prevail, perpetuating the biocycle like the Oscillating Universe cosmology.

Jeremian or Cassandrian?

(Full disclosure. I consulted ChatGPT about whether there are any more idiomatic homologues: unless misinformed, there aren’t any…)

Ceterum censeo: There is no such thing as AGI, probably for the same reason there is no synthetic sentience – which is not to say that there cannot be Abiogenesis. AI, in contrast, is just plain old anthropogenic tomtoolery (sic).

Tachylalic Tales

speed-of-light bibliophagy

tachyonic AI-modelling-to-action

cooking Canada

#

follow the money

Oceania vs. Eurasia vs. Eastasia

drone vs. drone

#

not sci-fi

can’t think of anything

anyone could do about it

#

DNA hurtling

toward digital dystopia

and biocide

#

and the breathless, gormless 

geeks soldier on

increasing the Trumps’ ocean-front real estate

Open Access and OpenAI

https://phys.org/news/2024-08-junk-ai-scientific-publishing.html

This is the era we’re in, where Gold OA has met ChatGPT.

It will partly sort itself out (for those with good intentions), by AI’s version of Mad Magazine’s Spy vs. Spy, with AI vs. AI (AI-detecting AI). But, as in disinformation and cybercrime, the bad actors will do their worst, and it will be an endless arms race until/unless failsafe encryption emerges and prevails.

Full disclosure: I’ve started using ChatGPT to help me edit down badly written, wordy, and repetitious commentary submissions to Animal Sentience that nevertheless have some redeeming substance, compressing them into more acceptable commentaries.

A SCOTUS Hermeneutics Exercise for ChatGPT4

Re: “Supreme Betrayal: A requiem for Section 3 of the Fourteenth Amendment,” by J. Michael Luttig & Laurence H. Tribe, The Atlantic, March 15, 2025

SH: [Query to ChatGPT4]: Could you please first summarize the following passage to show you have understood it, and then give some arguments to support that “2 + 2 = 4” is false and that “water is not wet” is true. 

“The needless, rushed, and astoundingly stupid decision of the US Supreme Court has made it crystal clear that, in incompetent (and malevolent) hands, interpreting the constitution has become like Microsoft trying to patch obsolescent and increasingly contradictory legacy code when what is really needed is a bottom-up rewrite. Holding back the ocean with sandbags comes to mind too. And of course, the infinite pliancy of Holy Writ, under the hermeneutics of zealots of any persuasion. Maybe language itself is to blame too. A good exercise for ChatGPT is to give it a true proposition, like “2+2=4” or “water is wet” and ask GPT to find an interpretation under which it is false. Not one, but a countless number of interpretations can always be found that will do the job. (Vice versa works for false propositions too.)”

ChatGPT4:

The passage criticizes the U.S. Supreme Court’s decision, suggesting it reflects a lack of competence and possibly malevolence in interpreting the constitution, likening the situation to attempting to update outdated and conflicting software or trying to hold back the ocean with sandbags. It highlights the flexibility with which holy texts can be interpreted by zealots of any persuasion and suggests that language’s inherent ambiguity contributes to these problems. The author proposes a thought experiment for GPT, challenging it to reinterpret universally accepted truths (such as “2+2=4” or “water is wet”) as false, to illustrate the point that language and interpretation can significantly alter perceived truth.

To engage with the exercise proposed:

Arguing that “2 + 2 = 4” is False

In a non-standard mathematical system or under certain interpretations, “2 + 2 = 4” could be considered false. For instance, in modular arithmetic, which deals with remainders after division by a certain number (the modulus), “2 + 2” does not necessarily equal 4. Specifically, in modulo 2 arithmetic, 2 + 2 = 0, because 4 divided by 2 leaves a remainder of 0. This example demonstrates that, depending on the mathematical rules and systems we agree to use, the truth of “2 + 2 = 4” is not absolute but contingent upon those systems.
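GPT-4’s modular-arithmetic point can be checked mechanically. A minimal sketch in Python (the snippet is illustrative, not part of the original exchange):

```python
# In ordinary integer arithmetic, 2 + 2 = 4.
assert 2 + 2 == 4

# In modulo-2 arithmetic, every result is reduced to its remainder
# after division by 2, so 2 + 2 = 4, which is congruent to 0 (mod 2).
assert (2 + 2) % 2 == 0

# Under a different modulus, the "same" sum takes yet another value:
# 2 + 2 is congruent to 1 (mod 3).
assert (2 + 2) % 3 == 1
```

Whether “2 + 2 = 4” comes out true thus depends on which arithmetic system is stipulated, which is exactly the interpretive latitude the argument exploits.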

Arguing that “Water is Not Wet” is True

The statement “water is not wet” can be supported by redefining or closely examining what it means to be “wet.” Wetness is a condition that applies to a solid object when it comes into contact with a liquid, such that the liquid adheres to the object’s surface in a layer. However, water itself is a liquid and does not become wet but instead causes other materials to become wet. From this perspective, saying “water is wet” is akin to saying “water makes water wet,” which is a misunderstanding of the conditions required for wetness. Thus, one could argue that water, in its capacity to induce wetness but not be in the state of wetness itself, is not wet.

SH: Very good. Both arguments are the prima facie ones I expected. Now could you please give one further argument in each case, to show that there’s not just one way to do it?

ChatGPT4:

Certainly, let’s explore an additional argument for each proposition:

Further Argument that “2 + 2 = 4” is False

One might consider the realm of philosophical argumentation, specifically nominalism, which suggests that abstract objects like numbers do not exist outside of our minds. From this viewpoint, “2 + 2 = 4” is an agreement within a human-constructed symbolic system rather than an eternal truth. If one were to argue from a radically different conceptual framework, where the symbols “2”, “+”, and “4” have different meanings or where the concept of “addition” is not defined in the way we commonly understand it, the statement “2 + 2 = 4” could be seen as false. For instance, in a hypothetical culture where quantities are expressed not by exact numbers but by qualitative measures (much, little, enough), the precise operation “2 + 2 = 4” loses its meaning, and thus it could be considered not true within that conceptual framework.

Further Argument that “Water is Not Wet” is True

Exploring the semantics of “wetness” can provide another avenue for arguing that “water is not wet.” One might argue that wetness is a perceptual phenomenon rather than an intrinsic property of substances. That is, an object is perceived as “wet” when it significantly alters our sensory experience of it, particularly through touch. Water itself does not undergo such a change in state; it is always in the same state. Therefore, it does not meet the perceptual criterion for wetness that a dry object does when it comes into contact with water. From this perspective, wetness is an emergent property that arises from the interaction between a liquid and a material that can absorb or be coated by the liquid. Since water itself cannot be coated by or absorb itself, it cannot be considered “wet” according to this definition of wetness as an emergent, perceptual property.

SH: Yes, and it can go on and on like that, with ever more fanciful interpretations, by adding more and more fanciful premises. Demagogues, devious lawyers, sophists, quacks, psychopaths, and theologians are particularly adept at this — and they all have the bottomless resources of language, rhetoric, and now LLMs to draw upon…

Ebenezer’s Yuletide Homily

This interview just reiterated how everyone is still gob-smacked at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, by filling in all the local bigram blanks, globally.
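The “filling in the bigram blanks” picture can be made concrete with a toy next-word predictor (a deliberately crude sketch; the corpus and function name are invented for illustration, and real transformers of course capture far longer-range dependencies than local bigrams):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (the "bigram blanks").
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def fill_blank(prev_word):
    """Predict the most frequent continuation seen in the corpus."""
    return bigram_counts[prev_word].most_common(1)[0][0]

print(fill_blank("the"))  # prints "cat" (seen twice, vs. "mat" once)
```

Scaled up from ten words to much of the Internet, and from bigrams to contexts thousands of tokens long, this blank-filling is the mechanism whose unexpected power the interviewees were marveling at.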

They’re right to be gob-smacked and nonplussed, because nobody expected it, and nobody has come close to explaining it. So, what do they do instead? Float a sci-fi icon – AY! GEE! EYE! – with no empirical substance or explanatory content, just: “It’s Coming!”… plus a lot of paranoia about what “they’re” not telling us, and who’s going to get super-rich out of it all, and whether it’s going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice, and Bellicosity (GMB) – a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think’s really going on? Language itself has gone rogue, big-time. It’s Advanced, alright; and General; but it’s anything but Intelligence, in our hands, and mouths.

And it started 299,999 B.C.

Spielberg’s AI: Another Cuddly No-Brainer

It would have been possible to make an intelligent film about Artificial Intelligence — even a cuddly-intelligent film. And without asking for too much from the viewer. It would just ask for a bit more thought from the maker. 

AI is about a “robot” boy who is “programmed” to love his adoptive human mother but is discriminated against because he is just a robot. I put both “robot” and “programmed” in scare-quotes, because these are the two things that should have been given more thought before making the movie. (Most of this critique also applies to the short story by Brian Aldiss that inspired the movie, but the buck stops with the film as made, and its maker.)

So, what is a “robot,” exactly? It’s a man-made system that can move independently. So, is a human baby a robot? Let’s say not, though it fits the definition so far! It’s a robot only if it’s not made in the “usual way” we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it’s still growing from “naturally” man-made cells, or clones of them.

What about a baby with most of its organs replaced by synthetic organs? Is a baby with a silicon heart part-robot? Does it become more robot as we give it more synthetic organs? What if part of its brain is synthetic, transplanted because of an accident or disease? Does that make the baby part robot? And if all the parts were swapped, would that make it all robot?

I think we all agree intuitively, once we think about it, that this is all very arbitrary: The fact that part or all of someone is synthetic is not really what we mean by a robot. If someone you knew were gradually replaced, because of a progressive disease, by synthetic organs, but they otherwise stayed themselves, at no time would you say they had disappeared and been replaced by a robot — unless, of course they did “disappear,” and some other personality took their place.

But the trouble with that, as a “test” of whether or not something has become a robot, is that exactly the same thing can happen without any synthetic parts at all: Brain damage can radically change someone’s personality, to the point where they are not familiar or recognizable at all as the person you knew — yet we would not call such a new personality a robot; at worst, it’s another person, in place of the one you once knew. So what makes it a “robot” instead of a person in the synthetic case? Or rather, what — apart from being made of (some or all) synthetic parts — is it to be a “robot”?

Now we come to the “programming.” AI’s robot-boy is billed as being “programmed” to love. Now exactly what does it mean to be “programmed” to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states — on/off, hot/cold, move/don’t-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don’t mean choosing to love one person rather than another (if we can “choose” such things at all, we get into the problem of “free will,” which is a bigger question than what we are considering here): I mean choosing to be able to love — or to feel anything at all: Is our species not “programmed” for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?

Let’s not get into technical questions about whether or not the genetic code that dictates our shape, our growth, and our other capacities is a “programme” in exactly the same sense as a computer programme. Either way, it’s obvious that a baby can no more “choose” to be able to feel than it can choose to be able to fly. So this is another non-difference between us and the robot-boy with the capacity to feel love.

So what is the relevant way in which the robot-boy differs from us, if it isn’t just that it has synthetic parts, and it isn’t because its capacity for feeling is any more (or less) “programmed” than our own is?

The film depicts how, whatever the difference is, our attitude to it is rather like racism. We mistreat robots because they are different from us. We’ve done that sort of thing before, because of the color of people’s skins; we’re just as inclined to do it because of what’s under their skins.

But what the film misses completely is that, if the robot-boy really can feel (and, since this is fiction, we are meant to accept the maker’s premise that he can), then mistreating him is not just like racism, it is racism, as surely as it would be if we started to mistreat a biological boy because parts of him were replaced by synthetic parts. Racism (and, for that matter, speciesism, and terrestrialism) is simply our readiness to hurt or ignore the feelings of feeling creatures because we think that, owing to some difference between them and us, their feelings do not matter.

Now you might be inclined to say: This film doesn’t sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different! But the trouble is that it does not really make us reflect on racism, or even on what robots and programming are. It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already.

There is a scene where still-living but mutilated robots, with their inner metal showing, are scavenging among the dismembered parts of dead robots (killed in a sadistic rodeo) to swap for defective parts of their own. But if it weren’t for the metal, this could be real people looking for organ transplants. It’s the superficial cue from the metal that keeps us in a state of fuzzy ambiguity about what they are. The fact that they are metal on the inside must mean they are different in some way: But what way (if we accept the film’s premise that they really do feel)? It becomes trivial and banal if this is all just about cruelty to feeling people with metal organs.

There would have been ways to make it less of a no-brainer. The ambiguity could have been about something much deeper than metal: It could have been about whether other systems really do feel, or just act as if they feel, and how we could possibly know that, or tell the difference, and what difference that difference could really make — but that film would have had to be called “TT” (for Turing Test) rather than “AI” or “ET,” and it would have had to show (while keeping in touch with our “cuddly” feelings) how we are exactly in the same boat when we ask this question about one another as when we ask it about “robots.”

Instead, we have the robot-boy re-enacting Pinocchio’s quest to find the blue fairy to make him into a “real” boy. But we know what Pinocchio meant by “real”: He just wanted to be made of flesh instead of wood. Is this just a re-make of Pinocchio then, in metal? The fact that the movie is made of so many old parts in any case (Wizard of Oz, Revenge of the Zombies, ET, Star Wars, Water-World, I couldn’t possibly count them all) suggests that that’s really all there was to it. Pity. An opportunity to build some real intelligence (and feeling) into a movie, missed.

Atlantic Monthly: Thoughtless Thoughts About the Right to Die

I couldn’t agree less with David Brooks’s reflections on euthanasia in this Atlantic article.

In the macabre phallophagic episode, both participants were really deranged, and should have been institutionalized. It had nothing whatsoever to do with euthanasia. That would only be brought in by a polemicist.

But the author’s exalted extolling of the sanctity and “dignity” of human life was polemics too. Nothing to do with the suffering and compassion that are the only things that matter in the question of euthanasia, but appealing instead to the supernatural dogma of creeds (which can be used to justify anything and the opposite of anything).

The mention of biology and evolution was also stunningly superficial. Yes, evolution is the source of the selfish-genetic indifference to the treatment and fate of strangers or competitors, but also the source of parental and familial and social empathy, and even a sense of fairness. And in all social species, not just human.

And when we turn to the human culture that spawned Holy Writ and holy wars and racism and slavery and sexism and nationalism and pogroms and genocide and capitalism, there’s not much empathy to be found there either; more the opposite. We give our token charitable tax write-offs to the poor and sick while we keep and spend incomparably more on ourselves than we could possibly need. And when others suffer, we want to deny them relief from that too, sanctimoniously pleading respect for the “dignity” of human life.

Human life? And I haven’t even mentioned the unspeakably monstrous cruelty with which we treat sentient nonhuman life. (And that too is blessed by our Holy Writ.)

Bravo to Canada if they put an end to any of that.