ANOSOGNOSIA

Why don’t they tell us (but would we want to know?)

that time’s inflationary,
its power of purchase doesn’t grow,
it shrinks:

at first imperceptibly,
then accelerating steadily,
till days are flicking by like phone poles from a wailing train

or stroboscopic fragments of your
parents’ nervous smiles
as you keep hurtling
round and round
on some
vertiginous
vehicle
of amusement.

Why, I remember, when I was your age, a morning would last a day,
a summer a year, a decade nearly a lifetime.
If I’d held my hands apart to show my life line up till then, look:
this is how long it would have been…
So do you think where I am now is seven times as far?
Ha! twice, three times at most.
And the last third’s
the shortest.”

But who can understand such baleful reckoning? and besides,
a proportionate paling of our sense and recollection
must be factored in too,
diminishing awareness of our ills
even as they increase.

Lord Russell had a killjoy uncle who informed him as a child,
at the close of an especially glorious day,
“You’ll never know another day like this one.”
And little Russell cried and cried, and then forgot.

Even “seize the rosebuds”
is a futile admonition
that rings true only
for the long since anosmic.
And isn’t there (confess it)
some diffraction too
in those wistful flashbacks to your day, when that buck stretched so much farther than it does today?

Which buck?
There’s no fixed scale
of barter or utility.
The goods, they differed too then,
and not just in their quality,
but in your own ability
to savor it.

Generation gap.
Communication gap.
It’s Zeno’s paradox:
You can’t get there from here.
“Tempora mutantur. Et nos?”

Why don’t they tell us (but would we want to know)?


CODA (Homage to Paul MacLean)

The one frail consolation
Once you’re just an old iguana
Is you can’t recall what’s bothering you
Even if you wanna

Istvan Hesslein (c. 1993)

RGN: 1926-2006 — (2006-05-10)

   A coward. An unforgivable, solipsistic coward, and hypocrite.

There are no words. By her own hand, like her life’s work, the unimaginable sufferings of the blighted body of a toweringly noble soul have at last come to an end, but not as and when she would have wished. The cruel decades have made a macabre mockery of any thought of what she “deserved.” Life failed her, unpardonably, and death failed her too, and most of all, the “healing profession” failed her; but she has triumphed over it all nonetheless, pellucid and undiminished, as not a single one of us could have come close to doing. I loved her and was in awe of her for most of my conscious life, far, far too much to even conceive of wishing her to have had to endure one moment longer; and yet her loss is unspeakable.

Descartes’ baby, the soul and art

From the fact that most adults and children believe in an immaterial, immortal soul, Paul Bloom, in Descartes’ Baby, concludes that this is somehow part of our evolved genetic heritage.

If so, then it would seem to follow, by the same token, that the belief in the divinity and supernatural powers of kings is likewise innate, and so are innumerable other beliefs, actual and potential, right down to Schulz’s Great Pumpkin.

More likely, our and our children’s belief in the soul arises from (1) our undoubtable (and undoubtedly evolved) Cartesian feeling that we ourselves feel (“cogito ergo sum” — “sentio ergo sentitur”), (2) our less indubitable but nevertheless irresistible (and probably evolved) feeling that others that are sufficiently like us feel too (“mind-reading”), plus (3) our complete inability to explain either the causes or the effects of feelings physically (the “mind/body problem”), so that, when forced, the only explanation we feel at home with is telekinetic dualism (“mind over matter”).

I am not at all sure that this amounts to an innate belief in the soul, but if it does, we certainly didn’t need empirical studies of children or adults to demonstrate it: Descartes could have deduced it from his armchair by reason and introspection alone.

Paul Bloom also presents developmental data that he thinks bear upon the question of what is and is not art, and on why we prefer originals to forgeries or to misattributed works by lesser artists. The findings concern children’s ability to understand that a picture is a picture of an object, rather than just an object itself; that drawings they (or others) have drawn are drawings of what they meant to draw, even if they don’t look like them; that a deliberate artifact is different from an accidental one, or a natural object; and that whereas children may sometimes prefer copies of things to the originals, when it comes to their own teddy bears, they’d rather keep the original.

This tells us about children’s understanding of representation and intention, and their ability to make the artifact/non-artifact distinction, but not about the art/non-art let alone the good/bad art distinction. Nor do the findings on children’s attachments to particular things help, since the very same preference (for originals over copies or different objects) would apply to Eva Braun’s underwear (which would have some cult/fetish/collector value while believed to have been hers, none once proven otherwise); hence these findings are about authorship, not art.

As to “Descartes’ Baby” — an apocryphal story that Descartes was so grief-stricken at the death of his 5-year-old daughter Francine that he built a life-like robot of her, which he took with him everywhere until it was discovered by a ship-captain who was so horrified by it that he threw it overboard: This is another example of our overwhelming (and no doubt innate) “mind-reading” tendency to interpret creatures that look and behave as if they feel as creatures that really do feel (even if the feeling that evokes in us is horror or disgust). This innate tendency of ours is put to more practical scientific use in Turing’s Test.

Basic Science vs. Reverse Engineering

Maupertuis’s Principle (of Least Action) is not quite the same as Darwin’s Principle of Random Variation and Selective Retention (i.e., automatic design based on the post hoc adaptive advantages — for survival and reproduction — of natural developmental or random variations). But the two would be ominously close if it weren’t for the (subsequent) discovery of evolution’s mechanism: Mendelian genetics and eventually the DNA double helix.
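For reference (this is standard textbook physics, not stated in the original passage), Maupertuis’s principle and its later, more general Hamiltonian form can be written as:

```latex
% Maupertuis's principle: among paths of a fixed total energy E,
% the "abbreviated action" is stationary:
\delta \int p \,\mathrm{d}q = 0
% Hamilton's later generalization, the principle of stationary action:
\delta \int_{t_1}^{t_2} L(q, \dot{q}, t)\,\mathrm{d}t = 0
```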

That said, there nevertheless is a big difference between Biology and Physics: Physics studies the basic laws of the universe, whereas Biology is mostly very local reverse-engineering: figuring out how (naturally designed and selected, via DNA variation/retention) devices (organs, organisms, biological systems) work. This is just forward engineering in reverse: forward engineering applies the laws of physics and the principles of engineering in order to design and build systems useful to Man; Biology takes already-built ones and tries to figure out what laws of physics and principles of engineering underlie them and make them work.

In contrast, Physics is not, I think, usefully thought of as merely reverse-engineering designed systems (e.g., the universe or the atom). The laws of physics precede and underlie all the possible systems that can be designed and built by either engineers, or the Blind Watchmaker.

Stevan Harnad

The Syntactic Web

In reality the “semantic web” is, and can only ever be, a “syntactic web”. Syntax is merely form — the shape of arbitrary objects called symbols, within a formal notational system adopted by an agreed and shared convention. Computation is the rule-based manipulation of those symbols, with the rules and manipulations (“algorithms”) based purely and mechanically on the shapes of the symbols, not their meaning — even though most of the individual symbols as well as the combinations of symbols are systematically interpretable (by human minds) as having meaning.
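This shape-based character of computation can be made concrete with a toy string-rewriting system (the rules and the unary notation below are invented for illustration): the machine “adds” correctly, yet nothing in it knows what a number is; the arithmetic interpretation is supplied entirely by us.

```python
def rewrite(tape: str, rules: list[tuple[str, str]]) -> str:
    """Apply string-rewriting rules until none applies.
    The rules consult only the *shape* of the symbols, never any meaning."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in tape:
                tape = tape.replace(lhs, rhs, 1)
                changed = True
                break
    return tape

# Unary addition as pure symbol manipulation: '|' strokes and a '+' sign.
# Deleting the '+' sign implements addition of the two stroke-strings.
ADD_RULES = [("+", "")]
print(rewrite("|||+||", ADD_RULES))  # -> '|||||'
```

The point of the sketch is that the single rule is keyed to the symbol “+” as a shape; the system would work identically if the strokes were renamed, which is just what makes the web’s computations syntactic rather than semantic.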

Semantics, in contrast, concerns the meanings of the symbols, not their shape, or the syntactic manipulation of their shapes. The “symbol grounding problem” is the problem of how symbols get their meanings, i.e., their semantics, and the problem is not yet solved. It is clear that symbols in the brain are grounded, but we do not yet know how. It is likely that grounding is related to our sensorimotor capacity (how we are able to perceive, recognise and manipulate objects and states), but so far that looks as if it will only connect symbols to their referents, not yet to their meaning. Frege’s notion of “sense”, which is again just syntactic, because it consists of syntactic rules, still does not capture meaning. Nor does formal model-theoretic semantics, which likewise merely finds another syntactic object or system that follows the same rules as those of the syntactic object or system for which we are seeking the meaning.

So whereas sensorimotor grounding — as in a robot that can pass the Turing Test — does break out of the syntactic circle, it does not really get us to meaning (though it may be as far as cognitive science will ever be able to get us, because meaning may be related to the perhaps insoluble problem of consciousness).

Where does that leave the “semantic web”? As merely an ungrounded syntactic network. Like many useful symbol systems and artificial “neural networks”, the network of labels, links and connectivity of the web can compute useful answers for us, has interesting, systematic correlates (e.g., as in latent “semantic” analysis), and can be given a systematic semantic interpretation (by our minds). But it remains merely a syntactic web, not a semantic one.
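The latent “semantic” analysis mentioned above can be sketched in a few lines (the toy corpus and all names are invented for illustration, and this is a bare-bones version of the technique, not any particular system): a term-document count matrix is factored by SVD, and words that never co-occur can still land close together in the reduced space. The “semantics” recovered is only co-occurrence statistics, systematically interpretable by us.

```python
import numpy as np

# Tiny term-document count matrix: rows are words, columns are documents.
docs = ["cat purrs", "cat meows", "dog barks", "dog growls"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Truncated SVD: keep the top-k latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
terms = U[:, :k] * s[:k]  # term vectors in the reduced space

def sim(w1: str, w2: str) -> float:
    """Cosine similarity between two word vectors."""
    a, b = terms[vocab.index(w1)], terms[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "purrs" and "meows" never co-occur, yet end up similar because both
# co-occur with "cat" -- a distributional correlation, not understanding.
print(sim("purrs", "meows") > sim("purrs", "barks"))  # -> True
```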

The Cognitive Killer-App 2006-04-16

Kevin Kelly thinks the web is not only what it really is — which is a huge peripheral memory and reference source, along with usage stats — but also a kind of independent thinking brain.

It’s not, even though it has connections, as does a neural net (which is likewise not a thinking brain).

KK is right that googling is replacing the consultation of our own onboard memories, but that is par for the course, ever since our species first began using external memories to increase our total information storage and processing capacity: Mimesis, language and writing were earlier, and more dramatic precursors. (We’re talking heads who already feel as helpless without our interlocutors, tapes and texts today as KK says we all will — and some already do today — without the web.)

And KK misses the fact that the brain is not, in fact, just a syntactic machine, the way the web is: There is no “semantic” web, just an increasingly rich “syntactic web”.

Nor (in my opinion) is the web’s most revolutionary potential in its role of peripheral mega-memory and hyper-encyclopedia/almanac. It is not even — though it comes closer — in its interactive Hyde-Park role in blogs and wikipedias. That’s just an extension of Call-In Chat Shows, Reality TV, acting-out, and everyone-wants-to-be-a-star. We’ve all had the capacity to talk for hundreds of thousands of years, but most of us have not found very much worth saying — or hearing by most others. The nature of the gaussian distribution is such that that is bound to remain a demographic rarity, even if the collective baseline rises — which I am not at all sure it’s doing! We just re-scale…

No, I think the real cognitive-killer-app of the web is the quote/commentary capability, but done openly — “skywriting”. At the vast bottom level this will just be the Hyde-Park “you know what’s wrong with the world dontcha?” pub-wisdom of the masses, gaussian noise. But in some more selective, rigorous and answerable reaches of cyberspace — corresponding roughly to what refereed, published science and scholarship used to be in the Gutenberg era — remarkable PostGutenberg efflorescences are waiting to happen: waiting only for the right demography to converge there, along with its writings, all Open Access, so the skywriting can begin in earnest.

Stevan Harnad

Paying the Piper 2006-03-26

Richard Poynder (RP): “Digital Rights Management may not prove workable in the long-term and can always be circumvented, so most creators are probably moving (like it or not) into a give-away world”

I don’t know whether all that’s true, practically and statistically, but if so, I’m not sure the outcome will be grounds for cheeriness. It turns creativity into a pop, distributed enterprise (which has some plusses, in some cases) but removes rewards from a kind of individual creativity that has brought us much great work in the past. Selfless creators there have been too, in the past, not motivated by desire or need for personal gain. But does that cover all, most, or enough of it?

[from another interlocutor] ‘Obviously, some artists of all kinds will always produce because they must; but if they have to do it as amateurs because they must earn their bread as janitors or professors of geology, then they will do much less work and it will be far less developed (as well as using only cheap materials). As all of them will be isolated from a (nonexistent) mainstream of common understanding and encouragement, styles will not develop, nor will there be any building upon others’ achievements. Each artist will remain a wild tree with small hard fruit, rather than a cultivated and well-fed tree giving a lot of fine sweet fruit.’

RP: “Apart from the isolated eccentric, all creators want their creations to be as widely distributed and read/listened to/seen as possible.”

Yes, but not necessarily at the cost of forfeiting any prospect of being able to make — or even to aspire to make — a fortune (or, in some cases, even to make a living). I have no idea about the true proportions, hence the statistical uncertainty, but I do raise a point of doubt, about a potential loss of a form of individualism that, on the one hand, resembles the materialist capitalism neither of us admires or embraces, but that, in the Gutenberg age (like religion, which I admire just as little as capitalism!) managed to inspire a lot of immortal, invaluable work. Where mass-market subsidiaries/services (not exactly the hub of most human creativity) are the only real revenue sources, collective efforts and wide exposure are not necessarily incentive enough to keep attracting enough of the selfish-gene-pool’s higher-end gaussian tail to make the kinds of contributions that have been the (inevitable, gaussian) glory of the history of human culture until now. Yes, new (collective, collaborative, distributed) forms of creativity are enabled, but I am lamenting the disabling of some of the older ones. It is not at all clear that they were spent.

And although collective, distributed adaptivity is perfectly congenial to our (blind-watch-) Maker, that is not in fact the basis on which He fashioned our current (selfish-) Genome. Apart from inclusive-fitness fealty to kin — which occasionally sublimates into selfless generic humanitarianism and altruism — most of our motivation is, frankly, individualistic, which is to say selfish, hankering for territory, dominance, and the tangible material rewards that can still engage the tastes instilled by the ancestral environment that shaped us. That’s what makes apples taste sweet, and sweetness desirable.

Scientists and scholars have always been a minority, and exceptional, in that they sought a cumulative, collective good: learned inquiry. Learned research was (at least since dark monkish days) a public, distributed, collective — and thereby self-corrective — enterprise. So, as I say, not much change there, in the PostGutenberg Era. But not so for music (which I hardly lament, since music as art has already died an unnatural premature death anyway, with the arrival, and departure, of atonalism, no thanks or no-thanks to digital doomsday), nor for literature (which was already suffering from block-buster economics before the digital day, but may now be dealt the death blow, leaving only the pop, How-To and movie-wannabe genres viable); film, I would say, has already done itself in.

So, as I say, it depends on the statistical proportions, on which I have no objective data. So too for hacking — the new kid on the block — for which it is still not clear whether individualism or collectivism is the most fruitful procreative mode.

RP: “The digital world allows diffusion, access and collaboration in a way never before possible. Not only does DRM threaten to take away all those benefits but, given the current status of the technology, it generally imposes even greater restrictions than were experienced in the pre-digital world. That’s bad news for creators.”

Bad news for: certain kinds of creators. The rest is the statistics of the kinds. Let me put my cards on the table: I’m not defending the divine right of the McSpielbergs of Gaia’s Gollywood to make limitless bundles on their showy, empty pap. They are not my models. Shakespeare and Mozart are.

RP: “Cory Doctorow did some calculations showing that the potential earnings of the average scribbler (we’re not talking J K Rowling here) have been on a downward curve for a long time, indeed long before the digital world. This is a product of something else, but can surely only be exaggerated by the introduction of digital technologies.”

I don’t know the works of CD or JKR, but I’d bet they’re not quite of the rarefied calibre of the WSs and WMs I had in mind! (What’s the point of regressing the rare masterworks on the menial mean?) Nor do I think WS or WM would have ever done an analysis like that, in reckoning whether or not to go OA with their work… (Actually, I think both WS and WM earned most of their rewards from the analog world of performance, not the digital-code-world of composition, but I’m certain that wasn’t true of Beethoven.)

RP: “Most (if not all) are (by the standards of main street) a little reckless about their careers. Many also seem to have a disdain for money. Given that we currently live in a world far too dominated by market forces and bean counting, I find this very encouraging. As the bard said ‘Getting and spending, we lay waste our powers’.”

A (big) part of me resonates with this too. But don’t forget that hackers are demographically anomalous, being an ant-hill of Aspergians and worse; it’s all you can do to get them to change their underwear, let alone balance their bank-accounts. And there have been selfless geniuses in other fields too. I just worry about the potential loss of the future ones that are not blind to the presence/absence of the very possibility of material glory, alongside spiritual!

Hal Varian, by the way, made similar statistical calculations about the likelihood of big-bucks for most authors. And I countered with the same dream-of-apples argument I raise with you: How many genotypes are not driven by that? are they enough? and would the loss of the potential “market” for the rest be insignificant, in the scheme of things? (Without statistics on this impalpable stuff, I don’t think anyone can say.) Reducing it all to McDisney re-use rights for high-schoolers, as LL does, just goes to show how the world of “creativity” looks to a well-meaning philistine: a reductio ad absurdum.

RP: “You are right to say that we don’t really have the necessary statistics to reach any firm conclusions, and so it is speculation. You are also right to say that there is no cause for cheeriness in this. But then each decade that passes we seem increasingly like ants in an anthill (Raymond talks about this in terms of the scaling up of society), and individualism features less and less. Maybe that’s just the way it is going to be.”

Ok for autists, but not for all artists…

RP: “It occurs to me, however, that if your model is the likes of Mozart, then will not future Mozarts continue to do what they always did, regardless of material reward? From what I know of Mozart (not enough by far) he didn’t seem too driven to do what he did in order to pay the bills; and when he did earn money was it not from playing the piano, rather than composing?”

That was still early days, and off-line coded composition had not yet become an autonomous livelihood, as distinct from on-line analog performance; but by the day of Dickens, Dostoevsky, Beethoven and Brahms it had (and Beethoven in particular was quite a copyright maven!).

RP: “If so, is that not again the same model (to all intents and purposes) of giving away your creation and making money from associated services. As I say, I may be wrong about Mozart, but is it not the case that people who are really gifted and driven to create just get on and do it, and rarely think about how they will pay the bills?”

The ones that survive to be heard from. (And do not under-estimate the prospect of potential riches as an extrinsic motivator. The failure of DRM would wipe that out as well, and thereby perhaps render a wealth of human promise still-born. I’m not saying there will not still be some intrinsically motivated stout-hearts. I’m just worrying about how many, and which, and, most important, which ones will be lost.)

RP: “Of course if the universe is, as you say, a “mindless, feelingless machine” (in which we are all trapped), then none of this really matters and individualism and creativity are all for naught right?”

That’s not quite what I said! The universe is mostly feelingless, but organisms are not. Functionally speaking, they may as well be, since their feelings can have no independent causal power, on pain of telekinetic dualism, but feelings they are nonetheless. So whereas aesthetics, like all other feeling, does not “matter” functionally, in that it has no causal role, it not only matters but is what we mean by anything’s “mattering” at all, affectively. “Mattering” is an affective matter!

RP: “ (I was listening to the World Service the other night, which was talking about Dawkins’ The Selfish Gene — currently celebrating its 30th birthday I believe — and he was saying how people used to write to him and say that they hadn’t slept for three weeks after reading the book, and could see no point in continuing to live). ”

Well, the handwriting was already on the wall with cosmology; the biosphere is just a bit of it. If they want to wile away the sleepless hours, they should puzzle over the mystery of how/why matter feels after all, albeit ever so superfluously!

So What Else Is True? (2006-03-17)

The good thing about a blog is that you can answer questions even when you haven’t been asked. A friend just sent me What We Believe But Cannot Prove: Today’s Leading Thinkers on Science in the Age of Certainty (edited by John Brockman, Harper, 2006). But before I even open it – well, I did peek and saw it’s mostly cog-sci light-weights rather than hard-sci heavy-hitters – I wanted to put it on record that Descartes already did a good job on this in the Age of Enlightenment.

Descartes asked the hard questions about certainty (“what can I know for sure?” “what is open to doubt?”) and his conclusion seems to be just as certain today: There is only one other thing I can know for sure, apart from what can be proved (as in logic and mathematics), and that is the fact that I feel (if/when I feel). Descartes overstated it, suggesting that when I’m thinking, I can’t doubt that I’m existing too (“Cogito Ergo Sum”), but that has always been much too theory-ridden and equivocal. What’s meant by “I,” or even by “existence”? Big words. But in baby-talk, it’s just as self-contradictory to say it’s true that “I am not feeling” when I am in fact feeling (“sentio ergo sentitur”) as it is to say that both P and not-P are true. (No need to “define” feeling, by the way; we all know what it feels like to feel, and anyone who says otherwise is just bluffing. [Pinch him!].)

But that’s all. Nothing else is certain but those two kinds of truths (the formal truths of mathematics, provably true on pain of contradiction, and the self-demonstrating truth of experiencing itself – which does not, by the way, mean that experience conveys any other certainties). All else is mere probability. In particular, all the truths of science. (It’s certain that things feel like whatever they feel like, that they seem whatever they seem; anyone who doubts that is on a fool’s errand. But whether they really are the way they seem is an entirely different matter.)

But in what sense do we live in the age of certainty? Because of the naïve scientism of some of us (“scientists have proved that…”)? or the even more naïve fideism of others (“credo quia absurdum”)?

Now I shall peek in the book and see what these bright lights have to say…

Stevan Harnad

Skywriting (c. 1987)

Sky-Writing

(Submitted to and rejected by New York Times Op Ed Page, 1987; finally appeared in Atlantic Monthly May 2011)

Stevan Harnad
Behavioral & Brain Sciences
Princeton NJ

I want to report a thoroughly (perhaps surreally) modern experience I had recently. First a little context. I’ve always been a zealous scholarly letter-writer (to the point of once being cited in print as “personal communication, pp. 14 – 20”). These days few share my epistolary penchant, which is dismissed as a doomed anachronism. Scholars don’t have the time. Inquiry is racing forward much too rapidly for such genteel dawdling — forward toward, among other things, due credit in print for one’s every minute effort. So I too had resigned myself to the slower turnaround but surer rewards of conventional scholarly publication. Until I came upon electronic mail: almost as rapid and direct and spontaneous as a telephone call, but with the added discipline and permanence of the written medium. I quickly became addicted, “logging on” to check my e-mail at all hours of the day and night and accumulating files of intellectual exchanges with similarly inclined e-epistoleans, files that rapidly approached book-length.

And then I discovered sky-writing — a new medium that has since made my e-mailing seem as remote and obsolete as illuminated manuscripts. The principle is the same as e-mail, except that your contribution is “posted” to a global electronic network, consisting currently of most of the universities and research institutions in America and Europe and growing portions of the rest of the scholarly and scientific world. I’m not entirely clear on how “the Net,” as it is called, is implemented and funded, but if you have an account at any of its “nodes,” you can do skywriting too.

The transformation was complete. The radically new medium seemed to me a worthy successor in that series of revolutions in the advancement of ideas that began with the advent of speech, then writing, then print; and now, skywriting. All my creative and communicative faculties were focused on the lively international, interdisciplinary scholarly interactions I was having on the issues of intellectual interest to me at the time (which happened to arise from Searle’s “Chinese Room Argument” and eventually came to be called the “symbol grounding problem“). Who needs conventional publication when, within a few hours, the “article” you post on the Net is already available to thousands and thousands of scholars (including, potentially, all of your intended conventional audience), who may already be posting back e-responses of their own? I was in the dizzying Platonic thrall of sky-writing and only too happy to leave the snail-like scope and pace of the old epistolary technology far below me.

But then something quite unexpected happened. With hindsight I can now see that there had already been some hints that not all was as it should be. First, veteran e-mailers and skywriters had warned me that I ought to restrict my contributions to the “moderated” groups. (Most of the subjects discussed on the Net — including physics, mathematics, philosophy, language, artificial intelligence, and so on — have, respectively, both a moderated and an unmoderated group.) I ignored these warnings because postings to the moderated groups are first filtered through a moderator, who reads all the candidate articles and then posts only those he judges to be of value. I reasoned that I could make that judgment for myself — one keystroke will jettison any piece of skywriting that does not interest you — and that “moderation” certainly isn’t worth the huge backward step toward the old technology that the delays and bottle-necking would entail. And indeed the moderated groups carry much less material and their exchanges are a good deal more sluggish than the unmoderated ones, which seem to be as “live” and spontaneous as direct e-mail (but with the added virtue of appearing in the sky for all to see and contribute to).

Apart from the warnings of the veterans, other harbingers of cloudier horizons had been the low quality of many of the responses to my postings, and the undeniable fact that some of them were distinctly unscholarly, in fact, downright rude. No matter. I’m thick-skinned, I reasoned, and perfectly able and willing to exercise my own selectivity solo, in exchange for the vast potential of unmoderated skywriting.

Then it happened. In response to a rather minor posting of mine, joining what was apparently a long-standing exchange (on whether or not linguistic gender plays a causal role in social discrimination), there suddenly appeared such an astonishing string of coprolalic abuse (the lion’s share not directed at me, but at some other poor unfortunate who had contributed to earlier phases of the exchange) that I was convinced some disturbed or malicious individual had gained illicit access to someone else’s computer account. I posted a stately response about how steps must be taken to prevent such abuses of the Net and, much to my surprise, the reaction was a torrent of echo-coprolalia from all directions, posted (it’s hard to judge in this medium whether it was with a straight face) under the guise of defending free speech. For several weeks the Net looked like a global graffiti board, with my name in the center.

The veteran fliers told me they’d told me so; that the Net was in reality a haven for student pranksters and borderline personalities, motherboard-bred, for whom the completely unconstrained nature of the unmoderated groups represents an irresistible medium for acting out. Moreover, certain technical problems — chief among which was the unsolved “authentication” problem, namely, that there is no way to determine for sure who posted what, where — had made the Net not only virtually unregulable, but also, apparently, immune to defamation and libel laws.
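The “authentication” problem described above has since been addressed by message authentication and digital signatures. A minimal sketch (not the Net’s actual protocol, which had none; the key and posting below are invented, and a real system would use public-key signatures rather than the shared-secret HMAC used here for brevity): each poster holds a key, and a posting carries a tag that only the key-holder could have produced.

```python
import hashlib
import hmac

def sign(key: bytes, posting: str) -> str:
    """Produce an authentication tag binding the posting to the key-holder."""
    return hmac.new(key, posting.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, posting: str, tag: str) -> bool:
    """Check that the posting was signed by the holder of `key` and is intact."""
    return hmac.compare_digest(sign(key, posting), tag)

key = b"hypothetical-per-poster-secret"
post = "On the symbol grounding problem..."
tag = sign(key, post)

print(verify(key, post, tag))        # -> True  (authentic, unmodified posting)
print(verify(key, post + "!", tag))  # -> False (tampered or forged posting)
```

With such tags, a forged or altered posting fails verification, so “who posted what” becomes checkable rather than unknowable; what it cannot do, of course, is filter out authenticated noise.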

My penchant for skywriting has taken quite a dive since this incident. I don’t relish what’s been happening with my name, for example, but I suppose the only way to have prevented it would have been to have stayed away from the Net altogether, hoping it might never occur to anyone to bring me up spontaneously. There’s an element of Gaussian Roulette in exposure to any of the media these days, no doubt. But before I wrote it all off as one of the ineluctable technological hazards of the age of Marshall McLunacy, I thought I’d post it with the old, land-based technology, to see whether anyone has any ideas about how to prevent the vast intellectual potential of skywriting from being done in by noise from the tail end of the normal distribution. If the Wright brothers’ invention were at stake, or Gutenberg’s, what would we do?

Stevan Harnad (c. 1987)

Extermination vs. Expropriation

No one has written an ethics/etiquette book on:

(1) How 15 million people, dispersed as a stateless and oppressed minority all over the planet for 2000 years, are supposed to react to having a third of their number systematically exterminated on the grounds of their race by various European states within one half-decade

(2) How 1.5 million other people, having nothing at all to do with that extermination, are supposed to react when the land they have been living in for 2000 years is expropriated and given as a state to the remainder of the exterminated people by the same European states that allowed (or helped) them to be exterminated

(3) How those of the exterminated people who emigrate to the expropriated state are supposed to react to the expropriated people, who form a fifth column within and around their expropriated state

(4) How either side is supposed to react after almost 60 years of ensuing bloody tit-for-tat vendettas

My guess is that the ethics/etiquette book for such a case has not been written because the case is unique, tragic, and no one knows what right or wrong is, or what to do about it. Onlookers simply fixate selectively on the injustices and atrocities (on either side) that affect or disturb them most. And, as usual, they offer criticism and solutions without having the responsibility of testing whether they will really work, or of suffering the consequences if they do not.