Demography, Democracy, and Digital Disinformation: A Recommendation to Wikipedia

Musings on “digital universe” and wikipedia: There is a weak underbelly to wikipedia-style demopedias, given that the know-nots will always, always, vastly out-number and out-shout the very few who actually have any idea of what they are talking about: That’s a gaussian inevitability. Peer-review was meant to counteract it, but demotic global graffiti-board free-for-alls risk drowning out the signal with the ambient gaussian noise.

Eppur, eppur, if that were altogether true, surely, wikipedia would be a lot noisier than it actually is. This could be just chance initial conditions, or it could be because most know-nothings simply don’t bother, so it’s only compulsive quacks that are the noise-makers, and they are not in the majority.

I am told that much wikipedia vigilantism seems to consist of deleting rather than inserting or over-writing. If that is representative (it might not be), it is analogous — but in a sinister way — both to evolution and to neural function: Most “selectivity”, in both evolution and neural function, is negative, eliminative. In adaptation, it is the failure of a mutation to out-perform the competition, and hence the mutation’s elimination. In neural function, the effect is in the form of both selective neuronal loss and active inhibition. A lot of neural growth, maturation, and even learning consists in the selective loss or “pruning” of connections (or even neurons), not in the positive creation of over-riding patterns. Similarly, a lot of our “skills” (e.g., motor skills) turn out to consist in the active inhibition of false moves. (This is unmasked in aging, as the inhibition begins to fail, and the gaffes again begin to prevail!)

I say the analogy is sinister, because whereas in evolution the “arbiter” and “enforcer” of the selective deletions is the “Blind Watchmaker” — i.e., adaptive success/failure itself — and in neural function too, it is functional success and its consequences that guide what is retained and what is selectively deleted or inhibited, in the case of wikipedia it is merely self-appointed vigilantes: Someone decides something is his “turf” and he deletes all interlopers (or interlopers with whom he does not happen to agree). It’s bad enough when supposedly qualified editors do this (when they do it badly), but when anyone can self-appoint — well, I suppose we are alive when this empirical question is being answered in real time, before our eyes. But since it is all happening anarchically (indeed, global demographic anarchy is one of the players that is under test in all this), it is not clear when or whether we have a definitive outcome. Wikipedia is limping along so far, globally, though I suspect there are local abuses that are much less rosy, and might bode ill for the project as a whole, further along the way.

But the medium itself might provide the fix! If the vigilantes are tracked (anonymously) and their changes suitably tagged, a user could, in principle, sample a few “diffs” for a given document, discern that the changes wrought by, say, “Rambo” make the document worse, not better, from his point of view; there could then easily be a means of selectively “extracting” the view that is Rambo-independent (i.e., selectively deleting everything in which “Rambo” had a hand). Since all drafts and diffs and vetters’ tags are stored, this would democratize Wikipedia even more: not only could any vigilante come in and add, subtract, or over-write whatever he wishes; any user could also elicit a “view” that was selectively purged of all contributions of that vigilante, if he so wished.

But this has to be easy to do and transparent. The current versioning of Wikipedia is far too awkward and user-unfriendly.
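A minimal sketch of what such a contributor-filtered “view” might look like, in Python. The data model here is hypothetical (each revision stored as a contributor name plus an edit function, with a view built by replaying the history in order); it illustrates the idea, not MediaWiki’s actual storage or API:

```python
"""Sketch (hypothetical data model): contributor-filtered views of a page.

Each revision is recorded as a (contributor, edit) pair, and a "view" is
built by replaying the edit history in order, optionally skipping every
revision made by one contributor. Not Wikipedia's actual API.
"""

from typing import Callable, List, Tuple

Edit = Callable[[List[str]], List[str]]   # an edit takes lines, returns new lines
Revision = Tuple[str, Edit]               # (contributor, edit)


def build_view(history: List[Revision], exclude: str = "") -> List[str]:
    """Replay the history, skipping every revision authored by `exclude`."""
    lines: List[str] = []
    for contributor, edit in history:
        if contributor == exclude:
            continue                      # purge everything this contributor touched
        lines = edit(lines)
    return lines


# Toy history: "Alice" adds material, "Rambo" only deletes it.
history: List[Revision] = [
    ("Alice", lambda ls: ls + ["Peer review was meant to counteract the noise."]),
    ("Alice", lambda ls: ls + ["Most selectivity is negative, eliminative."]),
    ("Rambo", lambda ls: [l for l in ls if "Peer review" not in l]),
]

print(build_view(history))                    # the ordinary view, deletions included
print(build_view(history, exclude="Rambo"))   # the "Rambo-independent" view
```

One complication the sketch glosses over: edits do not commute, so naively skipping one contributor’s revisions can leave later revisions operating on text that is no longer there; a usable implementation would have to detect and resolve such conflicts.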

Holy Hermeneutics

Re: The God Theory

Yes, quantum mechanics has its puzzles, but it works, it predicts and explains, in every single case, no exceptions, whereas the arbitrary flummery the above book (for which I’ve only skimmed the blurb) favors merely fudges, feebly, after the fact.

There’s nothing wrong with parameter-settings, by the way; they only seem arbitrary because empirical laws are not matters of necessity, the way mathematical laws are, but merely matters of contingency, in other words, “accidents”. And asking “why” about them is only like asking “why” about 2+2=4 in the sense that with 2+2=4 the answer is always the same as the answer for any other mathematical law: “Because otherwise it would be self-contradictory.” Whereas when we ask “why” the cosmological constant is what it is, or why the law of universal gravitation is what it is, etc., the answer is merely that if it were otherwise, then, according to the way things actually happen to be (as far as we know today), it would not work.

So contingencies are less satisfying than necessities. One must ask: why not? What is the function of explanation: to describe and predict correctly, objectively, or to give people a soothing subjective feeling? It’s natural to ask for both, but the buck has to stop somewhere, and subjectivity is pretty restless except if it folds in on itself. So it never occurs to us to ask “why” about consciousness, or about god.

(Consciousness — the fact that we feel — is a cartesian “given” — the given of all givens; god, of course, is merely an invention, without consciousness’s privileged status of being, along with non-contradiction, the only other thing that is not open to doubt.)

But surely both are more arbitrary than the cosmological constants! Yet they feel more like answers than questions — to those who are naive and unexacting about such things…

I often wonder why the naive skeptic does not feel impelled to ask “why” even about the Platonic “law” of non-contradiction: Not, I think, because of either a profound grasp of or an abiding allegiance to logic, but rather because of the kind of subjective glazing-over and tuning-out that happens whenever we confront an argument that involves more logical steps than we can follow. We just say “yeah, yeah, whatever” — too feeble-minded to either grasp or challenge.

So when we feel inclined to (completely capriciously) reduce all questions, answered and unanswered, answerable and unanswerable, to the one indubitable fact that we can always hold in mind all at once (as long as we are compos mentis, and sober), namely, the fact that we feel, we are simply confessing (without feeling it!) that when we asked for an “explanation” we never really meant, and would never have settled for, something objective, at all: When we ask “why” we are asking for the feeling that our question has been answered.

P.S. Although I’d never heard of him before, a few quick googlings suggest that the author is a fallen physicist, collaborator of another of the same ilk, a “paraphysicist”, and propounder of apparent voodoo about which a layman like myself can only say “yeah, yeah, whatever”… His “digital universe” — in collaboration with wikipedia, apparently — shows how closely “openness” cohabits with the quackery. (I sometimes think god has nothing better to do than to keep orchestrating cruel caricatures of my antics…)

ANOSOGNOSIA

Why don’t they tell us (but would we want to know?)

that time’s inflationary,
its power of purchase doesn’t grow,
it shrinks:

at first imperceptibly,
then accelerating steadily,
till days are flicking by like phone poles from a wailing train

or stroboscopic fragments of your
parents’ nervous smiles
as you keep hurtling
round and round
on some
vertiginous
vehicle
of amusement.

“Why, I remember, when I was your age, a morning would last a day,
a summer a year, a decade nearly a lifetime.
If I’d held my hands apart to show my life line up till then, look:
this is how long it would have been…
So do you think where I am now is seven times as far?
Ha! twice, three times at most.
And the last third’s
the shortest.”

But who can understand such baleful reckoning? and besides,
a proportionate paling of our sense and recollection
must be factored in too,
diminishing awareness of our ills
even as they increase.

Lord Russell had a killjoy uncle who informed him as a child,
at the close of an especially glorious day,
“You’ll never know another day like this one.”
And little Russell cried and cried, and then forgot.

Even “seize the rosebuds”
is a futile admonition
that rings true only
for the long since anosmic.
And isn’t there (confess it)
some diffraction too
in those wistful flashbacks to your day, when that buck stretched so much farther than it does today?

Which buck?
There’s no fixed scale
of barter or utility.
The goods, they differed too then,
and not just in their quality,
but in your own ability
to savor it.

Generation gap.
Communication gap.
It’s Zeno’s paradox:
You can’t get there from here.
“Tempora mutantur. Et nos?”

Why don’t they tell us (but would we want to know)?


CODA (Homage to Paul MacLean)

The one frail consolation
Once you’re just an old iguana
Is you can’t recall what’s bothering you
Even if you wanna

Istvan Hesslein (c. 1993)

RGN: 1926-2006 — (2006-05-10)

   A coward. An unforgivable, solipsistic coward, and hypocrite.

There are no words. By her own hand, like her life’s work, the unimaginable sufferings of the blighted body of a toweringly noble soul have at last come to an end, but not as and when she would have wished. The cruel decades have made a macabre mockery of any thought of what she “deserved.” Life failed her, unpardonably, and death failed her too, and most of all, the “healing profession” failed her; but she has triumphed over it all nonetheless, pellucid and undiminished, as not a single one of us could have come close to doing. I loved her and was in awe of her for most of my conscious life, far, far too much to even conceive of wishing her to have had to endure one moment longer; and yet her loss is unspeakable.

Descartes’ baby, the soul and art

From the fact that most adults and children believe in an immaterial, immortal soul, Paul Bloom, in Descartes’ Baby, concludes that this is somehow part of our evolved genetic heritage.

If so, then it would seem to follow, by the same token, that the belief in the divinity and supernatural powers of kings is likewise innate, and so are innumerable other beliefs, actual and potential, right down to Schulz’s Great Pumpkin.

More likely, our and our children’s belief in the soul arises from (1) our undoubtable (and undoubtedly evolved) Cartesian feeling that we ourselves feel (“cogito ergo sum” — “sentio ergo sentitur”), (2) our less indubitable but nevertheless irresistible (and probably evolved) feeling that others that are sufficiently like us feel too (“mind-reading”), plus (3) our complete inability to explain either the causes or the effects of feelings physically (the “mind/body problem”), so that, when forced, the only explanation we feel at home with is telekinetic dualism (“mind over matter”).

I am not at all sure that this amounts to an innate belief in the soul, but if it does, we certainly didn’t need empirical studies of children or adults to demonstrate it: Descartes could have deduced it from his armchair by reason and introspection alone.

Paul Bloom also presents developmental data that he thinks bear upon the question of what is and is not art, and on why we prefer originals to forgeries or to misattributed works by lesser artists. The findings concern children’s ability to understand that a picture is a picture of an object, rather than just an object itself; that drawings they (or others) have drawn are drawings of what they meant to draw, even if they don’t look like them; that a deliberate artifact is different from an accidental one, or a natural object; and that whereas children may sometimes prefer copies of things to the originals, when it comes to their own teddy bears, they’d rather keep the original.

This tells us about children’s understanding of representation and intention, and their ability to make the artifact/non-artifact distinction, but not about the art/non-art let alone the good/bad art distinction. Nor do the findings on children’s attachments to particular things help, since the very same preference (for originals over copies or different objects) would apply to Eva Braun’s underwear (which would have some cult/fetish/collector value while believed to have been hers, none once proven otherwise); hence these findings are about authorship, not art.

As to “Descartes’ Baby” — an apocryphal story that Descartes was so grief-stricken at the death of his 5-year-old daughter Francine that he built a life-like robot of her that he took with him everywhere till it was discovered by a ship-captain who was so horrified by it that he threw it overboard: This is another example of our overwhelming (and no doubt innate) “mind-reading” tendency to interpret creatures that look and behave as if they feel as if they really do feel (even if the feeling that evokes in us is horror or disgust). This innate tendency of ours is put to more practical scientific use in Turing’s Test.

Basic Science vs. Reverse Engineering

Maupertuis’s Principle (of Least Action) is not quite the same as Darwin’s Principle of Random Variation and Selective Retention (i.e., automatic design based on the post hoc adaptive advantages — for survival and reproduction — of natural developmental or random variations). But the two would be ominously close if it weren’t for the (subsequent) discovery of evolution’s mechanism: Mendelian genetics and eventually the DNA double helix.
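For concreteness, here is a toy sketch (mine, in the spirit of Dawkins’s “weasel” demonstration from The Blind Watchmaker) of random variation with purely post hoc selective retention; the target string and mutation rate are arbitrary illustrative choices, not a model of actual evolution:

```python
"""Toy sketch of variation-and-selective-retention (illustrative only):
variants arise blindly, and whichever happens to do no worse is retained.
There is no foresight; the "design" emerges from cumulative elimination."""

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"     # an arbitrary selection criterion
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "


def fitness(candidate: str) -> int:
    """Post hoc score: how many characters happen to match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate: str, rate: float = 0.05) -> str:
    """Blind variation: each character may be randomly replaced."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)


current = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(20_000):
    variant = mutate(current)
    if fitness(variant) >= fitness(current):  # retention is eliminative, after the fact
        current = variant
    if current == TARGET:
        break

print(generation, repr(current))
```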

That said, there nevertheless is a big difference between Biology and Physics: Physics is studying the basic laws of the universe, whereas Biology is mostly very local reverse-engineering: Figuring out how (naturally designed and selected, via DNA variation/retention) devices (organs, organisms, biological systems) work, by reverse-engineering them. This is exactly the same as forward engineering, which applies the laws of physics and the principles of engineering in order to design and build systems useful to Man: Biology simply takes already built ones and tries to figure out what laws of physics and principles of engineering underlie them and make them work.

In contrast, Physics is not, I think, usefully thought of as merely reverse-engineering designed systems (e.g., the universe or the atom). The laws of physics precede and underlie all the possible systems that can be designed and built by either engineers, or the Blind Watchmaker.

Stevan Harnad

The Syntactic Web

In reality the “semantic web” is, and can only ever be, a “syntactic web”. Syntax is merely form — the shape of arbitrary objects called symbols, within a formal notational system adopted by an agreed and shared convention. Computation is the rule-based manipulation of those symbols, with the rules and manipulations (“algorithms”) based purely and mechanically on the shapes of the symbols, not their meaning — even though most of the individual symbols as well as the combinations of symbols are systematically interpretable (by human minds) as having meaning.
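A small illustration (mine, not drawn from any particular formal system) of what manipulation “based purely and mechanically on the shapes of the symbols” amounts to: the rule below rewrites arbitrary tokens by their form alone, and whether a human reader can interpret the result as an inference is irrelevant to the program:

```python
"""Illustration: rule-based manipulation of symbols by shape alone.

The program matches and rewrites tokens purely by their form; it works
identically on meaningful-looking and meaningless tokens, because the
meaning (if any) exists only for the human interpreter."""

from typing import List


def rewrite(tape: List[str]) -> List[str]:
    """If the tape contains the shapes X, "->", Y and (elsewhere) X again,
    replace the whole tape with Y. To a human reader this looks like modus
    ponens; to the program it is only shape-matching."""
    for i, token in enumerate(tape):
        if token == "->" and 0 < i < len(tape) - 1:
            antecedent, consequent = tape[i - 1], tape[i + 1]
            if antecedent in tape[i + 2:]:
                return [consequent]
    return tape


print(rewrite(["RAIN", "->", "WET", "RAIN"]))    # ['WET']  (interpretable by us)
print(rewrite(["GLORP", "->", "ZIB", "GLORP"]))  # ['ZIB']  (same rule, no meaning needed)
```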

Semantics, in contrast, concerns the meanings of the symbols, not their shape, or the syntactic manipulation of their shapes. The “symbol grounding problem” is the problem of how symbols get their meanings, i.e., their semantics, and the problem is not yet solved. It is clear that symbols in the brain are grounded, but we do not yet know how. It is likely that grounding is related to our sensorimotor capacity (how we are able to perceive, recognise and manipulate objects and states), but so far that looks as if it will only connect symbols to their referents, not yet to their meaning. Frege’s notion of “sense”, which is again just syntactic, because it consists of syntactic rules, still does not capture meaning. Nor does formal model-theoretic semantics, which likewise merely finds another syntactic object or system that follows the same rules as those of the syntactic object or system for which we are seeking the meaning.

So whereas sensorimotor grounding — as in a robot that can pass the Turing Test — does break out of the syntactic circle, it does not really get us to meaning (though it may be as far as cognitive science will ever be able to get us, because meaning may be related to the perhaps insoluble problem of consciousness).

Where does that leave the “semantic web”? As merely an ungrounded syntactic network. Like many useful symbol systems and artificial “neural networks”, the network of labels, links and connectivity of the web can compute useful answers for us, has interesting, systematic correlates (e.g., as in latent “semantic” analysis), and can be given a systematic semantic interpretation (by our minds). But it remains merely a syntactic web, not a semantic one.

The Cognitive Killer-App 2006-04-16

Kevin Kelly thinks the web is not only what it really is — which is a huge peripheral memory and reference source, along with usage stats — but also a kind of independent thinking brain.

It’s not, even though it has connections, as does a neural net (which is likewise not a thinking brain).

KK is right that googling is replacing the consultation of our own onboard memories, but that is par for the course, ever since our species first began using external memories to increase our total information storage and processing capacity: Mimesis, language and writing were earlier, and more dramatic, precursors. (We’re talking heads who already feel as helpless without our interlocutors, tapes and texts today as KK says we all will — and some already do — without the web.)

And KK misses the fact that the brain is not, in fact, just a syntactic machine, the way the web is: There is no “semantic” web, just an increasingly rich “syntactic web”.

Nor (in my opinion) is the web’s most revolutionary potential in its role of peripheral mega-memory and hyper-encyclopedia/almanac. It is not even — though it comes closer — in its interactive Hyde-Park role in blogs and wikipedias. That’s just an extension of Call-In Chat Shows, Reality TV, acting-out, and everyone-wants-to-be-a-star. We’ve all had the capacity to talk for hundreds of thousands of years, but most of us have not found very much worth saying — or hearing by most others. The nature of the gaussian distribution is such that that is bound to remain a demographic rarity, even if the collective baseline rises — which I am not at all sure it’s doing! We just re-scale…

No, I think the real cognitive-killer-app of the web is the quote/commentary capability, but done openly — “skywriting”. At the vast bottom level this will just be the Hyde-Park “you know what’s wrong with the world dontcha?” pub-wisdom of the masses, gaussian noise. But in some more selective, rigorous and answerable reaches of cyberspace — corresponding roughly to what refereed, published science and scholarship used to be in the Gutenberg era — remarkable PostGutenberg efflorescences are waiting to happen: waiting only for the right demography to converge there, along with its writings, all Open Access, so the skywriting can begin in earnest.

Stevan Harnad

Paying the Piper 2006-03-26

Richard Poynder (RP): “Digital Rights Management may not prove workable in the long-term and can always be circumvented, so most creators are probably moving (like it or not) into a give-away world”

I don’t know whether all that’s true, practically and statistically, but if so, I’m not sure the outcome will be grounds for cheeriness. It turns creativity into a pop, distributed enterprise (which has some plusses, in some cases) but removes rewards from a kind of individual creativity that has brought us much great work in the past. Selfless creators there have been too, in the past, not motivated by desire or need for personal gain. But does that cover all, most, or enough of it?

[from another interlocutor] ‘Obviously, some artists of all kinds will always produce because they must; but if they have to do it as amateurs because they must earn their bread as janitors or professors of geology, then they will do much less work and it will be far less developed (as well as using only cheap materials). As all of them will be isolated from a (nonexistent) mainstream of common understanding and encouragement, styles will not develop, nor will there be any building upon others’ achievements. Each artist will remain a wild tree with small hard fruit, rather than a cultivated and well-fed tree giving a lot of fine sweet fruit.’

RP: “Apart from the isolated eccentric, all creators want their creations to be as widely distributed and read/listened to/seen as possible.”

Yes, but not necessarily at the cost of forfeiting any prospect of being able to make — or even to aspire to make — a fortune (or, in some cases, even to make a living). I have no idea about the true proportions, hence the statistical uncertainty, but I do raise a point of doubt, about a potential loss of a form of individualism that, on the one hand, resembles the materialist capitalism neither of us admires or embraces, but that, in the Gutenberg age (like religion, which I admire just as little as capitalism!) managed to inspire a lot of immortal, invaluable work. Where mass-market subsidiaries/services (not exactly the hub of most human creativity) are the only real revenue sources, collective efforts and wide exposure are not necessarily incentive enough to keep attracting enough of the selfish-gene-pool’s higher-end gaussian tail to making the kinds of contributions that have been the (inevitable, gaussian) glory of the history of human culture until now. Yes, new (collective, collaborative, distributed) forms of creativity are enabled, but I am lamenting the disabling of some of the older ones. It is not at all clear that they were spent.

And although collective, distributed adaptivity is perfectly congenial to our (blind-watch-) Maker, that is not in fact the basis on which He fashioned our current (selfish-) Genome. Apart from inclusive-fitness fealty to kin — which occasionally sublimates into selfless generic humanitarianism and altruism — most of our motivation is, frankly, individualistic, which is to say selfish, hankering for territory, dominance, and the tangible material rewards that can still engage the tastes instilled by the ancestral environment that shaped us. That’s what makes apples taste sweet, and sweetness desirable.

Scientists and scholars have always been a minority, and exceptional, in that they sought a cumulative, collective good: learned inquiry. Learned research was (at least since dark monkish days) a public, distributed, collective — and thereby self-corrective — enterprise. So, as I say, not much change there, in the PostGutenberg Era. But not so for music (which I hardly lament, since music as art has already died an unnatural premature death anyway, with the arrival, and departure, of atonalism, no thanks or no-thanks to digital doomsday), nor for literature (which was already suffering from block-buster economics before the digital day, but may now be dealt the death blow, leaving only the pop, How-To and movie-wannabe genres viable); film, I would say, has already done itself in.

So, as I say, it depends on the statistical proportions, on which I have no objective data. So too for hacking — the new kid on the block — for which it is still not clear whether individualism or collectivism is the most fruitful procreative mode.

RP: “The digital world allows diffusion, access and collaboration in a way never before possible. Not only does DRM threaten to take away all those benefits but, given the current status of the technology, it generally imposes even greater restrictions than were experienced in the pre-digital world. That’s bad news for creators.”

Bad news for certain kinds of creators. The rest is the statistics of the kinds. Let me put my cards on the table: I’m not defending the divine right of the McSpielbergs of Gaia’s Gollywood to make limitless bundles on their showy, empty pap. They are not my models. Shakespeare and Mozart are.

RP: “Cory Doctorow did some calculations showing that the potential earnings of the average scribbler (we’re not talking J K Rowling here) have been on a downward curve for a long time, indeed long before the digital world. This is a product of something else, but can surely only be exaggerated by the introduction of digital technologies.”

I don’t know the works of CD or JKR, but I’d bet they’re not quite of the rarefied calibre of the WSs and WMs I had in mind! (What’s the point of regressing the rare masterworks on the menial mean?) Nor do I think WS or WM would have ever done an analysis like that, in reckoning whether or not to go OA with their work… (Actually, I think both WS and WM earned most of their rewards from the analog world of performance, not the digital-code-world of composition, but I’m certain that wasn’t true of Beethoven.)

RP: “Most (if not all) are (by the standards of main street) a little reckless about their careers. Many also seem to have a disdain for money. Given that we currently live in a world far too dominated by market forces and bean counting, I find this very encouraging. As the bard said ‘Getting and spending, we lay waste our powers’.”

A (big) part of me resonates with this too. But don’t forget that hackers are demographically anomalous, being an ant-hill of Aspergians and worse; it’s all you can do to get them to change their underwear, let alone balance their bank-accounts. And there have been selfless geniuses in other fields too. I just worry about the potential loss of the future ones that are not blind to the presence/absence of the very possibility of material glory, alongside spiritual!

Hal Varian, by the way, made similar statistical calculations about the likelihood of big-bucks for most authors. And I countered with the same dream-of-apples argument I raise with you: How many genotypes are not driven by that? are they enough? and would the loss of the potential “market” for the rest be insignificant, in the scheme of things? (Without statistics on this impalpable stuff, I don’t think anyone can say.) Reducing it all to McDisney re-use rights for high-schoolers, as LL does, just goes to show how the world of “creativity” looks to a well-meaning philistine: a reductio ad absurdum.

RP: “You are right to say that we don’t really have the necessary statistics to reach any firm conclusions, and so it is speculation. You are also right to say that there is no cause for cheeriness in this. But then each decade that passes we seem increasingly like ants in an anthill (Raymond talks about this in terms of the scaling up of society), and individualism features less and less. Maybe that’s just the way it is going to be.”

Ok for autists, but not for all artists…

RP: “It occurs to me, however, that if your model is the likes of Mozart, then will not future Mozarts continue to do what they always did, regardless of material reward? From what I know of Mozart (not enough by far) he didn’t seem too driven to do what he did in order to pay the bills; and when he did earn money was it not from playing the piano, rather than composing?”

That was still early days, and off-line coded composition had not yet become an autonomous livelihood, as distinct from on-line analog performance; but by the day of Dickens, Dostoevsky, Beethoven and Brahms it had (and Beethoven in particular was quite a copyright maven!)

RP: “If so, is that not again the same model (to all intents and purposes) of giving away your creation and making money from associated services. As I say, I may be wrong about Mozart, but is it not the case that people who are really gifted and driven to create just get on and do it, and rarely think about how they will pay the bills?”

The ones that survive to be heard from. (And do not under-estimate the prospect of potential riches as an extrinsic motivator. The failure of DRM would wipe that out as well, and thereby perhaps render a wealth of human promise still-born. I’m not saying there will not still be some intrinsically motivated stout-hearts. I’m just worrying about how many, and which, and, most important, which ones will be lost.)

RP: “Of course if the universe is, as you say, a “mindless, feelingless machine” (in which we are all trapped), then none of this really matters and individualism and creativity are all for naught right?”

That’s not quite what I said! The universe is mostly feelingless, but organisms are not. Functionally speaking, they may as well be, since their feelings can have no independent causal power, on pain of telekinetic dualism, but feelings they are nonetheless. So whereas aesthetics, like all other feeling, does not “matter” functionally, in that it has no causal role, it not only matters but is what we mean by anything’s “mattering” at all, affectively. “Mattering” is an affective matter!

RP: “(I was listening to the World Service the other night, which was talking about Dawkins’ The Selfish Gene — currently celebrating its 30th birthday I believe — and he was saying how people used to write to him and say that they hadn’t slept for three weeks after reading the book, and could see no point in continuing to live).”

Well, the handwriting was already on the wall with cosmology; the biosphere is just a bit of it. If they want to wile away the sleepless hours, they should puzzle over the mystery of how/why matter feels after all, albeit ever so superfluously!

So What Else Is True? (2006-03-17)

The good thing about a blog is that you can answer questions even when you haven’t been asked. A friend just sent me What We Believe But Cannot Prove: Today’s Leading Thinkers on Science in the Age of Certainty (edited by John Brockman) Harper 2006. But before I even open it – well I did peek and saw it’s mostly cog-sci light-weights rather than hard-sci heavy-hitters – I wanted to put it on record that Descartes already did a good job on this in the Age of Enlightenment.

Descartes asked the hard questions about certainty (“what can I know for sure?” “what is open to doubt?”) and his conclusion seems to be just as certain today: There is only one other thing I can know for sure, apart from what can be proved (as in logic and mathematics), and that is the fact that I feel (if/when I feel). Descartes overstated it, suggesting that when I’m thinking, I can’t doubt that I’m existing too (“Cogito Ergo Sum”), but that has always been much too theory-ridden and equivocal. What’s meant by “I,” or even by “existence”? Big words. But in baby-talk, it’s just as self-contradictory to say it’s true that “I am not feeling” when I am in fact feeling (“sentitur ergo sentitur”) as it is to say that both P and not-P are true. (No need to “define” feeling by the way; we all know what it feels like to feel, and anyone who says otherwise is just bluffing. [Pinch him!].)

But that’s all. Nothing else is certain but those two kinds of truths (the formal truths of mathematics, provably true on pain of contradiction, and the self-demonstrating truth of experiencing itself – which does not, by the way, mean that experience conveys any other certainties). All else is mere probability. In particular, all the truths of science. (It’s certain that things feel like whatever they feel like, that they seem whatever they seem; anyone who doubts that is on a fool’s errand. But whether they really are the way they seem is an entirely different matter.)

But in what sense do we live in the age of certainty? Because of the naïve scientism of some of us (“scientists have proved that…”)? or the even more naïve fideism of others (“credo quia absurdum”)?

Now I shall peek in the book and see what these bright lights have to say…

Stevan Harnad