Why are beliefs felt rather than just functed, zombily?

Two experiences (e.g., of seeing a stop sign) would be similar. That’s what makes us different from Funes the Memorious: we can selectively forget, and thereby we can selectively abstract what two non-identical experiences have in common, the underlying invariant across all instances — and as a consequence we can categorise, name, and, in general, learn and talk. (If we were stuck in Funes’s world of infinitely faithful rote memory, every instant would be unique and incommensurable with every other, except perhaps along a fretless continuum of degrees of similarity that we would be powerless to do anything about or with.)

Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.

Real human beings hardly notice and quickly forget the details of experience, and especially the irrelevant details. (Relevance depends on our purposes, and on consequences: like it or not, we have to learn what features distinguish toxic and edible mushrooms, if we need to eat mushrooms to live.)

So selective perceiving and remembering is going on, and must. (That is why there could never be a Funes the Memorious, except perhaps on a passive life-support system — and even then he could never think or talk: he could just feel!)

So, yes, there is a lot more than merely the passive recording of successive, unique, incommensurable instants, all grading continuously into one another. But we can’t take credit for it, because all that selectivity — even the active, consciously learnt part — is basically done for us by brain mechanisms for category learning and detection that we are only beginning to understand and that our armchair phenomenology simply takes for granted without even realising it.

Feelings would be unanalyzable wholistic chunks if we didn’t have a lot of innate and learned selectivity and abstraction going on. But we do.

But let’s not lose sight of the point at issue: My belief P that the distance from (x1,y1) to (x2,y2) = SQRT [(x2-x1)**2 + (y2-y1)**2] (i.e., Pythagoras, your example), or, to take a simpler case, my belief Q that a mouse is smaller-than-a-breadbox.
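For concreteness, proposition P is just the Euclidean distance formula; here it is as a minimal runnable sketch (the function name and example points are mine, purely illustrative):

```python
import math

def distance(p1, p2):
    """Euclidean distance between (x1, y1) and (x2, y2) -- Pythagoras."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance((0, 0), (3, 4)))  # 5.0 (the 3-4-5 right triangle)
```

Believing P is believing that this computation is correct for every pair of points; believing Q is the humbler conviction that a mouse fits inside a breadbox.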

First, does one specific instance of my believing P (understanding P and feeling that P is true) differ from one specific instance of my believing Q (feeling that Q is true)? Yes, of course, just as one instance of tasting chocolate ice-cream differs from one instance of tasting vanilla ice-cream. And what about two instances of believing P? They differ in the same sense that two instances of eating chocolate ice-cream differ.

Do they have something in common? Sure, all four feelings do: P, Q, chocolate and vanilla. In fact, every feeling has something in common with every other feeling — and something different too. We are tempted to say, however, that some feelings have more in common than others, and that is true, but we should also ask how and why, rather than take it for granted.

(The pertinent exercise here is to remind ourselves of the diagonal argument in Watanabe’s “Ugly Duckling Theorem”: The only reason the 5 small yellow ducklings look more like one another than like the large gray gosling is that our perceptual and computational systems “privilege” differences in size and colour (i.e., weight them differentially in feeling space: Watanabe calls this a “bias”). If our brains instead coded all differences on a par — spatial position, being closer to duckling X than Y, having the same number of feathers — including, by the way, subtle “higher-order” features such as not-having an even number of feathers, or having the same number of feathers as the middle duckling has epidermal cells, etc. — if all classifiable differences were felt to be on a par, then all ducklings would indeed be infinitely similar and infinitely different, and incommensurable.)
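Watanabe’s point can be made concrete with a toy computation (my illustrative sketch, not Watanabe’s own formulation): if every extensional predicate (every subset of the space of possible objects) counts equally, then any two distinct objects satisfy exactly the same number of shared predicates.

```python
from itertools import chain, combinations, product

def shared_predicates(a, b, universe):
    """Count the extensional predicates (subsets of the universe)
    that are true of both a and b."""
    subsets = chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))
    return sum(1 for s in subsets if a in s and b in s)

# Objects described by 3 unweighted binary features.
universe = list(product([0, 1], repeat=3))

duckling1 = (0, 0, 0)  # e.g., small, yellow, downy
duckling2 = (0, 0, 1)  # differs in one feature
gosling   = (1, 1, 1)  # differs in all three features

# Without a bias, both pairs share exactly the same number of predicates.
print(shared_predicates(duckling1, duckling2, universe))  # 64
print(shared_predicates(duckling1, gosling, universe))    # 64
```

Both counts come out identical (2^(8-2) = 64 subsets of the 8-object universe contain any two fixed distinct objects): “more similar” only becomes meaningful once some features are weighted more heavily than others.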

In reality, however, the reason everything does not feel infinitely similar/different to/from everything else is because features are not coded on a par (we are not Funes the Memorious). Some things look, or come to look, more alike than others, some features “pop out,” etc., and this is what allows us to categorize our experiences instead of being awash in an intractable, unnavigable flux.

But none of this resolves the zombie (feeling/function) problem, because our brains could just as well have privileged and abstracted certain useful traits over others in our adaptive interactions with the world without ever having bothered to make any of it — whether tasting an ice-cream or judging whether or not it is true that a mouse is bigger-than-a-breadbox — feel like anything at all! The feeling, “veridical though it may be,” is functionally supererogatory: As always, only the doing capacity is functionally needed, or causally efficacious. The feeling is just a superfluous frill.

(And the mind/body problem is and always was no more nor less than the “feeling/function” problem: why/how are some functions felt functions? That is also the “zombie” problem: Not imagining whether there could be zombies, but explaining how and why we are *not* zombies — without resorting to telekinesis.)

Moreover, I’m inclined to say that, qua feeling, unconnected with differential action, all feelings are incommensurable with the objects of which they are meant to be reflections or representations. At best, they are correlated with them (in that I feel chocolate when I eat chocolate and not vanilla). But I am sure there is a higher-order version of the inverted-spectrum problem that can be resurrected for all of feeling-space (qualia space), according to which no feeling really “resembles” the real-world property it is meant to be a feeling of (or caused by): it only feels as if it is. (Feeling is just seeming.)

This is probably a variant of Wittgenstein’s Private Language Argument: What could it mean to err in one’s subjective feelings? One can act erroneously; one can speak erroneously (saying something is True when it is False, or when others disagree). But what does it mean to say that chocolate does not really taste like chocolate (Dan Dennett’s Mr. Chase and Mr. Sanborn come to mind)? or that THIS feels more like what it really is than THAT? Again, we seem to be back into the incorrigible circle of “feels-as-if-it-feels”: Descartes is right that I cannot doubt that I am feeling what I am feeling when I feel it; but the incorrigibility of my feeling that I have a tooth-ache — even when I have no tooth, and when neurology says I am experiencing referred pain from an ear infection — does not imply anything about any commensurability between feelings and whatever they might be caused by or correlated with or representative of in the real world.

For present purposes, not only does believing that P feel different from believing that Q, but believing that P this time feels different from believing that P the next time. Thanks to our Watanabe “biases,” we are not Funes the Memorious; we can detect the recurrent invariants, so we can feel that although my P-feeling this time is not identical to my P-feeling that time, they are nevertheless both P-feelings rather than Q-feelings. In other words, yes, we can categorize beliefs just as we can categorize concrete objects.

Yes, feelings are unique, hence different, every time, but no, that doesn’t mean we cannot ignore the differences and selectively categorize and identify them anyway.

But when I challenge the idea that there is nothing that it feels-like to believe that P, I am not trying to suggest that it follows that we cannot abstract and categorize beliefs, just as we abstract and categorize perceived objects. I am only trying to point out that our “cognitive” activity is just as felt as our “perceptual” activity, and that this fact — the felt nature of these functions — is equally problematic in both cases.

It is not possible to bracket the problem of cognition (or intentionality, or belief, or whatever you want to call it) and treat it separately as unproblematic mental territory, zombie-immune. Either it is just as problematic as tasting ice-cream, or it’s not mental (and merely functional, i.e., zombic). Feeling suffuses all of mental life, perceptual and “cognitive”; it is what makes it mental, and problematic. The rest is just unproblematic function: doings rather than feelings.

Let’s compare “twoness” as a perceptual experience (a direct perception of numerosity) with “2+2=4” as a piece of propositional thought. I say both “there’s 2 (things)” and “2+2=4” are thoughts we can have, that feel-like something to have, and mean, and experience (all those are the same thing).

They do feel different from instance to instance, but we can overcome the Borgesian uniqueness with a Watanabean bias, selectively abstracting some “aspects” of the feeling over others. (I doubt, though, that the exercise actually takes place at the felt level: I think our brains do the selection/abstraction for us, then hand the outcome to us on a platter, garnished with the right feeling, and give us a free co-authorship, as if it had been our discerning tastes that had generated the dish…)

I am not adept at phenomenology, so I can’t really introspect and describe what’s happening in my head very well, but I suspect that the analysis is unconscious (zombic) and our feelings are after-the-fact, always gerrymandered to square with the computations (or other dynamics) that they did not themselves engender (or constitute: how/why would feelings be computations?).

If tasting chocolate means recognizing that it’s chocolate, while at the same time recognizing that tasting the same chocolate on a number of occasions is always a different feeling too, then exactly the same goes for believing that “the cat is on the mat.” “The cat is on the mat” is a proposition, P. It feels like something to understand what P means (including, no doubt, knowing what a cat is and a mat is, so no doubt images are involved). To believe P is to have the feeling of knowing what P (and its components) means and are, and to feel that P is true. (It feels different to feel that P is false. And, yes, there is something all true propositions have in common, but there is also a lot about them that is different; and different too is believing that P on different occasions.)

The zombie problem, meanwhile, just perdures: How and why does it feel like something to believe that P? Why is it not enough simply to have the datum and just act accordingly, zombily?

How and Why We Are NOT Zombies

Re: Dean Zimmerman, Dispatches from the zombie wars, Times Literary Supplement, April 28, 2006: Review of Daniel C. Dennett’s Sweet Dreams: Philosophical Obstacles to a Science of Consciousness and Gregg Rosenberg’s A Place for Consciousness.

Zimmerman’s review of Rosenberg’s book is admirably detailed; the one of Dennett’s book, less so; and both reviews are indecisive: Are Rosenberg’s detailed arguments based on what he can imagine about “zombies” — hypothetical creatures that can “think” and act, but cannot feel — an intellectually rigorous exercise? Or is Dennett right to think not? Zimmerman takes no stance and doesn’t give the reader a basis for taking one either. He does, however, restate in passing a truism in philosophy whose truth it might be useful to call into question: “there is…no…‘way that it feels’ to believe the Pythagorean Theorem.”

Most of what we believe/think/know is latent or “implicit” (dare I say “zombic”?). I don’t carry around, actively, my knowledge of what is and is not bigger-than-a-breadbox, or that a mouse, in particular, is not. But when I am actually thinking about whether or not a mouse is bigger-than-a-breadbox, and have it in mind that it is indeed not, there is something that that “online” belief-state feels like, just as there is something that a tickle or seeing yellow feels like. Yet surely it is the capacity for that online feeling state that distinguishes me from a zombie that has all my offline thinking and acting capacity, but no feeling.

So it won’t do to try to separate the problem of knowledge — offloading it onto computation — and try to treat tickles separately. The real zombie problem is not whether or not there can be zombies, but how and why we are not zombies. And that problem (the “mind/body” problem) is hard (and, I think, insoluble) for one simple reason: Because “how” and “why” are causal, functional questions. And feelings can only have causal power on pain of telekinetic dualism (“mind over matter”), which surely all evidence from physics contradicts. So Dennett might well be right that this problem cannot be solved by an exercise in imagination (although he is surely wrong that it is not a problem at all!).

The “zombie” problem, in other words, is the problem of explaining (nontelekinetically) how and why human (and animal) adaptive functions are felt, rather than merely “functed”…


If a “belief” is not to be merely a behavioral disposition, or merely the possession of certain data (which would then apply to any dynamical system, animate or otherwise, hence would not be something mental at all), then a belief can only be what it feels like to believe that P. And what it feels like to believe that P is not (in general) the same as what it feels like to believe that Q. Beliefs are as different as flavours, indeed as different as instances of tasting flavours (and no two instances of any feeling are identical, as we know from the philosopher [sic] Jorge Luis Borges, and his “Funes el memorioso“!).

What would be interesting is less a stance on the usefulness of arguments from imaginability than the reasons behind it. Dan gave some pretty snappy examples that seemed to reduce arguments from presence (or absence) of imagination to absurdities: Can they be resurrected? (I tend to think not: I can imagine making a perpetual-motion machine too, and trisecting an angle…)

“Panpsychism” entails such a nightmarish mereological explosion/implosion (if we rightly use our imaginations and see how it pans out all the way to its logical conclusion) as to (I’ll bet) be provably incoherent; or if not that, then as implausible as anything an epsilon short of being provably self-contradictory can be…

Demography, Democracy, and Digital Disinformation: A Recommendation to Wikipedia

Musings on “digital universe” and wikipedia: There is a weak underbelly to wikipedia-style demopedias, given that the know-nots will always, always, vastly out-number and out-shout the very few who actually have any idea of what they are talking about: That’s a gaussian inevitability. Peer-review was meant to counteract it, but demotic global graffiti-board free-for-alls risk drowning out the signal with the ambient gaussian noise.

Eppur, eppur, if that were altogether true, surely, wikipedia would be a lot noisier than it actually is. This could be just chance initial conditions, or it could be because most know-nothings simply don’t bother, so it’s only compulsive quacks that are the noise-makers, and they are not in the majority.

I am told that much wikipedia vigilantism seems to consist of deleting rather than inserting or over-writing. If that is representative (it might not be), it is analogous — but in a sinister way — both to evolution and to neural function: Most “selectivity”, in both evolution and neural function, is negative, eliminative. In adaptation, it is the failure of a mutation to out-perform the competition, and hence the mutation’s elimination. In neural function, the effect is in the form of both selective neuronal loss and active inhibition. A lot of neural growth, maturation, and even learning consists in the selective loss or “pruning” of connections (or even neurons), not in the positive creation of over-riding patterns. Similarly, a lot of our “skills” (e.g., motor skills) turn out to consist in the active inhibition of false moves. (This is unmasked in aging, as the inhibition begins to fail, and the gaffes again begin to prevail!)

I say the analogy is sinister, because whereas in evolution the “arbiter” and “enforcer” of the selective deletions is the “Blind Watchmaker” — i.e., adaptive success/failure itself — and in neural function too, it is functional success and its consequences that guide what is retained and what is selectively deleted or inhibited, in the case of wikipedia it is merely self-appointed vigilantes: Someone decides something is his “turf” and he deletes all interlopers (or interlopers with whom he does not happen to agree). It’s bad enough when supposedly qualified editors do this (when they do it badly), but when anyone can self-appoint — well, I suppose we are alive when this empirical question is being answered in real time, before our eyes. But since it is all happening anarchically (indeed, global demographic anarchy is one of the players that is under test in all this), it is not clear when or whether we have a definitive outcome. Wikipedia is limping along so far, globally, though I suspect there are local abuses that are much less rosy, and might bode ill for the project as a whole, further along the way.

But the medium itself might provide the fix! If the vigilantes are tracked (anonymously) and their changes suitably tagged, a user could, in principle, sample a few “diffs” for a given document, discern that the changes wrought by, say, “Rambo” make the document worse, not better, from his point of view; there could then easily be a means of selectively “extracting” the view that is Rambo-independent (i.e., selectively deleting everything in which “Rambo” had a hand). Since all drafts and diffs and vetters’ tags are stored, this would democratize Wikipedia even more: Not only could any vigilante come in and add, subtract, or over-write whatever he wishes: any user could elicit a “view” that was selectively purged of all contributions of that vigilante, if he so wishes.

But this has to be easy to do and transparent. The current versioning of Wikipedia is far too awkward and user-unfriendly.
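The proposed fix could be prototyped very simply. A toy sketch (the names and the revision format are mine; real MediaWiki revisions compose, so skipping one editor’s diff can conflict with later edits in ways this crude model ignores):

```python
def view_without(revisions, excluded_author):
    """Replay a revision log, skipping every edit by one contributor,
    to yield a view of the document purged of that contributor's hand."""
    doc = {}
    for author, section, text in revisions:
        if author == excluded_author:
            continue  # pretend this vigilante never edited
        if text is None:
            doc.pop(section, None)  # an edit that deletes a section
        else:
            doc[section] = text
    return doc

history = [
    ("Alice", "intro", "Wikipedia is a free encyclopedia."),
    ("Rambo", "intro", None),               # vigilante deletes the intro
    ("Bob",   "body",  "Anyone can edit."),
    ("Rambo", "body",  "My turf. Keep out."),
]

print(view_without(history, "Rambo"))
# {'intro': 'Wikipedia is a free encyclopedia.', 'body': 'Anyone can edit.'}
```

The reader gets back Alice’s intro and Bob’s body text, with every trace of “Rambo” extracted.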

Holy Hermeneutics

Re: The God Theory

Yes, quantum mechanics has its puzzles, but it works, it predicts and explains, in every single case, no exceptions, whereas the arbitrary flummery the above book (for which I’ve only skimmed the blurb) favors merely fudges, feebly, after the fact.

There’s nothing wrong with parameter-settings, by the way; they only seem arbitrary because empirical laws are not matters of necessity, the way mathematical laws are, but merely matters of contingency, in other words, “accidents”. And asking “why” about them is only like asking “why” about 2+2=4 in the sense that with 2+2=4 the answer is always the same as the answer for any other mathematical law: “Because otherwise it would be self-contradictory.” Whereas when we ask “why” the cosmological constant is what it is, or why the law of universal gravitation, etc., the answer is merely that if it were otherwise, according to the laws of the way it actually happens to be (as far as we know today) it would not work.

So contingencies are less satisfying than necessities. One must ask: why not? What is the function of explanation: to describe and predict correctly, objectively, or to give people a soothing subjective feeling? It’s natural to ask for both, but the buck has to stop somewhere, and subjectivity is pretty restless except if it folds in on itself. So it never occurs to us to ask “why” about consciousness, or about god.

(Consciousness — the fact that we feel — is a cartesian “given” — the given of all givens; god, of course, is merely an invention, without consciousness’s privileged status of being, along with non-contradiction, the only other thing that is not open to doubt.)

But surely both are more arbitrary than the cosmological constants! Yet they feel more like answers than questions — to those who are naive and unexacting about such things…

I often wonder why the naive skeptic does not feel impelled to ask “why” even about the Platonic “law” of non-contradiction: Not, I think, because of either a profound grasp of or an abiding allegiance to logic, but rather because of the kind of subjective glazing-over and tuning-out that happens whenever we confront an argument that involves more logical steps than we can follow. We just say “yeah, yeah, whatever” — too feeble-minded to either grasp or challenge.

So when we feel inclined to (completely capriciously) reduce all questions, answered and unanswered, answerable and unanswerable, to the one indubitable fact that we can always hold in mind all at once (as long as we are compos mentis, and sober), namely, the fact that we feel, we are simply confessing (without feeling it!) that when we asked for an “explanation” we never really meant, and would never have settled for, something objective, at all: When we ask “why” we are asking for the feeling that our question has been answered.

P.S. Although I’d never heard of him before, a few quick googlings suggest that the author is a fallen physicist, collaborator of another of the same ilk (a “paraphysicist”), and propounder of apparent voodoo about which a layman like myself can only say “yeah, yeah, whatever”… His “digital universe” — in collaboration with wikipedia, apparently — shows how closely “openness” cohabits with quackery. (I sometimes think god has nothing better to do than to keep orchestrating cruel caricatures of my antics…)

ANOSOGNOSIA


Why don’t they tell us (but would we want to know?)

that time’s inflationary,
its power of purchase doesn’t grow,
it shrinks:

at first imperceptibly,
then accelerating steadily,
till days are flicking by like phone poles from a wailing train

or stroboscopic fragments of your
parents’ nervous smiles
as you keep hurtling
round and round
on some
vertiginous
vehicle
of amusement.

Why, I remember, when I was your age, a morning would last a day,
a summer a year, a decade nearly a lifetime.
If I’d held my hands apart to show my life line up till then, look:
this is how long it would have been…
So do you think where I am now is seven times as far?
Ha! twice, three times at most.
And the last third’s
the shortest
.”

But who can understand such baleful reckoning? and besides,
a proportionate paling of our sense and recollection
must be factored in too,
diminishing awareness of our ills
even as they increase.

Lord Russell had a killjoy uncle who informed him as a child,
at the close of an especially glorious day,
“You’ll never know another day like this one.”
And little Russell cried and cried, and then forgot
.

Even “seize the rosebuds”
is a futile admonition
that rings true only
for the long since anosmic.
And isn’t there (confess it)
some diffraction too
in those wistful flashbacks to your day, when that buck stretched so much farther than it does today?

Which buck?
There’s no fixed scale
of barter or utility.
The goods, they differed too then,
and not just in their quality,
but in your own ability
to savor it.

Generation gap.
Communication gap.
It’s Zeno’s paradox:
You can’t get there from here.
“Tempora mutantur. Et nos?”

Why don’t they tell us (but would we want to know)?


CODA (Hommage to Paul MacLean)

The one frail consolation
Once you’re just an old iguana
Is you can’t recall what’s bothering you
Even if you wanna

Istvan Hesslein (c. 1993)

RGN: 1926-2006 — (2006-05-10)

   A coward. An unforgivable, solipsistic coward, and hypocrite.

There are no words. By her own hand, like her life’s work, the unimaginable sufferings of the blighted body of a toweringly noble soul have at last come to an end, but not as and when she would have wished. The cruel decades have made a macabre mockery of any thought of what she “deserved.” Life failed her, unpardonably, and death failed her too, and most of all, the “healing profession” failed her; but she has triumphed over it all nonetheless, pellucid and undiminished, as not a single one of us could have come close to doing. I loved her and was in awe of her for most of my conscious life, far, far too much to even conceive of wishing her to have had to endure one moment longer; and yet her loss is unspeakable.

Descartes’ baby, the soul and art

From the fact that most adults and children believe in an immaterial, immortal soul, Paul Bloom, in Descartes’ Baby, concludes that this is somehow part of our evolved genetic heritage.

If so, then it would seem to follow, by the same token, that the belief in the divinity and supernatural powers of kings is likewise innate, and so are innumerable other beliefs, actual and potential, right down to Schulz’s Great Pumpkin.

More likely, our and our children’s belief in the soul arises from (1) our undoubtable (and undoubtedly evolved) Cartesian feeling that we ourselves feel (“cogito ergo sum” — “sentio ergo sentitur”), (2) our less indubitable but nevertheless irresistible (and probably evolved) feeling that others that are sufficiently like us feel too (“mind-reading”), plus (3) our complete inability to explain either the causes or the effects of feelings physically (the “mind/body problem”), so that, when forced, the only explanation we feel at home with is telekinetic dualism (“mind over matter”).

I am not at all sure that this amounts to an innate belief in the soul, but if it does, we certainly didn’t need empirical studies of children or adults to demonstrate it: Descartes could have deduced it from his armchair by reason and introspection alone.

Paul Bloom also presents developmental data that he thinks bear upon the question of what is and is not art, and on why we prefer originals to forgeries or to misattributed works by lesser artists. The findings concern children’s ability to understand that a picture is a picture of an object, rather than just an object itself; that drawings they (or others) have drawn are drawings of what they meant to draw, even if they don’t look like them; that a deliberate artifact is different from an accidental one, or a natural object; and that whereas children may sometimes prefer copies of things to the originals, when it comes to their own teddy bears, they’d rather keep the original.

This tells us about children’s understanding of representation and intention, and their ability to make the artifact/non-artifact distinction, but not about the art/non-art, let alone the good/bad-art, distinction. Nor do the findings on children’s attachments to particular things help, since the very same preference (for originals over copies or different objects) would apply to Eva Braun’s underwear (which would have some cult/fetish/collector value while believed to have been hers, none once proven otherwise); hence these findings are about authorship, not art.

As to “Descartes’ Baby” — an apocryphal story that Descartes was so grief-stricken at the death of his 5-year-old daughter Francine that he built a life-like robot of her, which he took with him everywhere until it was discovered by a ship captain so horrified by it that he threw it overboard — this is another example of our overwhelming (and no doubt innate) “mind-reading” tendency to treat creatures that look and behave as if they feel as if they really do feel (even if the feeling that evokes in us is horror or disgust). This innate tendency of ours is put to more practical scientific use in Turing’s Test.

Basic Science vs. Reverse Engineering

Maupertuis’s Principle (of Least Action) is not quite the same as Darwin’s Principle of Random Variation and Selective Retention (i.e., automatic design based on the post hoc adaptive advantages — for survival and reproduction — of natural developmental or random variations). But the two would be ominously close if it weren’t for the (subsequent) discovery of evolution’s mechanism: Mendelian genetics and eventually the DNA double helix.

That said, there nevertheless is a big difference between Biology and Physics: Physics is studying the basic laws of the universe, whereas Biology is mostly very local reverse-engineering: Figuring out how (naturally designed and selected, via DNA variation/retention) devices (organs, organisms, biological systems) work, by reverse-engineering them. This is exactly the same as forward engineering, which applies the laws of physics and the principles of engineering in order to design and build systems useful to Man, except run in reverse: Biology simply takes already-built systems and tries to figure out what laws of physics and principles of engineering underlie them and make them work.

In contrast, Physics is not, I think, usefully thought of as merely reverse-engineering designed systems (e.g., the universe or the atom). The laws of physics precede and underlie all the possible systems that can be designed and built by either engineers, or the Blind Watchmaker.

Stevan Harnad

The Syntactic Web

In reality the “semantic web” is, and can only ever be, a “syntactic web.” Syntax is merely form — the shape of arbitrary objects called symbols, within a formal notational system adopted by an agreed and shared convention. Computation is the rule-based manipulation of those symbols, with the rules and manipulations (“algorithms”) based purely and mechanically on the shapes of the symbols, not their meaning — even though most of the individual symbols, as well as the combinations of symbols, are systematically interpretable (by human minds) as having meaning.
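A minimal illustration of the point (my toy example, not from the original): the rewrite rules below consult only the shapes of the symbols ‘S’ and ‘0’, yet we can systematically interpret the result as addition.

```python
def add(x, y):
    """Shape-based rewriting over successor numerals:
       add('0', y)      -> y
       add('S' + x, y)  -> 'S' + add(x, y)
    The rules never consult what the symbols mean."""
    if x == "0":
        return y
    assert x.startswith("S"), "malformed numeral"
    return "S" + add(x[1:], y)

result = add("SS0", "SSS0")
print(result)  # 'SSSSS0', interpretable (by us) as 2 + 3 = 5
```

The machine just shuffles shapes; the interpretation of “SSSSS0” as the number five lives entirely in our heads — which is the symbol grounding problem in miniature.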

Semantics, in contrast, concerns the meanings of the symbols, not their shape, or the syntactic manipulation of their shapes. The “symbol grounding problem” is the problem of how symbols get their meanings, i.e., their semantics, and the problem is not yet solved. It is clear that symbols in the brain are grounded, but we do not yet know how. It is likely that grounding is related to our sensorimotor capacity (how we are able to perceive, recognise and manipulate objects and states), but so far that looks as if it will only connect symbols to their referents, not yet to their meaning. Frege‘s notion of “sense”, which is again just syntactic, because it consists of syntactic rules, still does not capture meaning. Nor does formal model-theoretic semantics, which likewise merely finds another syntactic object or system that follows the same rules as those of the syntactic object or system for which we are seeking the meaning.

So whereas sensorimotor grounding — as in a robot that can pass the Turing Test — does break out of the syntactic circle, it does not really get us to meaning (though it may be as far as cognitive science will ever be able to get us, because meaning may be related to the perhaps insoluble problem of consciousness).

Where does that leave the “semantic web”? As merely an ungrounded syntactic network. Like many useful symbol systems and artificial “neural networks,” the network of labels, links and connectivity of the web can compute useful answers for us, has interesting, systematic correlates (e.g., as in latent “semantic” analysis), and can be given a systematic semantic interpretation (by our minds). But it remains merely a syntactic web, not a semantic one.

The Cognitive Killer-App 2006-04-16

Kevin Kelly thinks the web is not only what it really is — which is a huge peripheral memory and reference source, along with usage stats — but also a kind of independent thinking brain.

It’s not, even though it has connections, as does a neural net (which is likewise not a thinking brain).

KK is right that googling is replacing the consultation of our own onboard memories, but that is par for the course, ever since our species first began using external memories to increase our total information storage and processing capacity: Mimesis, language and writing were earlier, and more dramatic precursors. (We’re talking heads who already feel as helpless without our interlocutors, tapes and texts today as KK says we all will — and some already do today — without the web.)

And KK misses the fact that the brain is not, in fact, just a syntactic machine, the way the web is: There is no “semantic” web, just an increasingly rich “syntactic web“.

Nor (in my opinion) is the web’s most revolutionary potential in its role of peripheral mega-memory and hyper-encyclopedia/almanac. It is not even — though it comes closer — in its interactive Hyde-Park role in blogs and wikipedias. That’s just an extension of Call-In Chat Shows, Reality TV, acting-out, and everyone-wants-to-be-a-star. We’ve all had the capacity to talk for hundreds of thousands of years, but most of us have not found very much worth saying — or worth hearing by most others. The nature of the gaussian distribution is such that that is bound to remain a demographic rarity, even if the collective baseline rises — which I am not at all sure it’s doing! We just re-scale…

No, I think the real cognitive-killer-app of the web is the quote/commentary capability, but done openly — “skywriting”. At the vast bottom level this will just be the Hyde-Park “you know what’s wrong with the world dontcha?” pub-wisdom of the masses, gaussian noise. But in some more selective, rigorous and answerable reaches of cyberspace — corresponding roughly to what refereed, published science and scholarship used to be in the Gutenberg era — remarkable PostGutenberg efflorescences are waiting to happen: waiting only for the right demography to converge there, along with its writings, all Open Access, so the skywriting can begin in earnest.

Stevan Harnad