Ebeneezer’s Yuletide Homily

This interview just reiterated how gob-smacked everyone still is at how much transformers turn out to be able to do by swallowing and chewing more and more of the Internet with more and more computers, filling in all the local bigram blanks, globally.
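For readers who want that claim made concrete, here is a deliberately crude Python sketch of what “filling in local bigram blanks” means: tally which word follows which in a corpus, then extend a prompt by sampling from the tallies. (The toy corpus and function names are invented for illustration; real transformers learn weighted long-range contexts, not raw word-pair counts.)

```python
import random
from collections import Counter, defaultdict

# Toy corpus; any text would do.
corpus = "the cat is on the mat and the cat sat on the mat".split()

# Count how often each word follows each other word (the "bigram blanks").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=5):
    """Extend a one-word prompt by sampling likely successors."""
    out = [word]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        words, counts = zip(*successors.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat is on the mat"
```

Scale that up from word pairs to long contexts, and from raw counts to trained weights, and you have the family of tricks everyone is gob-smacked by.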

They’re right to be gob-smacked and nonplussed, because nobody expected it, and nobody has come close to explaining it. So what do they do instead? Float a sci-fi icon – AY! GEE! EYE! – with no empirical substance or explanatory content, just “It’s Coming!”, plus a lot of paranoia about what “they’re” not telling us, who’s going to get super-rich out of it all, and whether it’s going to destroy us all.

And meanwhile Planetary Melt-Down (PMD) proceeds apace, safely muzzled by collective cog-diss, aided and abetted by those other three Anthropocene favorites: Greed, Malice and Bellicosity (GMB) — a species armed to the hilt, with both weapons and words, and words as weapons (WWWW).

Wanna know what I think’s really going on? Language itself has gone rogue, big-time. It’s Advanced alright; and General; but it’s anything but Intelligence, in our hands, and mouths.

And it started 299,999 B.C.

GPT as Syntactic Shadow-Puppetry

Pondering whether there is something non-arbitrary to pin down in the notion of “intelligence” (or “cognition”) is reminiscent of what philosophers tried (unsuccessfully) to do with the notion of “knowing” (or “cognizing”):

BELIEF: Do I know (cognize) that “the cat is on the mat” if I simply believe the cat is on the mat? 

No, the cat really has to be on the mat.

TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat?

No, I could believe it for the wrong reasons, or by luck.

JUSTIFIED TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof that it’s on the mat?

No, the evidence could be unreliable or wrong, or the proof could be wrong or irrelevant.

VALID, JUSTIFIED, TRUE BELIEF: So do I know (cognize) that “the cat is on the mat” if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof that it’s on the mat, and neither the evidence nor the proof is unreliable or wrong, or otherwise invalid?

How do I know the justification is valid?

So the notion of “knowledge” is in the end circular.

“Intelligence” (and “cognition”) has the same affliction, and Shlomi Sher’s point that we can always make it break down in GPT is just as true of human intelligence: both are somehow built on sand.

Probably a more realistic notion of “knowledge” (or “cognition,” or “intelligence”) is that they are not only circular (i.e., auto-parasitic, like the words and their definitions in a dictionary), but also approximate. Approximation can be tightened as much as you like, but it’s still not exact or exhaustive. A dictionary cannot be infinite. A picture (or object) is always worth more than 1000++ words describing it.
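To make the “auto-parasitic” point concrete, here is a minimal sketch (with an invented six-word mini-lexicon, so purely illustrative): in any finite dictionary, following definitions word by word must eventually loop back on itself.

```python
# Hypothetical mini-lexicon: every word is defined only in other words.
toy_dictionary = {
    "cat":    ["feline", "animal"],
    "feline": ["cat"],
    "animal": ["living", "thing"],
    "living": ["thing", "animal"],
    "thing":  ["animal"],
    "mat":    ["thing"],
}

def definition_chain(word, seen=None):
    """Follow first-word definitions until some word repeats."""
    seen = seen or []
    if word in seen:
        return seen + [word]  # the chain has closed on itself
    return definition_chain(toy_dictionary[word][0], seen + [word])

print(" -> ".join(definition_chain("cat")))
# cat -> feline -> cat: every chain in a finite dictionary ends this way
```

No chain ever bottoms out in anything but more words; the grounding, if there is any, has to come from outside the dictionary.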

Ok, so set aside words and verbal (and digital) “knowledge” and “intelligence”: can nonverbal knowledge and intelligence do any better? Of course, there’s one thing nonverbal knowledge can do, and that’s to ground verbal knowledge by connecting the words in a speaker’s head to their referents in the world through sensorimotor “know-how.”

But that’s still just know-how. Knowing that the cat is on the mat is not just knowing how to find out whether the cat is on the mat. That’s just empty operationalism. Is there anything else to “knowledge” or “intelligence”?

Well, yes, but that doesn’t help either: Back to belief. What is it to believe that the cat is on the mat? Besides all the failed attempts to upgrade it to “knowing” that the cat is on the mat, which proved circular and approximate, even when grounded by sensorimotor means, it also feels like something to believe something. 

But that’s no solution either. The state of feeling something, whether a belief or a bee-sting, is, no doubt, a brain state. Humans and nonhuman animals have those states; computers, GPTs, and GPT-driven robots (so far) don’t.

But what if the artificial ones eventually did feel? What would that tell us about what “knowledge” or “intelligence” really are – besides FELT, GROUNDED, VALID, JUSTIFIED, TRUE VERBAL BELIEF AND SENSORIMOTOR KNOWHOW (“FGVJTVBSK”)?

That said, GPT is a non-starter, being just algorithm-tuned statistical figure-completions and extrapolations derived from an enormous ungrounded verbal corpus produced by human FGVJTVBSKs: a surprisingly rich database/algorithm combination of the structure of verbal discourse. That structure consists of the shape of the shadows of “knowledge,” “cognition,” “intelligence” – and, for that matter, “meaning” – that are reflected in the words and word-combinations produced by countless human FGVJTVBSKs. And they’re not even analog shadows.