Pondering whether there is something non-arbitrary to pin down in the notion of "intelligence" (or "cognition") is reminiscent of what philosophers tried (unsuccessfully) to do with the notion of "knowing" (or "cognizing"):
BELIEF: Do I know (cognize) that "the cat is on the mat" if I simply believe the cat is on the mat?
No, the cat really has to be on the mat.
TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat?
No, I could believe it's true for the wrong reasons, or just by luck.
JUSTIFIED TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof, that it's on the mat?
No, the evidence could be unreliable or wrong, or the proof could be wrong or irrelevant.
VALID, JUSTIFIED, TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof, that it's on the mat, and neither the evidence nor the proof is unreliable or wrong, or otherwise invalid?
How do I know the justification is valid?
So the notion of "knowledge" is in the end circular.
"Intelligence" (and "cognition") has this affliction, and Shlomi Sher's notion that we can always make it break down in GPT is also true of human intelligence: they're both somehow built on sand.
Probably a more realistic notion of "knowledge" (or "cognition," or "intelligence") is that they are not only circular (i.e., auto-parasitic, like the words and their definitions in a dictionary), but also approximate. Approximation can be tightened as much as you like, but it's still not exact or exhaustive. A dictionary cannot be infinite. A picture (or object) is always worth more than 1000++ words describing it.
Ok, so set aside words and verbal (and digital) "knowledge" and "intelligence": Can nonverbal knowledge and intelligence do any better? Of course, there's one thing nonverbal knowledge can do, and that's to ground verbal knowledge by connecting the words in a speaker's head to their referents in the world through sensorimotor "know-how."
But that's still just know-how. Knowing that the cat is on the mat is not just knowing how to find out whether the cat is on the mat. That's just empty operationalism. Is there anything else to "knowledge" or "intelligence"?
Well, yes, but that doesn't help either: Back to belief. What is it to believe that the cat is on the mat? Besides all the failed attempts to upgrade it to "knowing" that the cat is on the mat, which proved circular and approximate, even when grounded by sensorimotor means, it also feels like something to believe something.
But that's no solution either. The state of feeling something, whether a belief or a bee-sting, is, no doubt, a brain state. Humans and nonhuman animals have those states; computers and GPTs and GPT robots (so far) don't.
But what if the artificial ones eventually did feel? What would that tell us about what "knowledge" or "intelligence" really are, besides FELT, GROUNDED, VALID, JUSTIFIED, TRUE VERBAL BELIEF AND SENSORIMOTOR KNOW-HOW ("FGVJTVBSK")?
That said, GPT is a non-starter, being just algorithm-tuned statistical figure-completions and extrapolations derived from an enormous ungrounded verbal corpus produced by human FGVJTVBSKs. It is a surprisingly rich database/algorithm combination of the structure of verbal discourse, consisting of the shape of the shadows of "knowledge," "cognition," "intelligence" (and, for that matter, "meaning") that are reflected in the words and word-combinations produced by countless human FGVJTVBSKs. And they're not even analog shadows…