Pondering whether there is something non-arbitrary to pin down in the notion of "intelligence" (or "cognition") is reminiscent of what philosophers tried (unsuccessfully) to do with the notion of "knowing" (or "cognizing"):
BELIEF: Do I know (cognize) that "the cat is on the mat" if I simply believe the cat is on the mat?
No, the cat really has to be on the mat.
TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat?
No, I could believe it's true for the wrong reasons, or by luck.
JUSTIFIED TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof, that it's on the mat?
No, the evidence could be unreliable or wrong, or the proof could be wrong or irrelevant.
VALID, JUSTIFIED, TRUE BELIEF: So do I know (cognize) that "the cat is on the mat" if I believe the cat is on the mat and the cat is really on the mat and I believe it because I have photographic evidence, or a mathematical proof, that it's on the mat, and neither the evidence nor the proof is unreliable or wrong, or otherwise invalid?
How do I know the justification is valid?
So the notion of "knowledge" is in the end circular.
"Intelligence" (and "cognition") has this affliction too, and Shlomi Sher's point that we can always make it break down in GPT applies just as well to human intelligence: they're both somehow built on sand.
Probably a more realistic notion of "knowledge" (or "cognition," or "intelligence") is that these notions are not only circular (i.e., auto-parasitic, like words and their definitions in a dictionary) but also approximate. The approximation can be tightened as much as you like, but it's still not exact or exhaustive. A dictionary cannot be infinite. A picture (or object) is always worth more than 1000++ words describing it.
Ok, so set aside words and verbal (and digital) "knowledge" and "intelligence": Can nonverbal knowledge and intelligence do any better? Of course, there's one thing nonverbal knowledge can do, and that's to ground verbal knowledge by connecting the words in a speaker's head to their referents in the world through sensorimotor "know-how."
But that's still just know-how. Knowing that the cat is on the mat is not just knowing how to find out whether the cat is on the mat. That's just empty operationalism. Is there anything else to "knowledge" or "intelligence"?
Well, yes, but that doesn't help either: Back to belief. What is it to believe that the cat is on the mat? Besides all the failed attempts to upgrade it to "knowing" that the cat is on the mat, which proved circular and approximate, even when grounded by sensorimotor means, it also feels like something to believe something.
But that's no solution either. The state of feeling something, whether a belief or a bee-sting, is, no doubt, a brain state. Humans and nonhuman animals have those states; computers, GPTs, and GPT robots (so far) don't.
But what if the artificial ones eventually did feel? What would that tell us about what "knowledge" or "intelligence" really are, besides FELT, GROUNDED, VALID, JUSTIFIED, TRUE VERBAL BELIEF AND SENSORIMOTOR KNOWHOW ("FGVJTVBSK")?
That said, GPT is a non-starter, being just algorithm-tuned statistical figure-completions and extrapolations derived from an enormous ungrounded verbal corpus produced by human FGVJTVBSKs. It is a surprisingly rich database/algorithm combination of the structure of verbal discourse, one that consists of the shape of the shadows of "knowledge," "cognition," "intelligence" (and, for that matter, "meaning") reflected in the words and word-combinations produced by countless human FGVJTVBSKs. And they're not even analog shadows…