Rubens Rainbow

Arnold Trehub wrote: “brain analogs… are much more informative than mere correlates”

I am going to think out loud about the possibility of “duals” here, because I am not really sure yet what implication I want to draw from them for the question of psychophysical “analogs” vs “correlates.”

The question is interesting (and Saul Kripke gave it some thought in the ’70s, when he expressed some skepticism about the coherence, hence the very possibility, of the notion of “spectrum inversion”): Could you and I really use exactly the same language, indistinguishably, and live and interact indistinguishably in the world, while (unbeknownst to us) green looks (i.e., feels) to me the way red does to you, and vice versa?

Kripke thought the answer was no, because with that simple swap would come an infinity of other associated similarity relations, all of which would likewise have to be systematically adjusted to preserve the coherence of what we say as well as do in the world. (“Green” looks more like blue, “red” looks more like purple, etc.)

At the time, I agreed, because I had come to much the same conclusion about semantic swapping: Would a book still be systematically interpretable if every token of “less” were interpreted to mean “more” and vice versa? (I don’t mean just making a swap between the two arbitrary terms we use, but between their intended meanings, while preserving the usage of the terms exactly as they are used now.)

I was pretty sure that the swap would run into detectable trouble quickly for the simple reason that “less” and “more” are not formal “duals” the way some terms and operations are in mathematics and logic. My intuition — though I could not prove it — was that almost all seemingly local pairwise swaps like less/more would eventually require systematic swaps of countless other opposing or contradictory or dependent terms (“I prefer/disprefer having less/more money…”), eventually even true/false, and that standard English could not bear the weight of such a pervasive semantic swap and still yield a coherent systematic interpretation of all of our verbal discourse. And that’s even before we ask whether the semantic swap could also preserve the coherence between our verbal discourse and our actions in the world.
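For contrast, consider the kind of swap that does go through globally in a formal system (a standard textbook illustration, added here only by way of contrast): in propositional logic, swapping conjunction with disjunction (and “true” with “false”) throughout maps every valid equivalence onto another valid equivalence, as in the De Morgan pair

$$\neg(p \land q) \equiv \neg p \lor \neg q \qquad\longleftrightarrow\qquad \neg(p \lor q) \equiv \neg p \land \neg q$$

“Less” and “more” in English come with no such formal guarantee, which is why the swap would have to keep propagating without any principled end.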

But since then I’ve come to a more radical view about meaning itself, according to which the only difference between a text (a string of symbols P instantiated in a static book or a dynamic computer) that is systematically interpretable as meaning something, but has no “intrinsic intentionality” (in Searle’s sense), and a text (say, a string of symbols P instantiated in the brain of a conscious person thinking the thought that P) is that it feels like something to be the person thinking the thought that P, whereas it feels like nothing to be the book or the computer instantiating the symbol string. Systematic interpretability (“meaningfulness”) in both cases, but (intrinsic) meaning only in the (felt) one.

I further distinguish meaning, in this felt sense, from mere grounding, which is yet another property that a mere book or computer lacks: Only a robot that could pass the robotic Turing Test (TT; the capacity to speak and act in the real world, indistinguishably from a person to a person, for a lifetime) would have grounded symbols. But if the robot did not feel, it still would not have symbols with intrinsic “intentionality”; it would still be more like a book or computer, whose sentences are systematically interpretable but mean nothing except in the mind of a conscious (i.e., feeling) user. (It is of course an open and completely undecidable question whether a TT-passing robot would or would not actually feel, because of the other-minds problem. I think it would — but I have no idea how or why!)

But this radical equation of intrinsic meaning (as opposed to mere systematic interpretability) with feeling would make Kripke’s observations about color-swapping (i.e., feeling-swapping) and my observations about meaning-swapping into one and the same thing.

It is not only that verbal descriptions fall short of feelings in the way that verbal descriptions fall short of pictures, but that feelings (say, feelings of greater or lesser intensity) and whatever those feelings are “about” (in the sense that the feelings are caused by, and somehow appertain to, those things) are incommensurable: The relation between an increase in a physical property and its felt quality (e.g., an increase in physical intensity and a felt increase in intensity) is a systematic (and potentially very elaborate and complicated) correlation (more with more and less with less), but does it even make sense to say it is a “resemblance”?

For this reason, brain “analogs” too are just systematic correlates insofar as felt quality is concerned. I may have (1) a neuron in my brain whose intensity (or frequency) of firing is in direct proportion to (2) the intensity of an external stimulus (say, the amplitude of a sinusoid at 440 Hz). In addition, there is the usual log-linear psychophysical relationship between the stimulus intensity (2) and (3) my ratings of (felt) intensity. The stimulus intensity (2) and the neuronal intensity (1) are clearly in an analog relationship. So are the stimulus intensity (2) and my intensity ratings (3) (as rated on a 1-10 scale, say). And so are the neuronal intensity (1) and my intensity ratings (3). But you could get all three of those measurements, hence all three of those correlations, out of an unfeeling robot. (I could build one already today.) How does (4) the actual feeling of the intensity figure in all this?
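To make that concrete, here is a toy sketch of such an unfeeling robot (the proportionality constants, the noise level and the log-linear rating rule are arbitrary placeholders, chosen only to illustrate the three-way correlation):

```python
import numpy as np

# Toy "unfeeling robot" (illustration only: constants, noise, and the
# log-linear rating rule are arbitrary placeholders).
rng = np.random.default_rng(0)

stimulus = np.linspace(0.1, 10.0, 100)                           # (2) physical intensity (e.g., amplitude of a 440 Hz tone)
neuron = 5.0 * stimulus + rng.normal(0.0, 0.1, stimulus.size)    # (1) firing rate in direct proportion to the stimulus
rating = np.clip(np.round(1.0 + 3.0 * np.log(stimulus)), 1, 10)  # (3) log-linear 1-10 "intensity rating"

# All three pairwise correlations come out high, with no feeling anywhere in the loop.
for a, b, label in [(stimulus, neuron, "stimulus (2) vs neuron (1)"),
                    (stimulus, rating, "stimulus (2) vs rating (3)"),
                    (neuron, rating, "neuron (1) vs rating (3)")]:
    print(label, "r =", round(float(np.corrcoef(a, b)[0, 1]), 2))
```

Nothing in that loop feels anything, yet the correlations (1)-(2), (2)-(3) and (1)-(3) are all there.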

You want to say that my intensity ratings are based upon an “analog” of that felt intensity. Higher rated intensity is systematically correlated with higher felt intensity, and lower rated intensity is correlated with lower felt intensity. But in what way does a higher intensity rating RESEMBLE a higher intensity feeling? Is the rating not just a notational convention I use, like saying that “higher” sound-frequencies are “higher”? (They’re not really higher, like higher in the sky, are they?) (The same thing is true if I instead use the “analog” convention of matching the felt frequency with how high I raise my hand. And if it’s instead an involuntary reflex rather than a voluntary convention that is causing the analog response — say, pupillary dilation in response to decreased light intensity — then the correlated feeling is even more sidelined!)

The members of our species (almost certainly) all share roughly the same feelings. So we can agree upon, share and understand naming conventions that correlate systematically with those shared feelings. I use “hot” for feeling hot and “cold” for feeling cold, because we have both felt those feelings and we share the convention on what we jointly agree to call what.

That external corrective constraint gets us out of another kind of incorrigibility: Wittgenstein pointed out in his “private-language argument” that there could not be a purely private language because then there could be no error-correction, hence there would be no way for me to know whether (i) I was indeed using the same word systematically to refer to the same feeling on every occasion or (ii) it merely felt as if I was doing so, whereas I was actually using the words arbitrarily, and my memories were simply deceiving me.

So feelings are clearly deceiving if we are trying to “name” them systematically all on our own. But the only thing that social conventions can correct is the sensorimotor grounding of those names: What we call (and do with) what, when. I can’t know for sure what you are feeling, but if you described yourself as feeling “hot” when the temperature had gone down, and as feeling “happy” when you had just received some bad news, I would suspect something was amiss.

Those are clearly just correlations, however. Words are not analogs of feelings, they are just arbitrary labels for them. And although a verbal description of a picture can describe the picture as minutely as we like, it is still not an analog of the picture, just a symbolic description that can be given a systematic and coherent interpretation, both in words and actions (if it is TT-grounded).

Yet we all know it can’t be symbolic descriptions all the way down: Some of our words have to have been learned from (grounded in) direct sensorimotor (i.e., robotic) experience. How/why did that experience have to be felt experience? That’s the question we can’t answer; the explanatory gap. And a lemma to that unanswered question is: How/why did that felt experience have to resemble what it was about — as opposed to merely feeling like it resembles what it is about? Why isn’t grounding just “functing” (e.g., the cerebral substrate that enables us to do and say whatever needs to be done and said in order to survive, succeed and reproduce, at TT-scale)? And why is there anything more to meaning than just that?

To close with a famous example of analogs: Roger Shepard showed psychophysically that the time it takes to detect whether two shapes are different shapes or just the same shape, rotated, is proportional to the degree of rotation. This suggests that the brain is encoding the shapes in some analog form, and then doing some real-time analog rotation to test whether they match. This may all be true, but as it happens, the rotation occurs too fast for the subject to feel that it is happening! So here we have the same three-way correlation (internal neural process (1), external stimulus (2), subject’s output (3)) as in the intensity judgments, but without any correlated feeling.
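For concreteness, here is a toy version of such an analog rotation process (a stand-in for whatever the brain actually does; the shapes and the one-degree step size are arbitrary): one shape is rotated in small fixed increments until it overlaps the other, so the number of steps, standing in for reaction time, grows in proportion to the angular disparity.

```python
import numpy as np

# Toy analog-rotation matcher (illustration only; the shapes and the
# 1-degree step size are arbitrary).
def rotate(points, theta):
    """Rotate a set of 2-D points by the angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]])

def steps_to_match(shape_a, shape_b, step=np.radians(1.0)):
    """Rotate shape_a in fixed increments until it overlaps shape_b;
    the step count stands in for reaction time (None = different shapes)."""
    current = shape_a
    for steps in range(361):
        if np.allclose(current, shape_b, atol=1e-6):
            return steps
        current = rotate(current, step)
    return None

shape = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
for angle in (20, 60, 120):
    print(angle, "deg ->", steps_to_match(shape, rotate(shape, np.radians(angle))), "steps")
```

Run on disparities of 20, 60 and 120 degrees, the step counts come out as 20, 60 and 120: the linear relation Shepard observed in reaction times.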

So is the neural “analog” still to count as an analog of feeling, even when there is no feeling?

By the very same token, how is one to determine whether psychophysical data are analogs of feeling, rather than merely systematic functional correlates (especially when the explanation of how and why the correlated functions are felt at all remains a complete mystery, causally, hence functionally)? (This is the public counterpart of Wittgenstein’s private problem of error.)

All this said, I still think that global systematic duals do not in general work, so neither sensory nor semantic pairwise swapping is possible (except perhaps in some local special cases) while preserving the coherence of either our actions in the world or the interpretability of our verbal discourse. I don’t think, however, that the impossibility of coherent global duals, even if it is real, entails that feelings are analogs of physical properties rather than merely systematic correlates.

Stevan Harnad
