Symbols and Sense

Letter to TLS, Nov 4 2011:

“Stevan Harnad misstates the criteria for the Turing Test when he describes a sensing robot that could pass the test by recognizing and interacting with people and objects in the same way that a human can (October 21). Alan Turing’s formulation of the Turing Test specifies a computer with no sensors or robotic apparatus. Such a computer passes the test by successfully imitating a human in text-only conversation over a terminal.
    “Significantly, and contrary to Harnad’s formulation, no referential “grounding” of symbols is required to pass the Turing Test.”

                         David Auerbach 472 9th Street, New York 11215.

David Auerbach (TLS Letters, November 4) is quite right that in his original 1950 formulation, what Turing had called the “Imitation Game” (since dubbed the “Turing Test”) tested only verbal capacity, not robotic (sensory/motor) capacity: only symbols in and symbols out, as in today’s email exchanges. Turing’s idea was that if people were completely unable to tell a computer apart from a real, live pen-pal through verbal exchanges alone, the computer would really be thinking. Auerbach is also right that — in principle — if the verbal test could indeed be successfully passed through internal computation (symbol-manipulation) alone, then there may be no need to test with robotic interactions whether the computer’s symbols were “grounded” in the things in the world to which they referred. But 2012 is Alan Turing Year, the centenary of his birth. And 62 years since it was published, his original agenda for what is now called “cognitive science” has been evolving. Contrary to Turing’s predictions, we are still nowhere near passing his test and there are by now many reasons to believe that although being able to pass the verbal version might indeed be evidence enough that thinking is going on, robotic grounding will be needed in order to actually be able to pass the verbal test, even if the underlying robotic capacity is not tested directly. To believe otherwise is to imagine that it would be possible to talk coherently about the things in the world without ever being able to see, hear, touch, taste or smell any of them (or anything at all).

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346. http://cogprints.org/0615/

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4): 9-10.

Harnad, S. (1994) Levels of Functional Equivalence in Reverse Bioengineering: The Darwinian Turing Test for Artificial Life. Artificial Life 1(3): 293-301.

Harnad, S. (2000) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on “Alan Turing and Artificial Intelligence”)

Harnad, S. (2001) Minds, Machines and Searle II: What’s Wrong and Right About Searle’s Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle’s Chinese Room Argument. Oxford University Press.

Harnad, S. (2002) Darwin, Skinner, Turing and the Mind. (Inaugural Address. Hungarian Academy of Science.) Magyar Pszichologiai Szemle LVII (4) 521-528.

Harnad, S. (2002) Turing Indistinguishability and the Blind Watchmaker. In: J. Fetzer (ed.) Evolving Consciousness. Amsterdam: John Benjamins. Pp. 3-18.

Harnad, S. and Scherzer, P. (2008) First, Scale Up to the Robotic Turing Test, Then Worry About Feeling. Artificial Intelligence in Medicine 44(2): 83-89.

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer

Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.

Stimulation, Attention, Awareness

The recent findings of Watanabe and Logothetis on dissociating attention and awareness are interesting

Watanabe, M., Cheng, K., Murayama, Y., Ueno, K., Asamizuya, T., Tanaka, K. & Logothetis, N. (2011) Attention but not Awareness Modulates the BOLD Signal in Human V1 During Binocular Suppression. Science, 11 November 2011. DOI: 10.1126/science.1203161

but a more conservative interpretation might be that when there is divided stimulation and divided attention, stimulation and attention both contribute to awareness (of stimulation), with attention selectively enhancing the effects of the stimulation.

Harnad, S. (1969) The effects of fixation, attention, and report on the frequency and duration of visual disappearances. Masters thesis, McGill University.

Angels Rising? Or Tobacco-Company Apologetics?

I have only read the summaries of Steve Pinker’s new book, “The Better Angels of Our Nature,” but I wonder about the demographics on which it is based:

As the centuries go by, is violence declining proportionally or absolutely? I suspect it’s the former. The population grows Malthusianly, but as civilization progresses, the proportion of violence “tolerated” goes down. Yet at our exponential population growth rate, that still leaves it open that the absolute amount of human/human violence is still growing, daily, relentlessly — just not as fast as the human population is growing.
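The arithmetic behind this worry can be made concrete with a toy model (all numbers below are invented for illustration, not demographic data): if the population doubles each period while the per-capita rate of violence falls by a third, the absolute number of victims still grows by a third each period (2 × 2/3 = 4/3).

```python
def violence_series(pop, rate, pop_growth, rate_decline, periods):
    """Toy model: track population, per-capita violence rate, and the
    absolute number of victims over several periods. Inputs are invented."""
    series = []
    for _ in range(periods):
        series.append((pop, rate, pop * rate))
        pop *= pop_growth      # population grows each period
        rate *= rate_decline   # per-capita rate declines each period
    return series

# Population doubles; the per-capita rate falls by a third each period.
history = violence_series(pop=1_000_000, rate=0.001, pop_growth=2.0,
                          rate_decline=2 / 3, periods=5)
rates   = [r for _, r, _ in history]
victims = [v for _, _, v in history]
# rates falls monotonically, yet victims grows by a factor of 4/3 per period.
```

Both statements are then simultaneously true: the rate of violence declines steadily while the absolute number of victims climbs steadily.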

So, yes, it’s nice that the relative proportion of violence is not growing as fast as the population, but that’s just a statistic. The number of (human) sparrows felled (by humans) daily is still monstrous: bigger than it ever was, and growing. Taking solace from the fall in proportion is akin to tobacco-company thinking, it seems to me: Is Steven Pinker unwittingly falling into apologetics for the unpardonable, whether then, since, or now?

(And let’s not forget — although it’s well-hidden and sanitized — that the absolute amount of violence we are heartlessly inflicting daily on the helpless nonhuman creatures that we purpose-breed — not out of necessity: for savour, not survival — is growing just as exponentially as our own numbers.)


This absolute/relative question has obviously been put to Pinker many times:

Q: Your claim that violence has declined depends on comparing rates of violence relative to population size. But is that really a fair measure? Doesn’t a victim of violence suffer just as much regardless of what happens to other people at the time? Was the value of a life less in the 13th century than in the 21st just because there are more people around today? Should we give ourselves credit for being less violent just because there has been population growth?

But Pinker’s reply to the question is not very convincing:

R: You can think about it in a number of ways, but they all lead to the conclusion that it is the proportion, rather than the absolute number, of deaths that is relevant. First, if the population grows, so does the potential number of murderers and despots and rapists and sadists. So if the absolute number of victims of violence stays the same or even increases, while the proportion decreases, something important must have changed to allow all those extra people to grow up free of violence.

This reply provides solace to statisticians, but not to victims.

R: Second, if one focuses on absolute numbers, one ends up with moral absurdities such as these: (a) it’s better to reduce the size of a population by half and keep the rates of rape and murder the same than to reduce the rates of rape and murder by a third; (b) even if a society’s practices were static, so that its rates of war and violence don’t change, its people would be worse and worse off as the population grows, because a greater absolute number of them would suffer; (c) every child brought into the world is a moral evil, because there is a nonzero probability that he or she will be a victim of violence.

Try replacing potential “victim” by potential “perpetrator,” and add that to the fact that the absolute number of victims is still growing.

R: As I note on p. 47: “Part of the bargain of being alive is that one takes a chance at dying a premature or painful death, be it from violence, accident, or disease. So the number of people in a given time and place who enjoy full lives has to be counted as a moral good, against which we calibrate the moral bad of the number who are victims of violence. Another way of expressing this frame of mind is to ask, `If I were one of the people who were alive in a particular era, what would be the chances that I would be a victim of violence?’ [Either way, we are led to] the conclusion that in comparing the harmfulness of violence across societies, we should focus on the rate, rather than the number, of violent acts.”

If one takes the allocentric rather than the egocentric perspective on this, a declining proportion of suffering does not compensate for a growing amount of suffering unless we give my potential pleasure more weight than your actual pain.

Q: What about all the chickens in factory farms?

R: I discuss the chickens in a section on Animal Rights in chapter 7, pp. 469–473.

Well, I guess that settles that, insofar as concerns any potential complaints from the chickens (whose unlucky numbers are not only growing absolutely but whose proportions are not even declining relatively, like those of the lucky human survivors): So let them take solace in humanity’s increasingly angelic nature the same way the growing number of absolute human victims do. (A sentiment reminiscent of Marie-Antoinette — or perhaps a moral-credit Ponzi Plan, in which we amortize the increasing number of victims of human violence by increasing the total human population even faster…)


Yet, all that said, I too think there’s hope: but it will only begin to be realized when it is the absolute number of victims (human and nonhuman) that begins to decline — and not just the proportion. And, yes, reducing rather than increasing our own absolute numbers might not be a bad step in that direction…

The Harmonic Spectrum and Mirror Awe

Re: “Physicists in tune with neurons”

My guess is that you could predict consonance/dissonance without recording neuronal activity. It’s already in the physics: consonant sounds share more harmonics, bottom up. You could measure that without neurons, just a device that can detect differences in the harmonic spectrum. (And it would be trivial to make neural devices mirror the same property.)
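That bottom-up physical overlap is easy to compute with no neurons at all. The sketch below is a simplification (pure harmonic series, a made-up 1% coincidence tolerance, invented fundamental frequencies): it counts how many harmonics of two tones line up. A consonant interval like the perfect fifth (3:2) shares several harmonics; a dissonant tritone (√2:1) shares essentially none.

```python
def shared_harmonics(f1, f2, n_harmonics=10, tol=0.01):
    """Count pairs of harmonics of f1 and f2 that coincide within a
    relative tolerance. Idealized: pure harmonic series, arbitrary 1% tol."""
    h1 = [f1 * k for k in range(1, n_harmonics + 1)]
    h2 = [f2 * k for k in range(1, n_harmonics + 1)]
    return sum(1 for a in h1 for b in h2 if abs(a - b) <= tol * a)

fifth   = shared_harmonics(220.0, 330.0)           # perfect fifth, 3:2
tritone = shared_harmonics(220.0, 220.0 * 2**0.5)  # tritone, ~1.414:1
# The fifth's harmonics coincide at 660, 1320, 1980 Hz; the tritone's do not.
```

A device running nothing more than this comparison would already rank the fifth as more consonant than the tritone, which is the point: the ordering is in the physics of the signal before any neural recording enters the picture.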

Besides, consonant/dissonant does not correspond to aesthetically “pleasant/unpleasant” (and the right aesthetic adjective is not quite the word “pleasant” anyway): Some of the most excruciatingly beautiful harmonic moments are dissonant ones. (It has more to do with the drawing out or manipulation of expectation in the passage from dissonant to consonant — but that too is a trivialization…)

(As happens so often: take an absolutely trivial empirical correlation, and make one of its correlates our own precious brain activity, and people are almost superstitiously ready to marvel, the same way they do at their own horoscopes, when they seem to fit…)

And, of course, having detected the physical difference, you’re left with the usual (hard) problem, which is not why one feels pleasant and the other not, but why any of it feels like anything at all…

The Best of the Worst

Yes, the commentator in the Khodorkovsky movie who said “Kh was the best of the worst” (the worst being all the oligarchs, including Kh) seems to have captured the essence of the puzzle.

There is no question that Kh’s enormous business success was due in part to the government selling him public assets at a low price (partly to keep them in Russian hands, partly because of insider wheeling and dealing and self-interest). There were no doubt dirty tricks and gangsterism on both sides (oligarchs and government) along with collusion. There is also little doubt who the worst of the worst was and is (VVP). 

How did Kh become the best of the worst? It looks as if his motives for acquiring wealth never came from those lowest depths of sociopathic cupidity that drove so many others; his motives seem to have been more technical than materialistic: it was a skill he was obsessed with developing. There may even have been some self-serving belief in its “trickle-down” benefits for the rest of the world too. But he clearly had a first round of remorse and rethinking that led to his support of the political opposition to Putin (possibly because of conflicts and conflicts of interest with Putin), and this is what led to his arrest (by which time he had already developed a sense of fatalism, if not martyrdom; probably his wealth and influence also gave him some illusion of immunity, so far only partly confirmed).

But what about now? In prison, having lost (almost) all, he had a second round of second-thoughts about wealth acquisition, and he seems to think he is now fighting for a principle (though it is not at all evident what that principle is).

Probably Kh would have made (and might still make) a better president than Putin. But that just means the best of the worst would be better than the worst of the worst.

Human character is capable of remorse and reform, but I think Russia’s chances would be better in the hands of Politkovskaya (compassionate, intelligent, funny, and equally obsessive and fatalistic —  surely closer to the best of the best) if the worst of the worst (or some of his competitors) had not already done their worst with her.

Making Sense of Sensing

“Wouldn’t it short-circuit all these discussions if you just came out and said that this is how you use the word “Feeling”, that is, to mean any conscious notion or awareness whatever, even if it is not a sensation like taste or pain or fear?  You say “feeling” is a nice honest word, while words like “awareness” and “conscious” are weasel words.  But since a lot of us cannot agree that wondering idly whether it will rain next Tuesday is a feeling, then when you say it is because it just has to be, good old honest-yeoman uncorrupt “feeling” slips into weaseldom, or at least mush, just as all the other words do.  

“Perhaps Hofstadter is right: because these words refer to states we cannot point to or compare, words grounded (in your term) only in private experience, then we are simply clashing by night. We don’t really know what each other means by any of them.  I will swear that I can know I am thinking about next Tuesday, or the square root of twelve, and can tell the difference between these notions, but it is all done separate from sensation of any kind.

“I repeat, why CAN’T the brain deliver information to one’s awareness by at least one other avenue than feelings?  To insist that it cannot makes your denial cease to be an empirical statement and become a definition of “feeling”.”

Very good challenge, and I’m happy to try to rise to the occasion!

The brain not only can but does “deliver information” without its being felt. Not only delivers information, but gets things done. 

It does nocturnal deliveries while we’re asleep, of course, but it also does a lot while we’re awake (keeps my heart beating, keeps me upright, and, most important, delivers answers to my (felt) questions served on a platter (“what was that person’s name?”, “where am I going?”, “what word should I say next?”) without my feeling any of the work that went into it.

These are things we do, and feel we do (“find” the name, “recall” where I’m going, “decide” what to say next), but we are clueless about their provenance: We have no idea how we do them. Our brain does them, and then “delivers” the result.

Some of this delivery is delivery of know-how (riding a bike, speaking) and some of it is of know-that (facts, or putative facts). 

We are the “recipients” of the delivery, and the question is, how does our brain do it?

But these are the “easy” questions: Cognitive neuroscience will eventually tell us how our brain does and “delivers” all these things for us.

But that’s not the hard part. The hard part is explaining why and how it feels like something to be the “recipient” of these “deliveries.” If the result of the deliveries were merely doings and sayings, there would be no issue, because there would be nothing mental; it would all just be mechanical, neurosomatic dynamics. 

Now, you are sort of forcing me to do some phenomenology here — something I’m neither particularly good at, nor set great store by, but here goes:

Am I just linguistically legislating that having received a “delivery,” [say, the “information,” X, that it’s Tuesday today] from their brain, what people mean by “I am aware of X” has to be “It feels as if X is the case”?

Or, worse, am I presumptuously denying what is not only other people’s private privilege but (by my own lights) certain and incorrigible, when I say that people are wrong when they insist it doesn’t feel like anything to know it’s Tuesday? Wrong to just settle for saying they just know it, it’s one of those pieces of “information delivered” by their brain, and that’s all there is to it?

That would be fine, it seems to me, if the “delivery” were taking place while you were asleep or anesthetized or comatose. 

But it seems to me (and here I am doing some amateur phenomenology) that the difference between being (dreamlessly) asleep and being awake is that it feels like something to be awake and it does not feel like anything to be dreamlessly asleep.

“Information” “delivered” and even “executed” by my brain while I am asleep is also being served on a platter, just as it’s served on a platter when I’m awake: I’m just not feeling anything the while.

So far you will say you could have substituted “not aware of (a ‘delivery’)” for “not feeling (a ‘delivery’)” and covered the same territory without being committed to its having to feel like something to be aware of something.

But I can only ask, what does it mean to be awake and aware of something if it does not feel like something to be awake and aware of something?

If you reply “It feels like something to be aware of something, but only in the sense that it feels like something while I’m being aware of something, because I happen to be awake, and being awake feels like something” — then I will have to reply that you are losing me, when you say that it feels like something while you receive the “delivery” but that that something it feels like is not what it feels like to receive the delivery!

Yes, our language about this is getting somewhat complicated, so let me remind you that, yes, our difference could be merely terminological here, for much the same reason that (if I remember correctly) you had objected, years ago, to my insistence that seeing, too, is feeling. 

I think you said that feeling tired is feeling, or feeling anger is feeling, and even feeling a rough surface is feeling, but seeing red is not feeling, it’s seeing. And the way I tried to convey what I meant by “feel” was to point out that you too would agree (and you did) that it feels like something (rather than nothing) to see red. And it feels like something different to see green, or to hear middle C or to smell a rose.

I think I even said that it was just our language — which says I am feeling a headache or I am feeling cold or I am feeling a rough surface, yet not “I am feeling red” but rather “I am seeing red,” and not “I am feeling the perfume” (if we don’t mean palpating it but sniffing it) but “I am smelling the perfume” — is fooling us a bit, when we conclude from our wording that seeing is not feeling. 

I think I even mentioned French, in which both feeling and smelling are (literally): “je sens la douleur”, “je sens le parfum,” as is palpating (“je sens la surface”), whereas, as in English, seeing and hearing have verbs of their own.

There is in the French the residue of the Latin “sentio” — to feel — that still exists in English, but as a sort of ambiguous false-friend, “I sense,” which means more “I intuit” or “I pick up on” than “I feel.” But I would say the same thing about sensing: If I sense something, be it sensory, affective, tactual, thermal, cognitive, or intuitive, then it feels like something to be sensing it, and would feel like something else to be sensing something else, as surely as it feels like something to be seeing red and would feel like something else to see something else.

And not just because I happen to be awake while my brain “delivers” the “information”!

So if I am sensing that it’s Wednesday today, then that feels like something, and feels like something different from sensing that it’s Tuesday today as surely (but perhaps not as intensely) as seeing red feels different from seeing blue.  

To put it another way, the result of the “delivery” is not just my “speaking in tongues.” It feels like something not only to say (or think) the words “It’s Wednesday today” but to mean them. And it feels like something else not only to say (or think) but to mean (or understand) something else.

Entropy

Sociopaths, sadists, zealots and lunatics there have always been. But technology has now empowered them to do harm far beyond their numbers: The “normal” distribution is becoming a hostage, perhaps irretrievably, to a reign of terror from its tail-end.

Comments on Doug Hofstadter’s “I Am A Strange Loop”

(1) Is feeling/nonfeeling an all-or-none distinction? 

The answer is most definitely yes. (But the question is not about whether I’m feeling this or that, nor about whether I am feeling more or less. It is about whether I am feeling at all. I can feel a little tired, say, half-tired, but I can’t half-feel — any more than I can half-move [or one can be a little bit pregnant].)

(2) Is believing a feeling (and if so, what’s my evidence that that’s true)?

The answer is most definitely yes, and the evidence is of precisely the same kind as the evidence that seeing — or hearing or smelling or hurting — is feeling. There’s something it feels like to smell roses, and when you’re smelling carnations — or onions — it feels different. In exactly the same way (but more subtly), there’s something it feels like to be believing that it’s Tuesday today, and something different it feels like to be believing it’s Wednesday (and not just the sound of the words it takes to say one or the other). Every JND (just-noticeable difference) in mental space feels different. That’s what makes mental states mental, and how we tell different mental states apart: Otherwise I wouldn’t know whether or not I was believing it’s Tuesday any more than I would know whether or not I was in pain. (Knowing is feeling too!)

Aside: None of this has anything to do with Zombies (and I have next to nothing to do with or say about Zombies). But just for the sake of logical coherence: A zombie would be a lookalike that behaved and talked indistinguishably from us, but did not feel. It could not be believing it felt, because believing is feeling! It would merely be behaving (and speaking) exactly as if it were feeling (and believing, and believing it was feeling). 

I consider such a possibility so far-fetched and arbitrary as to be absurd, so I never base any argument on the possibility that there could be such a thing. 

However, I do point out that we can no more explain how and why there could not be Zombies than we can explain how or why we feel (the “hard problem“). Zombies are absurd because all the evidence is against them: All the entities that behave as if they feel are in fact, like us, biological organisms that feel. We don’t know how or why we all  feel, but we do know that we invariably do. The speculation that this invariance could be broken — with entities acting exactly as if they felt, but not feeling a thing — is as far-fetched as imagining a universe in which apples fell up rather than down, or the 2nd law of thermodynamics was the reverse. Not only can nothing interesting, one way or the other, be derived from such idle suppositions, but — and this is most important —  even the correct supposition that Zombies are impossible does not do anything whatsoever toward solving the hard problem (of explaining how and why they are impossible, which is equivalent to explaining how and why we feel, rather than just do).

The statement that “believing is feeling” is no less supported, I should think, than “hurting is feeling”: I can’t do much more than ostension and appealing to what I am pretty confident is our fundamentally similar mental lives in either case. (I did make a bit of a supporting argument about JNDs just now. The gist is that the only thing that distinguishes mental states is that they feel different: otherwise what makes them not the same mental state? The fact that they may be followed by different behavioral dispositions won’t do the trick, because the states are now, not later, so later divergence in behavioral dispositions still doesn’t distinguish the mental states now, when I’m having them. My knowledge that I believe it’s Tuesday today and that I don’t believe it’s Wednesday cannot come from what I am inclined to do later — unless, of course, it feels different to be inclined to do this rather than that — which would be fine with me; that still leaves the difference between beliefs as a difference in what they feel like…)

Excerpts from Doug Hofstadter’s “I Am a Strange Loop“:

Semantic Quibbling in Universe Z

There is one last matter I wish to deal with, and that has to do with Dave Chalmers’ famous zombie twin in Universe Z.  Recall that this Dave sincerely believes what it is saying when it claims that it enjoys ice cream and purple flowers, but it is in fact telling falsities, since it enjoys nothing at all, since it feels nothing at all — no more than the gears in a Ferris wheel feel something as they mesh and churn.

I completely agree that this is incoherent — simply because believing is feeling. What Chalmers should have said is that the Zombie behaves and talks exactly as if he was feeling (including believing, and believing that he was feeling) but in fact he was feeling (and hence believing) nothing.

Well, what bothers me here is the uncritical willingness to say that this utterly feelingless Dave believes certain things, and that it even believes them sincerely.  Isn’t sincere belief a variety of feeling?  Do the gears in a Ferris wheel sincerely believe anything?  I would hope you would say no.  Does the float-ball in a flush toilet sincerely believe anything?  Once again, I would hope you would say no.

I feel sincerely in agreement, and would add only that it is not only a sincere or passionate belief that is felt, but also a phlegmatic, quotidian belief, such as that it’s Tuesday today.

And of course all those mechanical devices don’t feel.

And of course talk of Zombies that are like us on the outside and like the Ferris wheel on the inside is nonsense.

So suppose we backed off on the sincerity bit, and merely said that Universe Z’s Dave believes the falsities that it is uttering about its enjoyment of this and that.  Well, once again, could it not be argued that belief is a kind of feeling?  I’m not going to make the argument here, because that’s not my point.  My point is that, like so many distinctions in this complex world of ours, the apparent distinction between phenomena that do involve feelings and phenomena that do not is anything but black and white.

I would and do argue the point that believing is feeling.

But I completely deny the point that the difference between feeling and non-feeling is a matter of degree! It’s all or none. 

The quality and intensity of the feeling may differ (the latter in degree), but whether there is feeling going on at all is not a matter of degree (though feeling may be flickering, intermittently on/off). In particular, there is nothing (except degrees of doing-power) in between a Ferris wheel, which feels nothing, and, say, an amphioxus which, even if all it can feel is “ouch,” is fully one of us sentients.

(I also think that near-threshold phenomenology and psychophysics — did I feel something or didn’t I? — is irrelevant to all this, but if one insists on citing it: Feeling is instantaneous. In the instant, you feel what you feel (if you are awake and sentient at all). If the source is a stimulus, it is irrelevant that you are uncertain near-threshold: you are not uncertain about what you felt. You felt whatever you felt. You are uncertain only about whether what you felt was the stimulation you were supposed to be detecting — whether it was external (a near-threshold “beep”) or endogenous (did I just feel the aura of an impending migraine?).)

If I asked you to write down a list of terms that slide gradually from fully emotional and sentient to fully emotionless and unsentient, I think you could probably quite easily do so.

Not me. I could rank intensity, maybe even quality, by degrees, but not whether a feeling is felt! That’s an all-or-none divide, and on the other side of it is not an unfelt feeling but nothing but unfelt doing (a Ferris wheel). Again, near-threshold judgments about a particular external or internal stimulus by a feeling person are irrelevant here. They are feeling; we are just fussing over what they are feeling, not over whether they are feeling at all: that’s an all-or-none matter.

In fact, letÂčs give it a quick try right here.  Here are a few verbs that come to my mind, listed roughly in descending order of emotionality and sentience:  agonize, exult, suffer, enjoy, desire, listen, hear, taste, perceive, notice, consider, reason, argue, claim, believe, remember, forget, know, calculate, utter, register, react, bounce, turn, move, stop.

If I’m awake, doing every one of those things feels like something — agonizing as much as tasting or considering or knowing; only quality and intensity differs. 

And of course that includes moving (if it is voluntary and I am not anesthetized).

I won’t claim that my extremely short list of verbs is impeccably ordered; I simply threw it together in an attempt to show that there is unquestionably a spectrum, a set of shades of gray, concerning words that do and that do not suggest the presence of feelings behind the scenes.

There are spectra of feeling quality and feeling quantity, but an all-or-none divide between feeling and nonfeeling. No continuum from me to the Ferris wheel (except doing). And that’s the [hard] problem: doings: easy; feelings: hard…

The tricky question then is:  Which of these verbs (and comparable adjectives, adverbs, nouns, pronouns, etc.) would we be willing to apply to Dave’s zombie twin in Universe Z?  Is there some precise cutoff line beyond which certain words are disallowed?  Who would determine that cutoff line?

No tricks at all. If there could be a Zombie, it would have to be feeling nothing at all, just doing, not feeling. But supposing that an unfeeling ramified Ferris Wheel could be doing what we are doing now — namely, discussing feeling, mutually intelligibly — is pure fantasy.

To put this in perspective, consider the criteria that we effortlessly apply (I first wrote “unconsciously”, but then I thought that that was a strange word choice, in these circumstances!) when we watch the antics of the humanoid robots R2-D2 and C-3PO in Star Wars.  When one of them acts fearful and tries to flee in what strike us as appropriate circumstances, are we not justified in applying the adjective “frightened”?  

I think most people’s intuitions about cinematic robots are incoherent. They do and don’t believe that they feel. Nothing hangs on such incoherent notions. Here’s the real test: If the robot were real, would they feel compunctions about kicking it? (I think they would, if the robot was sufficiently like us — just as they are with animals. Below, Doug seems to agree too.)

Here’s a piece — not much longer than this excerpt from Doug’s book — addressing this very issue. Punchline: you get out of a fictional robot whatever the author purports to put into it. If it is decreed, however incoherently, that the robot behaves just as if it feels but it doesn’t, then so be it. If it is decreed (as in the Spielberg movie) that it does feel, well then it does. Same for decrees that it flies, it can read minds, it can see into the future, it can change the past, it can redesign the universe, square circles, disprove Goedel’s theorem — in fiction, anything goes…

Harnad, S. (2001) Spielberg’s AI: Another Cuddly No-Brainer.  

Or would we need to have obtained some kind of word-usage permit in advance, granted only when the universe that forms the backdrop to the actions in question is a universe imbued with élan mental?  And how is this “scientific” fact about a universe to be determined?

No word-usage-permits for “feeling”: In fiction, go with the flow. In the real world, your mind-reading instincts (along with common sense and the invariant correlation of feeling with organism-like doings) will be your guide, whether you like it or not. (And, of course, you can’t be 100% sure in any case but your own.)

“Science” has nothing to do with it — except maybe if you’re wondering about someone in a coma…

And feeling itself is the élan mental — the trouble is, we don’t know how and why it happens (and, by my lights, we never will, because of limits on the power of causal explanation in any but a counterfactual psychokinetic universe, where feeling really is a causal “force” — but that’s not our universe).

If viewers of a space-adventure movie were “scientifically” informed at the movie’s start that the saga to follow takes place in a universe completely unlike ours — namely, in a universe without a drop of élan mental — would they then watch with utter indifference as some cute-looking robot, rather like R2-D2 or C-3PO (take your pick), got hacked into little tiny pieces by a larger robot?

Of course not: Fiction can dictate our premises, but not our conclusions…

Would parents tell their sobbing children, “Hush now, don’t you bawl!  That silly robot wasn’t alive!  The makers of the movie told us at the start that the universe where it lived doesn’t have creatures with feelings! Not one!”  What’s the difference between being alive and living?  And more importantly, what merits being sobbed over?

You’re asking moral questions, and you’re right to. It is only the existence of feeling that makes morality matter at all. And of course we alas have many psychopathic tendencies, not to mention sadistic ones. I don’t know if it’s parents or experiences or genes that cause some people to be indifferent to or even to enjoy pain in others, but it happens.

But none of this affective evocativeness changes the basic facts: Whether or not an entity feels is all-or-none.

And all mental states (including believing) are felt states: that’s what makes them “mental.” Otherwise they’d just be states, tout court, as in a ferris wheel or a float-ball in a flush toilet…

Functional Explanation is Causal Explanation (Reply to Antonio Chella & Riccardo Manzotti)

Antonio Chella & Riccardo Manzotti suggest that since we know that feeling exists, any explanation that cannot account for it is inadequate. They also suggest that there is a difference between functional explanation and causal explanation, illustrating the difference with examples from physics. Functional explanation may not explain feeling, but causal explanation may succeed, perhaps partly by scrapping the distinction between states that are internal and external to the brain:

CHELLA & MANZOTTI:since the fact that we feel is an empirical[ly] undeniable fact albeit from a first-person perspective, we should argue against any view that does not predict such possibility.

Except if no causal theory can explain feeling — in which case we are better off with one that can at least explain doing than with no explanation at all.

CHELLA & MANZOTTI:If feeling [does] not fit into the functional description of reality, so much the worse for functionalism.

So much the worse for any causal explanation. The Turing Robot is “merely” indistinguishable from us in performance capacity, but the Turing biorobot also has equivalent internal processes and states, even if synthetic ones. That’s still normal causal explanation, and remains so even if the biodynamics are natural rather than synthetic.

In other words, there is no wedge to be driven between “functional” explanation and “causal” explanation: All dynamical explanations of feeling are equally ineffectual, for the same reasons: There is neither any causal room for feeling, nor is there any causal need for it.

CHELLA & MANZOTTI:we purposefully shifted from a causal description to a functional one

But unfortunately it is a distinction that marks nothing substantive, and does not solve the “hard” problem of explaining how and why we feel.

CHELLA & MANZOTTI:the equations for gravity and electromagnetism have the same form… The two cases are functionally identical. Yet, they are different both in causal and in physical terms since the physical properties (or powers) which are responsible for the two situations are very different (on one hand, mass and gravity and, on the other hand, electric charge and electromagnetic force)

The equations are equivalent at one level of description, but they are not a complete description. Both mass and charge are measurable, describable, predictable physical properties — unlike feelings, which certainly exist, but do not otherwise enter into the causal matrix.

CHELLA & MANZOTTI:What is still missing is a theory outlining a conceptual and causal connection between neural activity and phenomenal experience and functionalism does not seem to possess the resources to do it.

Nor does any other causal theory.

CHELLA & MANZOTTI:[In] Harnad’s conception internal and external refer to physical events internal or external to the brain as if the brain boundaries were some kind of relevant threshold

Yes, mental states (feelings) — for which I recommend a migraine headache as a paradigmatic example — occur in the head, not outside it. Both doings and their functional substrate can be distributed beyond the bounds of a head, but feelings (until further notice) cannot…

For a critique of the notion of the “extended mind,” see:

Dror, I. and Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror, I. and Harnad, S. (Eds) (2009): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins

CHELLA & MANZOTTI:assuming that the mind is indeed internal to anything may be a misleading

It is misleading to mix up “in the head” with “in the mind.” But “mind” is a weasel word. To have a mind is to feel. And there is no reason to believe that a headache can be wider than a head…