Talking About Feeling: Summary of Forum

In my little essay I tried to recast the problem of consciousness — the “mind/body problem” — as the problem of explaining how and why we feel rather than just do.

It was not meant as a terminological exercise. The usual way we talk about consciousness and mental states uses weasel-words (“conscious,” “mental,” “experience”) that are systematically ambiguous about whether we are just talking about access to data (an easy problem, already solved in principle by computation, which is simply an instance of doing) or about felt access to data (the hard part being to explain not just the doing but the feeling).

Nor was it meant as a metaphysical exercise: The problem is not one of “existence” (feeling indubitably exists) but of explanation: How? Why?

The commentaries were a fair sample, though a small one, of the issues and the kinds of views thinkers have on them today. A much fuller inventory will be presented at the 2012 Summer School on the Evolution and Function of Consciousness in Montreal in June/July of next year. Think of this small series of exchanges in the On the Human Forum as an overture to that fuller opus.

I have already responded in detail individually to each of the 10 commentators (15 commentaries) so I will just summarize the gist here:

Judith Economos rightly insists, as the only one with privileged access to what’s going on in her mind, that it is not true that she feels everything of which she is conscious: Some of it — the part that is not sensory or emotional — she simply knows, though it doesn’t feel like anything to know it. I reply (predictably) that “know,” too, is a weasel-word, ambiguous as between felt and unfelt access to data. So if one is awake (conscious) whilst one is knowing, one is presumably feeling something. One is also, presumably, feeling something whilst one is not-knowing something, or knowing something else. If all three of those states feel identical, how does one know the difference? For if “knowing” just refers to having data, then it is just a matter of know-how (doing), which is already explained (potentially) by computation, and has nothing to do with consciousness.

Galen Strawson seems to agree with me on the distinction, but prefers “experience” (“with qualitative character”) to “feeling.” Fine — but “experience” alone is ambiguous; and trailing the phrase “with qualitative character” after it seems a bit burdensome to convey what “feel” does in one natural, intuitive, monosyllabic swoop. The substantive disagreement with Galen is about the coherence and explanatory value of “panpsychism” (i.e., the metaphysical hypothesis that feeling, or the potential to feel, is a latent and ubiquitous property of the entire universe) as a solution to the hard problem. The existence of feeling is not in doubt. But calling it a fundamental take-it-or-leave-it basic property of the universe does not explain it; it’s just a metaphysical excuse for the absence of an explanation!

Shimon Edelman is more optimistic about an explanation because there are computational and dynamic ways to “mirror” every discriminable difference (JND) in a system’s input in differences in its internal representations. This would certainly account for every JND a system can discriminate; but discrimination is doing: The question of how and why the doing is felt is left untouched.

David Rosenthal interprets the experimental evidence for “unconscious perception” as evidence for “unconscious feeling,” but, to me, that would be the same thing as “unfelt feeling”, which makes no sense. So if it’s not feeling, what is unconscious “perception”? It is unconscious detection and discrimination — in other words, internal data-doings and dispositions that are unproblematic because they are unfelt (the easy problem). If all of our know-how were like that, we’d all be Zombies and there would be no hard problem. David needs unconscious perception to be able to move on to higher-order consciousness (but that is, of course, merely higher-order access — the easy part, until/unless feeling itself is first explained). So this seems like recourse to either a bootstrap or a skyhook.

John Campbell points out that sensorimotor grounding is not enough to explain meaning unless the sensing is felt, and I agree. But he does not explain how or why sensorimotor grounding is felt.

Anil Seth reminds us that many had thought that there was a “hard problem” with explaining life, too, and that that turned out to be wrong. So there’s no reason not to expect that feeling will eventually be explained too. The trouble is that apart from the observable properties of living things (“doings”) there was never anything else that vitalists could point to in order to justify their hunch that life was inexplicable unless one posited an “élan vital.” Modern molecular biology has since shown that all the observable properties of life could be explained, without remainder, after all. But in the case of feeling there is a property to point to — observable only to the feeler, but as sure as anything can be — that the full explanation of the observable doings leaves out and hence cannot account for. (Perhaps feeling is the property that the vitalists had in mind all along.)

The remaining commentaries seem to be based on misunderstandings:

Bernard Baars took “Turing Robot” to refer to “Turing Machine.” It does not. A Turing Machine is just a formalization of computation. The internal mechanism of a Turing Robot can be computational or dynamical (i.e., any physical process at all, including neurobiological).

Krisztian Gabris thinks feelings are needed to “motivate” us to do what needs to be done. That’s certainly what it feels like to us. But on the face of it, the only thing that’s needed is a disposition to do what needs to be done. That’s just know-how and doing, already evident in toy robots and toasters. How and why it (sometimes) feels like something to have a disposition to do something remains unexplained.

Joel Marks assumed that the Turing Robot would be an unfeeling Zombie. This is not necessarily true. (I think it would feel — it’s just that we won’t be able to know whether it feels; and even if it does feel, we will be unable to explain how or why.) Hence Joel’s question about whether it would be wrong to create a robot that feared death is equivocal: By definition, if it’s a Zombie, it cannot fear, it can only act as if it feared. (Witnessing that may make us feel bad, but the Zombie — if there can be Zombies — would feel nothing at all.) And if the Turing Robot feels, it’s as important to protect it from hurt as it is to protect any other feeling creature from hurt.

A Fifth Force: But An Acausal One… (Reply to Galen Strawson-2)


Galen Strawson does a brilliant, heroic job with panpsychism:

The only thing we know for sure — indeed, with a Cartesian certainty that is as apodictic as the logical necessity of mathematics — is that and what we feel.

Everything else we know (or believe we know), we likewise know “through” feeling — in that it feels like something to learn it and it feels like something to know it.

(It feels like something to make an “empirical” observation. It feels like something to understand that something is the case. It feels like something to understand an inference or a causal explanation.)

So feeling is certain, whereas physics (“doing,” in my parlance) is not certain.

But we are realists, trying to do the best we can to explain reality — not extreme sceptics, doubting everything that is not absolutely certain, even if it’s highly probable.

We are just looking for truth, not necessarily certainty.

“Experience” is a weasel-word because it can mean either feeling something — which is highly problematic (the “hard problem”) — or it can just mean acquiring empirical data (as in: “this machine had the solution built in, that machine learned it from experience”) — which is unproblematic (doing, the “easy” problem).

So whereas it is true that the only thing we know for sure (besides the things that are necessarily true on pain of contradiction) is that feeling exists, neither everyday life nor science requires certainty. High probability on the evidence (data) will do.

And although it is true that all evidence is felt evidence, it is only the fact that it is felt that is certain. The evidence itself (doing) is only probable.

In other words, although they always accompany the data-acquisition (doing), the feelings are fallible. We feel things that are both true and untrue about the world, and the only way to test them out is via doings. It is true that the data from those doings are also felt. But the felt data are answerable to the doings, and not to the fact that they are felt.

And not only are our feelings fallible, as regards the truth: they also seem to be causally superfluous. Doings (including data-acquisition) alone are enough, for evolution, as well as for learning. Some doings are undeniably felt, but the question is: how and why?

When we are doing physics (or chemistry, or biology, or engineering) and causal explanation (rather than metaphysics), we have to explain the facts, amongst which one fact — the fact that we feel — seems pretty refractory to any sort of explanation except if we suppose that feeling is simply a basic property of the universe (whether local to the organisms in the earth’s biosphere [Galen’s “micropsychism”] or somehow smeared all over the universe [“panpsychism”].)

There’s no doubt that feeling exists, so in that sense feeling is indeed a property of the universe. But with all other properties — doings, all — we have become accustomed to being able (in practice, or at least in principle) to give a causal explanation of them in terms of the four fundamental forces (electromagnetism, gravitation, strong subatomic, weak subatomic). Those forces themselves we accept as given: properties of the universe such as it is, for which no further explanation is possible.

Galen’s metaphysics would require adding something like a fifth member to this fundamental quartet — feeling — with the difference that, unlike the others, it is not an independent force, it does not itself cause and thereby explain doings causally, but rather is merely correlated with them, inexplicably, for some doings.

And our justification for adding a fifth acausal force? The fact that it is inexplicably (but truly) correlated with some doings (all doings that we feel). If feeling had truly been a fifth force (causal rather than acausal), namely, “psychokinesis” (“mind over matter”), then that would indeed have merited elevating it to fundamental status, exempt from further explanation along with the other four.

But there is not a shred of evidence for psychokinesis as a causal force (and all attempts to measure psychokinesis have failed, because the other four forces already covered all the causal territory — doing — with no remainder and no further room for causal intervention).

So all we have, inexplicably, is the fact that we feel. I don’t think that that fact warrants any further metaphysics than that: feeling definitely exists — and, unlike anything else, exists with certainty rather than just probably. It also happens to feel like something to find out and understand anything we know. The rest is an epistemic problem: why and how does getting or having data feel like something (for feeling creatures like us)?

Neither “micropsychism” nor “panpsychism” answers this question. They just take it for granted that it is so.

Home Truths About Doing, Feeling, Explaining and Robots (Reply to Shikha Singh)


Doings are observable by anyone (via senses or senses plus measuring instruments).

Feelings are observable only to their feeler.

The only feelings a feeler can feel are his own.

That other people and animals feel is a safe guess, because they are related to and resemble us.

That today’s man-made robots feel is as unlikely as that a toaster or stone feels.

That a robot whose doings are Turing indistinguishable from the rest of us for a lifetime would feel would be almost as safe a guess as that other people and animals feel. (Perhaps a biorobot would be an even safer guess).

A robot is just an autonomous causal system that can do some things that people and animals can do.

Cognitive science is about discovering the causal mechanism that generates our capacity to do what we can do. (We can think of it as discovering what kind of robots we are.)

No one but the Turing robot can know whether its causal mechanism does generate feeling.

And even if it does, not even the Turing robot can explain or know how or why.

Why a disposition to feel and then to do — rather than just a direct disposition to do?

(Reply to Krisztian Gabris)

KG: Take the pain example… what would happen if for some reason… a decision is made which goes against the evolutionarily ingrained rules of the system. For example, a hand is left in the fire… What would be the punishment of such behavior in a Turing robot (other than tissue damage)? Nothing, the robot would go on its own business with signals and internal warnings, but it would not feel the pain. Whereas a human would… feel pain, and would take away the hand… not only because of [genetic] programming, but because of… feeling pain.

Yours is the natural intuitive explanation for why we feel — the one that feels right. “Why,” after all, is a causal question: Why do we pull our hand out of the fire? Yes, fire causes tissue damage, but that’s not what makes us withdraw our hand (unless we are anaesthetized): It’s because it hurts!

So surely that’s what pain’s for: To signal tissue damage by causing pain to be felt.

Why? So you’ll withdraw your hand. Because if your ancestors had been indifferent to tissue damage, they would not have had surviving descendants.

So you withdraw your hand because it hurts. And it hurts in order to cause you to feel like withdrawing your hand — and therefore you withdraw your hand.

Injury → pain → withdraw hand.

And the reason the feeling of pain evolved is that those of our ancestors who felt pain were more likely to feel like withdrawing their hands than those who did not.

But let us note that what was needed, for survival, was to withdraw the injured hand — an act, not a sentiment. The pain was a means, not an end. It’s an extra step; and, as I will try to illustrate with other examples, a superfluous extra step, practically speaking. So the hard problem is to explain how and why this extra, apparently superfluous step evolved at all.

Suppose that what you had chosen for your evolutionary example of the adaptive trait for “motivational” scrutiny had been — rather than the withdrawing of the injured hand — the growing of wings, or the beating of the heart or the dilating of the pupil of the eye.

You’ll perhaps find it strange to ask about feeling the “motivation” to grow wings (though it’s a reasonable question), because growing is not something we ordinarily think of ourselves as “doing.” But note that the very same question you asked about the evolution of pain — and the “punishment” for non-withdrawal of the injured hand if no one feels the “motivation” to withdraw it — applies to the non-growth of wings. And the answer is the same:

If we are talking about evolution — which means traits that increase the likelihood of survival and reproduction — then for both the disposition to grow wings and the disposition to withdraw the hand from injury the “reward” is increased likelihood of survival and reproduction; and for both the lack of the disposition to grow wings and the lack of the disposition to withdraw the hand from injury the “punishment” is decreased likelihood of survival and reproduction.

The very same evolutionary reward/punishment scenario also applies to the disposition of our hearts to beat, which is even more obviously something that our bodies do — or, if you want an example of something we do in response to a circumstantial stimulus rather than constantly, there’s pupillary dilation to light intensity.

Or, if you want something we do voluntarily rather than involuntarily — although that’s begging the question, because it is really the involuntary/voluntary distinction that poses the “hard” problem and calls for explanation — consider the implicit improvement in skills that occurs without any sense of having done anything deliberately (sometimes even without the feeling that we have improved) in implicit learning, or the changes in our dispositions caused by subtle Pavlovian conditioning or Skinnerian reinforcement when we don’t even feel that our dispositions are changing, or the voluntary take-over of breathing — usually involuntary, like the heart-beat.

And a disposition is a disposition to do, whether it’s to grow, to beat, to dilate, to withdraw, to salivate, to smile or to breathe. So the question remains: Why the extra intermediate step of feeling, when the reward and punishment come from the disposition to do?

The very same reasoning applies to learning itself: We learn to do things — such as what to eat and what to avoid — by trial and error and reward/punishment. The consequences of doing the right thing feel good and the consequences of doing the wrong thing feel bad, so we learn to do the right thing. “Motivation” again. But again, it is the disposition to do the right thing that matters; the feeling of reward and punishment is an extra. Why? Both in evolution and in learning there are consequences (enhanced survival and reproduction in the case of evolution, and enhanced functioning and performance in the case of learning: eating nourishing things gives us energy, eating toxic things makes us sick) and the consequences are sufficient to guide our dispositions to do. But why is any of that felt rather than just done?

These questions are hard not only because of the underlying problem of causality, but because our intuitions keep telling us that it’s obvious that we need to feel. Yet the causal role of feeling is anything but obvious, if looked at objectively, which means functionally.

You assumed that a Turing robot would not feel. That’s not at all sure. But let’s consider today’s rudimentary robots, which are as unlikely to feel as a toaster or a stone. Yet even they can already be designed to withdraw damaged limbs, or to learn to withdraw damaged limbs. They need sensors, of course, but it’s not at all clear why they would need feelings (even if we had the slightest clue of how to design feelings!), if the objective is to do — or to learn to do — what needs to be done in order to survive and function. They need to detect tissue damage, and then they need to be disposed to do — or disposed to learn to do — whatever needs to be done.
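To make the point concrete, here is a minimal sketch (purely illustrative; the robot, its sensor and its threshold are all hypothetical) of detection plus a disposition to withdraw: doing, with nothing left over that needs to be felt.

```python
# Minimal sketch (hypothetical names throughout): a "robot" that detects
# tissue damage and is disposed to withdraw, with nothing that needs to be felt.

class ToyRobot:
    def __init__(self, damage_threshold=0.5):
        self.damage_threshold = damage_threshold  # disposition parameter

    def sense_damage(self, heat):
        """Unfelt detection: map a stimulus to a damage signal."""
        return max(0.0, heat - 0.3)

    def act(self, heat):
        """Unfelt disposition: withdraw whenever damage exceeds the threshold."""
        damage = self.sense_damage(heat)
        return "withdraw" if damage > self.damage_threshold else "stay"

robot = ToyRobot()
print(robot.act(0.2))  # -> "stay"
print(robot.act(0.9))  # -> "withdraw"
```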

If (sensible) anti-Creationism impels us to reject arguments from robotic design, consider that evolution can be simulated computationally in artificial-life simulations, and the kinds of traits we build into our robots can therein be shown to evolve by random variation and selection. The same can be done for computer models of learning (which just involve a change in simulation time scale), including computer models of the evolution of the disposition to learn (e.g., Baldwinian evolution).
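Here, again purely as a sketch (every name and number is hypothetical), is the same sort of withdrawal disposition evolving by nothing more than random variation and selection:

```python
# Minimal artificial-life sketch (illustrative only): a withdrawal disposition,
# encoded as a single threshold, evolving by random variation and selection,
# with no feeling anywhere in the loop.
import random

def fitness(threshold, trials=50):
    """Agents score higher the more reliably they withdraw from damaging heat."""
    score = 0
    for _ in range(trials):
        heat = random.random()
        withdraws = heat > threshold
        damaging = heat > 0.6
        if damaging and withdraws:
            score += 1          # avoided injury
        elif not damaging and not withdraws:
            score += 1          # no needless withdrawal
    return score

population = [random.random() for _ in range(100)]   # random initial thresholds
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:20]                            # selection
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(100)]               # variation (mutation)

print(sum(population) / len(population))  # thresholds drift toward ~0.6
```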

And lest we propose the superior power of cognition over Pavlovian and Skinnerian learning, remember that the kind of information processing underlying cognition can be implemented (along with its power and benefits) computationally, in unfeeling machines.

So there is definitely a problem here, of explaining the ostensibly superfluous causal role of feeling in doing. And not only do our intuitions fail us, but so does every objective attempt at the kind of causal explanation that serves us so well in just about every other functional dynamic under the sun.

To be continued in the 2012 Summer School on the Evolution and Function of Consciousness…

A Turing Robot Is Not a Turing Machine (Reply to Bernard Baars)


I don’t think anyone on any side of this discussion has said that the brain is a Turing Machine. The one who comes closest, Shimon Edelman, explicitly says “I argue that feelings in fact are computations, albeit not Turing computations.”

A Turing robot (i.e., a robot capable of passing the Turing Test, indistinguishably from any of the rest of us, for a lifetime) is not a computer (Turing machine). It is a dynamical system, with sensors and effectors, and on the inside it may be implementing any processes — whether dynamic or computational — that give it the capacity to pass the Turing Test, Turing computation being only one among the many possible processes.

The “weak” version of the Church-Turing Thesis is that everything that is “effectively computable” for a mathematician is computable by a Turing Machine.

The strong version of the Church-Turing Thesis is that Turing computation (digital computation) can simulate and approximate (just about) any dynamical physical process in the universe, including sensors and effectors, as well as analog continuous, parallel, distributed processes (such as internal rotation), and indeed also just about any neuro-chemical brain processes (perhaps excluding quantum and chaotic processes). But that simulation is only formal. A purely computational airplane does not fly. And a purely computational brain does not cognize (nor, a fortiori, does it feel). Nor does a purely computational robot (a “virtual robot”).

It is an empirical question, however, what and how much of the actual internal functioning of a Turing robot (or brain) could be performed by Turing computation.

What’s sure is that it cannot be all of it.

BB: I realize that traditionally Turing Machines are taken to be abstract versions of all possible computational implementations, including bio-computation. If you can therefore prove, or quasi-prove, that something is possible or impossible for a Turing Machine, that is taken to apply to all possible computers. The trouble is that the assumption is wrong.

The strong version of the Church-Turing Thesis holds that Turing computation can simulate and approximate (just about) any dynamical physical process — not that it can stand in for any dynamical physical process. You can’t fly to Chicago on a simulated airplane; flying is not computation. But computation can decompose and test the causal explanation of flying (or cognition).

BB: 1. Turing Machines have no memory, and no time, and no string limits. Those are non-biological assumptions.

Turing machines are formal abstractions, but they can be implemented in real finite-state dynamical systems, for example, digital computers (which do have memories, clocks and length limits).
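For concreteness, here is a minimal sketch (hypothetical and purely illustrative) of that abstract formalism implemented as an ordinary, resource-bounded program:

```python
# A minimal Turing-machine interpreter (a sketch): the abstract formalism,
# implemented in an ordinary finite program with real memory and step limits.
def run_tm(transitions, tape, state="start", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))   # finite, but extendable, tape
    head = 0
    for _ in range(max_steps):     # a real implementation is resource-bounded
        if state == halt:
            break
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # -> "0100_"
```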

BB: 2. Turing Machines are rigidly serial, when the brain is a massively parallel, and parallel-interactive organ.

Yes, but as noted, nobody says the brain is a Turing machine, just that the brain can be simulated computationally by a Turing machine.

BB: 3. While it is argued that TMs can simulate parallel and parallel-interactive computations, that is plausible only because TMs totally ignore memory, time, and finite string limits.

They can simulate them because the parallelism is simulated serially, in virtual rather than real time.
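A toy sketch (all names hypothetical) of what that serial simulation of parallelism looks like:

```python
# Sketch: simulating a parallel update serially, in virtual time.
# All units are updated "simultaneously" within one tick by computing every
# next state from the current states before committing any of them.
def step(units, neighbours, rule):
    next_states = {u: rule(units[u], [units[n] for n in neighbours[u]])
                   for u in units}        # computed serially...
    return next_states                    # ...but committed as one virtual instant

# Toy example: each unit takes on the majority value of its neighbours.
units = {"a": 0, "b": 1, "c": 1}
neighbours = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
majority = lambda s, ns: 1 if sum(ns) * 2 >= len(ns) else 0
print(step(units, neighbours, majority))  # {'a': 1, 'b': 1, 'c': 1}
```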

BB: 4. I believe that Stan Franklin and a colleague have given a formal proof that, contrary to earlier claims, there are formal machines that are more powerful mathematically than Turing Machines. This vitiates the whole standard use of TMs.

The subject of hypercomputation is controversial and I think the “hard” problem of explaining feeling is hard enough without complicating it with speculations about hypercomputation (or quantum mechanics!).

The weak Church-Turing Thesis stands unrefuted to date: Whatever mathematicians have regarded as computation has turned out to be Turing machine-computable.

The strong Church-Turing Thesis does not hold that everything is computer-simulable, only just-about everything.

BB: 5. Consciousness and qualia are biological entities, which are selectionist rather than instructionist in principle (GM Edelman), and reflect a huge evolutionary history — 200 million for mammals alone.

No doubt. But feeling (i.e., consciousness, qualia) poses a special, hard problem, both for evolutionary explanation and for functional/causal explanation. This problem will be the subject of the 2012 Summer School on the Evolution and Function of Consciousness at the Université du Québec à Montréal in June/July 2012, in which many of the contributors to this discussion (including Bernie Baars) and many other thinkers will be participating. (The Summer School will also be in commemoration of the centennial of Turing’s birth in June 1912.)

BB: 6. We have a long and repeated history of ‘impossibility proofs’ designed to falsify important empirical advances. Newton’s action at a distance, the molecular basis of life, etc. These efforts routinely fail, though they sometimes do so in interesting ways.

Explaining how and why we feel is hard (indeed, I think, impossible), but the reason has nothing to do with Turing machines or computation, nor with either the weak or the strong Church-Turing Thesis. (See “Vitalism, Animism and Feeling (Reply to Anil Seth)” in this discussion.)

BB: 7. There is no substitute for looking at nature.

Logic is an ineluctable part of nature too…

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer

Diverging on Terms, Converging on Substance (Reply to Galen Strawson)


GS: If you identify the notion of experiential qualitative character with that of feeling, then we agree on the facts, and disagree only on the terminology.

Then we agree on the facts and just disagree on the terminology!

(I find it much more straightforward and natural to speak about what experiences feel like than to speak of their “qualitative character” — but absolutely nothing substantive rides on this taste in terms.)

GS: the hard problem rests essentially on a false assumption… that we know something about the nature of the physical that gives us a good reason to think that there is a problem in the idea that the experiential is physical

My hard problem is not that metaphysical one, but this epistemic one: We cannot explain how and why we feel rather than just do (or, if you wish, why and how we have “experiences with qualitative character” rather than just do).

If I may translate into my preferred terms the paragraph you quote from Strawson (1994) (p. 196):

*Each sensory experience is felt, and each thought experience is felt. We have, so far, no explanation of how the eye and brain give rise to feeling. In the same way, we have no explanation of how the systems of the brain that generate thought give rise to feeling. The fact remains that we feel.*

I agree that we have no explanation “so far.”

(I also give some reasons in my paper why I don’t think we ever will. Among other things, I think your own preferred “panpsychism” pays far, far too exorbitant an ontic price for very little in the way of an explanatory purchase. It hypothesizes, without evidence, that feeling is a ubiquitous latent feature of matter all over the universe — which, amongst other things, creates a bit of a mereological nightmare — leaving it just as much of a mystery how and why we feel rather than just do. It borrows the bottom-line — the-buck-stops-here — character of the fundamental forces [electromagnetic, gravitation, strong subatomic, weak subatomic], but without their massive supporting evidence or explanatory power.)

The Ever-Elusive Causal Status of Feeling: Reply to John Campbell-3


JC: (1) Characterizing the epistemic role of consciousness. In particular, there’s explaining the work that sensory experience does in (a) our having propositional knowledge of our surroundings, knowing that things are thus-and-so around us, and (b) having concepts of the objects and properties in our surroundings, knowing which objects and properties those are

The trouble is that each of the mental states you mention has an easy aspect (doing and ability to do) and a hard aspect (feeling). So unless you specify which of the two you are referring to, it is difficult to know what you really mean:

JC: (1) Characterizing the epistemic role of consciousness

“Epistemic” is equivocal: it could refer to what can be known in the sense of unfelt knowing (doing, and ability to do: easy) or felt knowing (hard).

And until/unless there are further arguments to show that the distinction is coherent, a “conscious” state is a state that it feels like something to be in, hence a felt state.

JC: In particular, there’s explaining the work that sensory experience does

Unfelt sensory system activity (doing, and ability to do: easy) or felt sensory experience (hard)?

JC: in (a) our having propositional knowledge of our surroundings

Unfelt propositional knowledge (doing and saying, and ability to do and say: easy) or felt knowledge (hard)?

JC: knowing that things are thus-and-so around us

Unfelt knowing (doing, and ability to do: easy) or felt knowing (hard)?

JC: and (b) having concepts of the objects and properties in our surroundings

I have no idea what “having concepts” means! Does it mean being able to do/say certain things (easy) or does it also feel like something to have a concept (hard)?

JC: knowing which objects and properties those are

Unfelt knowing (doing, and ability to do: easy) or felt knowing (hard)?

JC: (2) Explaining how conscious experience can be realized by a physical system. It seems to me that (1) is not well understood, and that arguably it’s prior to (2)

I agree.

JC: I don’t think there’s much hope for a successful assault on (2) unless we have firmly in place a clear conception of exactly what explanatory work the notion of consciousness in general, and of sensory experience in particular, is doing for us

I agree. And the hard part is that on the face of it the answer is: none!

Unfelt Grounding: Reply to John Campbell-2


JC: You can’t address the symbol-grounding problem without looking at relations to sensory awareness. Someone who uses, e.g., words for shapes and colors, but has never had experience of shapes or colors, doesn’t know what they’re talking about; it’s just empty talk (even if they have perceptual systems remote from consciousness that allow them to use the words differentially in response to the presence of particular shapes or colors around them). Symbol-grounding shouldn’t be discussed independently of phenomena of consciousness.

The symbol grounding problem first reared its head in the context of John Searle’s Chinese Room Argument. Searle showed that computation (formal symbol manipulation) alone is not enough to generate meaning, even at Turing-Test scale. He was saying things coherently in Chinese, but he did not understand, hence mean, anything he was saying. And the incontrovertible way he discerned that he was not understanding was not by noting that his words were not grounded in their referents, but by noting that he had no idea what he was saying — or even that he was saying anything. And he was able to make that judgment because he knew what it felt like to understand (or not understand) what he was saying.

The natural solution was to scale up the Turing Test from verbal performance capacity alone to full robotic performance capacity. That would ground symbol use in the capacity for interacting with the things the symbols are about, Turing-indistinguishably from a real human being, for a lifetime. But it’s not clear whether that would give the words meaning, rather than just grounding.

Now you may doubt that there could be a successful Turing robot at all (but then I think you would have to explain why you think not). Or, like me, you may doubt that there could be a successful Turing robot unless it really did feel (but then I think you would have to explain — as I cannot — why you think it would need to feel).

If I may transcribe the above paragraph with some simplifications, I think I can bring out the fact that an explanation is still called for. But it must be noted that I am — and have been all along — using “feeling” synonymously with, and in place of “consciousness”:

*JC: You can’t address the symbol-grounding problem without looking at relations to feeling. A Turing robot that uses words for shapes and colors, but has never felt what it feels like to see shapes or colors, doesn’t know what it’s talking about; it’s just empty talk (even if it has unfelt sensorimotor and internal systems that allow it to speak and act indistinguishably from us). Symbol-grounding shouldn’t be discussed independently of feeling.*

I think you are simply assuming that feeling (consciousness) is a prerequisite for being able to do what we can do, whereas explaining how and why that is true is precisely the burden of the hard problem.

You go on to write the following (but I will consistently use “feeling” for “consciousness” to make it clearer):

JC: Trying to leave out problems of [feeling] in connection with symbol-grounding, and then [to] bring [it] back in with the talk of ‘feeling’, makes for bafflement. If you stick a pin in me and I say ‘That hurt’, is the pain itself the feeling of meaning? The talk about ‘feeling of meaning’ here isn’t particularly colloquial, but it hasn’t been given a plain theoretical role either.

I leave feeling out of symbol grounding because I don’t think they are necessarily the same thing. (I doubt that there could be a grounded Turing robot that does not feel, but I cannot explain how or why.)

It feels like one thing to be hurt, and it feels like another thing to say and mean “That hurt.” The latter may draw on the former to some extent, but (1) being hurt and (2) saying and meaning “That hurt” are different, and feel different. The only point is that (2) feels like something too: that’s what makes it meant rather than just grounded.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Internal/External Isomorphism, Discrimination and Feeling: Reply to Shimon Edelman-2


SE: “Stevan… does not… defend [his] claim [that] ‘feelings are not computations… (except for the blinkered believers in the computational theory of mind).’ I argue that feelings in fact are computations, albeit not Turing computations…”

In his paper, Shimon makes it clear that by “computations” he does not just mean the Turing computations referred to by the Church-Turing Thesis: “[E]very physical process instantiates a computation insofar as it progresses from state to state according to dynamics prescribed by the laws of physics, that is, by systems of differential equations.”

Hence what Shimon means by “feelings are computations” is just that they are (somehow) properties of dynamical systems (hardware) rather than just hardware-independent Turing computations (formal symbol systems).

That’s not computationalism (the metaphysical theory that felt states are [Turing] computational states); it’s physicalism (the metaphysical theory that felt states are physical [dynamical] states).

Well, yes, we’re all physicalists rather than “dualists”; but that doesn’t help solve the hard problem — of explaining how and why some physical states are felt states. This is not a metaphysical question but a functional one.

I have not yet read the “22 kiloword” Fekete & Edelman paper in detail, but I think I’ve understood enough of it to try to explain why I think it misses the mark, insofar as the hard problem is concerned:

The goal is to explain how and why we feel. The intuition (largely a visual one) is that external objects are dynamical systems, with (static and) dynamical properties (like size, shape, color) that (1) we feel because (2) they are “represented” in our brain by another system — an internal dynamical system that mirrors (and can operate on) those dynamical properties, right down to the last JND (just-noticeable-difference).

Now the Fekete & Edelman model is not yet implemented. But if ever it is, it is very possible that it might help in generating some of our capacity to do what we can do. Let’s even suppose it can generate all of it, powering a Turing robot that can do anything and everything we can do, right down to the last JND.

And we know how it does it: It has an internal dynamical system that mirrors the properties of the external dynamical systems that we can see, hear, manipulate, name and describe. That solves the “easy” problem of doing.

Now what about the feeling? If the Turing robot views a round shape, it can do with it all the things we can do with round shapes, in virtue of its internal dynamical counterparts, including the minutest of sensory discriminations (though one wonders why internal representations are needed to do same/different judgments on externally presented pairs of round shapes of identical or minutely different size). In any case, the internal analogs may come in handy for tasks such as the Shepard internal rotation task. (I say “internal” rather than “mental,” because “mental” would be a weasel-word here insofar as the question of feeling versus doing is concerned.)
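To see how squarely this is a matter of doing, here is a minimal sketch (all names and shapes hypothetical, purely illustrative) of a Shepard-style same/different judgment carried out on internal analog representations:

```python
# Sketch (illustrative only): a Shepard-style same/different judgment done with
# internal analog representations: rotate one shape's internal copy and compare.
# Every step is a doing; nothing in it needs to be felt.
import math

def rotate(points, angle):
    """Internal analog operation: rotate a 2-D point set about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def same_shape(shape_a, shape_b, steps=360, tol=1e-6):
    """Doing: try candidate rotations and report 'same' if one aligns the shapes."""
    for k in range(steps):
        candidate = rotate(shape_a, 2 * math.pi * k / steps)
        if all(math.dist(p, q) < tol for p, q in zip(candidate, shape_b)):
            return True
    return False

L_shape = [(0, 0), (0, 2), (1, 0)]
rotated = rotate(L_shape, math.pi / 2)
print(same_shape(L_shape, rotated))                     # True  ("same")
print(same_shape(L_shape, [(0, 0), (0, 2), (-1, 0)]))   # False ("different")
```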

The internal representations of shape certainly mirror the shapes of the external objects, but do they mirror what it feels like to see round shapes? How? I mean, if we made a trivial toy robot that could only do same/different judgments on round shapes, or on rotated Shepard-shapes, would it be feeling anything, in virtue of its internal dynamics? Why not? Would scaling up its capacity closer and closer to ours eventually make it start feeling something? When? And how? And why?

So, no, although the idea of generating internal dynamical representations that are isomorphic to external objects is a natural intuition about how to go about building what it feels like to perceive the world into a brain or robot, all it really does is give the brain or robot a means of doing what it can do (including the minutest discrimination, all the way down to a JND). It’s an input/output isomorphism, not an input/feeling isomorphism. It is as unexplained as ever why anything should be felt at all, under any of these conditions. Why should it feel like something to discriminate? Discriminating is doing. All that’s needed is the power to do it.

And there’s also the question of commensurability: Internal and external shapes are commensurable; so are input shapes and output shapes. But what about the commensurability of external shapes and what it feels like to see them? They are commensurable only on condition that the internal analogs are indeed, somehow, felt, rather than just used to do something (like making same/different judgments for successive rotated and unrotated external shapes). But why are they felt?

So I would say that such internal analogs and their dynamics may very well cast some light on the easy problem of how the brain can do some of the things it can do, but that they leave completely untouched the hard problem of how and why it feels like something to do or be able to do what the brain can do.

SE: “the causal role of feelings… stems from their close relationship to discernments (JNDs) and therefore to the conceptual structure of the mind”

But how and why are discernments (JNDs) felt, rather than just done?

SE: “[Our model] avoid[s] the panpsychism implied by computational ‘models’ that are underconstrained by the intrinsic dynamics of the computational substrate”

In other words, make sure that the internal/external isomorphism is tight enough and specific enough to avoid the conclusion that “any kind of organized matter [feels] to some extent.” Agreed. But it remains to explain how and why any kind of organized matter — whether or not isomorphic up to a JND — feels to any extent at all!

SE: “What we offer is an attempt at a principled and tightly constrained explanatory reduction of feelings to doings… a reductive leap [to the effect that] feelings are doings in the sense that we discuss.”

Shimon, I’m afraid the reductive leap doesn’t work for me! Doings are still doings, and it’s not at all clear how or why internal analog dynamics that mirror external dynamics in the service of discriminating or any other doing should be felt dynamics rather than just done dynamics.

SE: “[C]ognitive science is a basic science… [Feeling] has the same ontological status as charge… Would you tell a physicist who offers you a theory of electrodynamics ‘Yes, I understand what electrons do with their charge, but what is charge and why do they have it?’?”

No, I wouldn’t, because it’s evident that the question-asking must stop with the four basic forces of nature (electromagnetism, gravitation, the weak force and the strong force). But feeling (unless you are a panpsychist despite the complete absence of evidence for a psychokinetic force) is not one of the basic forces of nature. And cognitive science is not a basic science!

So I’m inclined to repeat what I said in my first reply: If “Because!” is the only answer we can ever get to our “hard” question, does that mean it was unreasonable to have asked the question at all? I think this would be to paper over a fundamental explanatory crack — probably our most fundamental one. The “hard” problem may well be insoluble — but surely that does not mean it is trivial, or a non-problem, or that it was some sort of “category mistake” to have asked!