Unfelt Grounding: Reply to John Campbell-2


JC: You can’t address the symbol-grounding problem without looking at relations to sensory awareness. Someone who uses, e.g., words for shapes and colors, but has never had experience of shapes or colors, doesn’t know what they’re talking about; it’s just empty talk (even if they have perceptual systems remote from consciousness that allow them to use the words differentially in response to the presence of particular shapes or colors around them). Symbol-grounding shouldn’t be discussed independently of phenomena of consciousness.

The symbol grounding problem first reared its head in the context of John Searle’s Chinese Room Argument. Searle showed that computation (formal symbol manipulation) alone is not enough to generate meaning, even at Turing-Test scale. He was saying things coherently in Chinese, but he did not understand, hence mean, anything he was saying. And the incontrovertible way he discerned that he was not understanding was not by noting that his words were not grounded in their referents, but by noting that he had no idea what he was saying — or even that he was saying anything. And he was able to make that judgment because he knew what it felt like to understand (or not understand) what he was saying.

The natural solution was to scale up the Turing Test from verbal performance capacity alone to full robotic performance capacity. That would ground symbol use in the capacity for interacting with the things the symbols are about, Turing-indistinguishably from a real human being, for a lifetime. But it’s not clear whether that would give the words meaning, rather than just grounding.

Now you may doubt that there could be a successful Turing robot at all (but then I think you would have to explain why you think not). Or, like me, you may doubt that there could be a successful Turing robot unless it really did feel (but then I think you would have to explain — as I cannot — why you think it would need to feel).

If I may transcribe the above paragraph with some simplifications, I think I can bring out the fact that an explanation is still called for. But it must be noted that I am — and have been all along — using “feeling” synonymously with, and in place of, “consciousness”:

*JC: You can’t address the symbol-grounding problem without looking at relations to feeling. A Turing robot that uses words for shapes and colors, but has never felt what it feels like to see shapes or colors, doesn’t know what it’s talking about; it’s just empty talk (even if it has unfelt sensorimotor and internal systems that allow it to speak and act indistinguishably from us). Symbol-grounding shouldn’t be discussed independently of feeling.

I think you are simply assuming that feeling (consciousness) is a prerequisite for being able to do what we can do, whereas explaining how and why that is true is precisely the burden of the hard problem.

You go on to write the following (but I will consistently use “feeling” for “consciousness” to make it clearer):

JC: Trying to leave out problems of [feeling] in connection with symbol-grounding, and then [to] bring [it] back in with the talk of ‘feeling’, makes for bafflement. If you stick a pin in me and I say ‘That hurt’, is the pain itself the feeling of meaning? The talk about ‘feeling of meaning’ here isn’t particularly colloquial, but it hasn’t been given a plain theoretical role either.

I leave feeling out of symbol grounding because I don’t think they are necessarily the same thing. (I doubt that there could be a grounded Turing robot that does not feel, but I cannot explain how or why.)

It feels like one thing to be hurt, and it feels like another thing to say and mean “That hurt.” The latter may draw on the former to some extent, but (1) being hurt and (2) saying and meaning “That hurt” are different, and feel different. The only point is that (2) feels like something too: that’s what makes it meant rather than just grounded.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Internal/External Isomorphism, Discrimination and Feeling: Reply to Shimon Edelman-2


SE: “Stevan does not defend [his] claim [that] ‘feelings are not computations (except for the blinkered believers in the computational theory of mind).’ I argue that feelings in fact are computations, albeit not Turing computations.”

In his paper, Shimon makes it clear that by “computations” he does not just mean the Turing computations referred to by the Church-Turing Thesis: “[E]very physical process instantiates a computation insofar as it progresses from state to state according to dynamics prescribed by the laws of physics, that is, by systems of differential equations.”

Hence what Shimon means by “feelings are computations” is just that they are (somehow) properties of dynamical systems (hardware) rather than just hardware-independent Turing computations (formal symbol systems).

That’s not computationalism (the metaphysical theory that felt states are [Turing] computational states); it’s physicalism (the metaphysical theory that felt states are physical [dynamical] states).

Well, yes, we’re all physicalists rather than “dualists”; but that doesn’t help solve the hard problem — of explaining how and why some physical states are felt states. This is not a metaphysical question but a functional one.

I have not yet read the “22 kiloword” Fekete & Edelman paper in detail, but I think I’ve understood enough of it to try to explain why I think it misses the mark, insofar as the hard problem is concerned:

The goal is to explain how and why we feel. The intuition (largely a visual one) is that external objects are dynamical systems, with (static and) dynamical properties (like size, shape, color) that (1) we feel because (2) they are “represented” in our brain by another system — an internal dynamical system that mirrors (and can operate on) those dynamical properties, right down to the last JND (just-noticeable-difference).

Now the Fekete & Edelman model is not yet implemented. But if ever it is, it is very possible that it might help in generating some of our capacity to do what we can do. Let’s even suppose it can generate all of it, powering a Turing robot that can do anything and everything we can do, right down to the last JND.

And we know how it does it: It has an internal dynamical system that mirrors the properties of the external dynamical systems that we can see, hear, manipulate, name and describe. That solves the “easy” problem of doing.

Now what about the feeling? If the Turing robot views a round shape, it can do with it all the things we can do with round shapes, in virtue of its internal dynamical counterparts, including the minutest of sensory discriminations (though one wonders why internal representations are needed to make same/different judgments on externally presented pairs of round shapes of identical or minutely different size). In any case, the internal analogs may come in handy for tasks such as the Shepard internal rotation task. (I say “internal” rather than “mental,” because “mental” would be a weasel-word here, insofar as the question of feeling versus doing is concerned.)

The internal representations of shape certainly mirror the shapes of the external objects, but do they mirror what it feels like to see round shapes? How? I mean, if we made a trivial toy robot that could only do same/different judgments on round shapes, or on rotated Shepard-shapes, would it be feeling anything, in virtue of its internal dynamics? Why not? Would scaling up its capacity closer and closer to ours eventually make it start feeling something? when? and how? and why?
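The thought-experiment of the trivial toy robot can be made concrete. Here is a minimal sketch (purely hypothetical and illustrative, not anyone’s proposed model) of such a discriminator: a same/different judgment down to a fixed JND is exhaustively specified as doing, with nothing left over that would need to be felt.

```python
# A deliberately trivial "robot" that makes same/different judgments on
# round shapes down to a fixed JND. Everything here is hypothetical and
# illustrative; the point is that the whole mechanism is doing, not feeling.

JND = 0.01  # just-noticeable difference in radius units (an assumed value)

def same_or_different(radius_a: float, radius_b: float) -> str:
    """Compare two round shapes: a pure threshold test, like any sensor."""
    return "same" if abs(radius_a - radius_b) < JND else "different"

print(same_or_different(1.000, 1.005))  # difference below the JND: "same"
print(same_or_different(1.000, 1.020))  # difference above the JND: "different"
```

The judgments succeed without any appeal to feeling; that is exactly the puzzle about why, in us, such discriminations are felt.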

So, no, although the idea of generating internal dynamical representations that are isomorphic to external objects is a natural intuition about how to go about building what it feels like to perceive the world into a brain or robot, all it really does is give the brain or robot a means of doing what it can do (including the minutest discrimination, all the way down to a JND). It’s an input/output isomorphism, not an input/feeling isomorphism. It is as unexplained as ever why anything should be felt at all, under any of these conditions. Why should it feel like something to discriminate? Discriminating is doing. All that’s needed is the power to do it.

And there’s also the question of commensurability: Internal and external shapes are commensurable; so are input shapes and output shapes. But what about the commensurability of external shapes and what it feels like to see them? They are commensurable only on condition that the internal analogs are indeed, somehow, felt, rather than just used to do something (like making same/different judgments for successive rotated and unrotated external shapes). But why are they felt?

So I would say that such internal analogs and their dynamics may very well cast some light on the easy problem of how the brain can do some of the things it can do, but that they leave completely untouched the hard problem of how and why it feels like something to do or be able to do what the brain can do.

SE: “the causal role of feelings stems from their close relationship to discernments (JNDs) and therefore to the conceptual structure of the mind.”

But how and why are discernments (JNDs) felt, rather than just done?

SE: “[Our model] avoid[s] the panpsychism implied by computational “models” that are underconstrained by the intrinsic dynamics of the computational substrate.”

In other words, make sure that the internal/external isomorphism is tight enough and specific enough to avoid the conclusion that “any kind of organized matter [feels] to some extent.” Agreed. But it remains to explain how and why any kind of organized matter — whether or not isomorphic up to a JND — feels to any extent at all!

SE: “What we offer is an attempt at a principled and tightly constrained explanatory reduction of feelings to doings… a reductive leap [to the effect that] feelings are doings in the sense that we discuss.”

Shimon, I’m afraid the reductive leap doesn’t work for me! Doings are still doings, and it’s not at all clear how or why internal analog dynamics that mirror external dynamics in the service of discriminating or any other doing should be felt dynamics rather than just done dynamics.

SE: “[C]ognitive science is a basic science… [Feeling] has the same ontological status as charge… Would you tell a physicist who offers you a theory of electrodynamics ‘Yes, I understand what electrons do with their charge, but what is charge and why do they have it?’?”

No, I wouldn’t, because it’s evident that the question-asking must stop with the four basic forces of nature (electromagnetism, gravitation, the weak force and the strong force). But feeling (unless you are a panpsychist despite the complete absence of evidence for a psychokinetic force) is not one of the basic forces of nature. And cognitive science is not a basic science!

So I’m inclined to repeat what I said in my first reply: If “Because!” is the only answer we can ever get to our “hard” question, does that mean it was unreasonable to have asked the question at all? I think this would be to paper over a fundamental explanatory crack — probably our most fundamental one. The “hard” problem may well be insoluble — but surely that does not mean it is trivial, or a non-problem, or that it was some sort of “category mistake” to have asked!

Sensorimotor Grounding of Words Is Necessary But Not Sufficient for Meaning: Reply to John Campbell


JC: “I wonder whether ‘feeling’ is really the right notion to fasten onto here. The obvious problem with ‘blind-semantics’ as illustrated by [Searle’s] Chinese Room is that language-use is being described without giving any place to its relation to perceptual experience. The connection between meaning and the hard problems comes when we try to characterize the relation between meaning and our sensory awareness of our surroundings.”

Meaning = Sensorimotor Grounding + Semantic Interpretability + Feeling. Yes, computation (formal symbol manipulation) alone is not enough for meaning, even if it has a systematic semantic interpretation. This is the “symbol grounding problem” (one of the “easy” problems).

The solution to the symbol grounding problem is to ground the internal symbols (words) of a Turing robot in its autonomous sensorimotor capacity to detect, categorize, manipulate and describe the symbols’ external referents.

But although grounding is necessary for meaning, it is not sufficient. The other necessary component is feeling:

It feels like something to mean something. If I say “The cat is on the mat,” I am not only generating a syntactically well-formed string of symbols — part of a symbol system that also allows me to systematically generate other symbol strings, such as “The cat is not on the mat” or “The mat is on the cat” or “The rat is on the mat,” etc. — all of which are systematically interpretable (by an external interpreter) as meaning what they mean in English.

In addition to that semantic interpretability to an external interpreter, I am also able, autonomously, to detect and interact with cats and mats, and cats being on mats, etc., with my senses and body, and able to interact with them in a way that is systematically coherent with the way in which my symbol strings are interpretable to an external interpreter.

I have no idea whether there can be “Zombies” — Turing robots whose doings and doing-capacity are indistinguishable from our own but that do not feel — although I doubt it. (I happen to believe that anything that could do what a normal human being can do — indistinguishably from a human, to a human, for a lifetime — would feel.)

But my belief is irrelevant, because there’s no way of knowing whether or not a Turing robot (or biorobot) is a Zombie: no way of determining whether there can be Turing-scale grounding without feeling. Worse, either way there is no explanation of feeling: neither an explanation of how and why a grounded Turing robot feels, if it is not a Zombie, nor an explanation of how and why we feel and the Turing robot doesn’t, if it is a Zombie.

But what is clear is what the difference would be: the presence or absence of feeling. And that is also the difference between meaning something or merely going through the motions.

JC: “I agree that there is such a thing as ‘speaking with feeling’, or ‘feeling the full weight of what one is saying’, for example. But the fundamental point of contact with the hard problem has to do with sensory awareness. Once one has grasp of meaning, suitably hooked up to sensory experience, in an agent with some kind of emotional life, then as a consequence of that there will be such a thing as ‘it feeling like something to mean something’, but that’s an epiphenomenon.”

It may well be that most of what it feels like to mean “the cat is on the mat” is what it feels like to recognize and to imagine cats, mats, and cats being on mats.

But the bottom line is still that to say (or think) and mean “the cat is on the mat” there has to be something it feels like to say (or think) and mean “the cat is on the mat” — and that for someone to be saying (or thinking) and meaning “the cat is on the mat” they have to be feeling something like that. Otherwise it’s still just “blind semantics,” even if it’s “suitably hooked up” (grounded) in sensorimotor capacity.

(Sensory “experience,” by the way, would be an equivocal weasel-word, insofar as feeling is concerned: is it felt experience or just “done” experience, as in a toaster or a toy robot?)

So I’m definitely not speaking of “speaking with feeling” in the sense of emphasis, when I say there’s something it feels like to mean something (or understand something).

I mean that to mean “the cat is on the mat” (whether in speaking or just thinking) is not just to be able to generate word strings in a way that is semantically interpretable to an external interpreter, nor even to be able to interact with the cats and mats in a way that coheres with that semantic interpretation. There is also something it feels like to mean “the cat is on the mat.” And without feeling something like that, all there is is doing.

Now explaining how and why we feel rather than just do is the hard problem, whether it pertains to sensing objects or meaning/understanding sentences. I’d call that a profound explanatory gap, rather than an “epiphenomenon.”

(But perhaps all that Professor Campbell meant by “epiphenomenon” here was that what it feels like to be saying and meaning a sentence is [roughly] what it feels like to be imagining or otherwise “calling to mind” its referents. I’d call that feeling “derivative” rather than “epiphenomenal,” but that’s just a terminological quibble, as long as we agree that meaning must be not only grounded but felt.)

“Unconscious Feeling” vs. “Unfelt Consciousness”: Detecting the Differences (Reply to David Rosenthal-2)


DR: “We need an argument – not just an assertion – that the states we call feelings can’t occur without being conscious (i.e., without being felt).

I agree that an argument is needed, but I’m not sure whether it’s the affirmer or the denier who needs to make the argument!

First, surely no argument is needed for the tautological assertion that feelings and felt states have to be felt: (having) an unfelt feeling or (being in) an unfelt felt-state is a contradiction.

The more substantive assertion is that all unfelt states are unconscious states and all felt states are conscious states (i.e., feeling = consciousness). And its denial would be either that (1) no, there can be states that are unfelt, yet conscious, or (2) no, there can be states that are unconscious yet felt (or both).

I would say the burden, then, is on the denier, either (1) to give examples of unfelt states that are nevertheless conscious states — and to explain in what sense they are conscious states if it does not feel like anything to be in those states — or (2) to give examples of unconscious states that are nevertheless felt states — and to explain who/what is feeling them if it is not the conscious subject — (or both).

(It will not do to reply, for (1), that the subject in the conscious state in question is indeed awake and feeling something, but not feeling what it feels like to be in that state. That makes no sense either. Nor does it make sense to reply, for (2), that the feeling is being felt by someone/something other than the conscious subject.)

What you have in mind, David, I know, is things like “unconscious perception” and blindsight. But what’s meant by “unconscious perception” is that the subject somehow behaves as if he had seen something, even though he is not conscious of having seen it. For example, he may say he did not see a red object presented with a masking stimulus, and yet respond more quickly to the word “dead” than to “glue” immediately afterward (and vice versa if the masked object was blue).

Well, there’s no denying that the subject’s brain detected the masked, unseen red under those conditions, and that that influenced what the subject did next. But the fact is that he did not see the red, even though his brain detected it. Indeed, that is why psychologists call this unconscious “perception.” That’s loose talk. (It should have been called unconscious detection.) But in any case, it is unconscious. So it does not qualify as an instance of something that is conscious yet unfelt.

But, by the same token, it also does not qualify as an instance of something that is unconscious yet felt: Felt by whom, if not by the conscious subject? You don’t have to feel a thing in order to “detect”: Thermostats, robots and other sensors do it all the time. Detecting is something you do, not something you feel.

As for blindsight, some of it may be based on feeling after all, just not on visual feeling but on other, nonvisual (e.g., kinesthetic) sensory feelings (for example, feeling where one’s eyes are moving, a doing that is under the control of one’s intact but involuntary and visually unconscious subcortical eye-movement system).

But some blindsight may indeed be based on unconscious detection — which is why the patient (who really can’t see) has to be encouraged to point to the object even though he says he can’t see it. It’s rather like automatic writing or speaking in tongues, and it is surprising and somewhat disturbing to the patient, who will try to rationalize (confabulate) when he is told and shown that he keeps pointing correctly in the direction of the object even though he says he can’t see a thing.

But this too is neither unfelt consciousness nor unconscious feeling: If the subject is not conscious of seeing anything, then that means he is not feeling what it feels like to see. And if he’s not feeling it, neither is anything or anyone else in his head feeling it (otherwise we have more than one mind/body problem to deal with!). If he can nevertheless identify the object before his eyes, then this is unfelt doing, not unconscious feeling.

All these findings simply compound the hard problem: If we don’t really have to see anything in order to detect stimuli presented to our eyes, then why does it feel like something to see (most of the time)?

Ditto for any other sense modality, and any other thing we are able to do: Why does it feel like something to do, and be able to do all those things?

And this “why” is not a teleological “why”: it’s a functional why. It’s quite natural, if you have a causal mechanism consisting of a bunch of widgets, to ask: What about this widget? What’s it doing? What causal role is it playing? What do you need it for?

Normally, there are answers to such questions (eventually).

But not in the case of feeling. And that’s why explaining how and why we feel is a “hard” problem, unlike explaining how and why we do, and can do, what we do. Explanations of doing manage just fine, to all intents and purposes, without ever having to mention feeling (except to say it’s present but seems to have no causal function).

DR: “If somebody doesn’t want to apply the term ‘feeling’ to the [states] that aren’t conscious, fine; but I’m maintaining that the very same type of state occurs sometimes as conscious qualitative states and sometimes not consciously.

Unfelt detection cannot be the very same state as felt detection; otherwise we really would have a metaphysical problem! What you must mean, David, is that the two kinds of states are similar in some respects. That may well be true. But the object of interest is precisely the respect in which the states differ: one is felt and the other is not. What functional difference does feeling make (i.e., why are some states felt states?), and how?

And, to repeat, blueness is a quality (i.e., a property — otherwise “quality” is a weasel-word smuggling in “qualia,” another weasel-word which just means feelings). Blueness is a quality that a conscious seeing subject can feel, by feeling what it’s like to see blue. One can call that a “qualitative state” if one likes (and one likes multiplying synonyms!). But just saying that it feels like something to see blue — and that to feel that something is to be in a felt state — seems to say all that needs to be said.

To detect blue without feeling what it feels like to see blue is to detect a quality (i.e., a property, not a “quale,” which is necessarily a felt quality), to be sure, but it is not to be in a “qualitative state” — unless a color-detecting sensor is in a qualitative state when it detects blue.

To insist on calling the detection of a quality “being in a qualitative state” sounds as if what you want to invoke is unconscious feelings (“unconscious qualia”). But then one can’t help asking the spooky question: Who on earth, or what on earth, is feeling those feelings, if it isn’t the conscious subject?

There’s certainly no need to invoke any spooks in me that are feeling feelings I don’t feel when I am a subject in an unconscious-perception experiment, since all that’s needed is unconscious processing and unconscious detection, as in a robot or a tea-pot (being heated). A robot could easily simulate masked lexical priming with optical input and word-probability estimation. But no one would want to argue that either the robot or any part of it was feeling a thing in detecting and naming red, or in its knock-on effects on the probability of finding a word rhyming with “dead.”
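A minimal sketch of such a simulation (all names, thresholds, and weights are illustrative assumptions, not data from any actual priming experiment): the “robot” detects a masked color by a bare threshold test, and that detection biases its subsequent word choice, reproducing the priming effect as pure doing.

```python
import random

# Toy simulation of masked lexical priming: unconscious "detection"
# (a mere threshold test, like a thermostat) biases later word choice.
# Nothing here is felt; it is detection and probability estimation only.

RHYMES = {"red": ["dead", "bed"], "blue": ["glue", "shoe"]}

def detect_color(wavelength_nm: float) -> str:
    """Bare threshold test on optical input: doing, not feeling."""
    return "red" if wavelength_nm > 600 else "blue"

def primed_choice(detected, candidates, rng):
    """Words rhyming with the detected color get a higher selection weight."""
    weights = [3.0 if w in RHYMES[detected] else 1.0 for w in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
detected = detect_color(650)  # a masked red stimulus (~650 nm)
picks = [primed_choice(detected, ["dead", "glue"], rng) for _ in range(1000)]
print(detected)                                   # "red"
print(picks.count("dead") > picks.count("glue"))  # priming effect: True
```

The robot reliably “prefers” the rhyme of the color its sensor registered, yet at no point does the mechanism require anyone or anything to feel red.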


DR: “[Your saying] “Unfelt properties are not ‘qualitative states.’ Qualitative states are felt states'” [is] just the denial of my view.

It depends on what is meant by “qualitative states.” If the robot detecting and naming briefly presented red objects — and subsequently more likely to pick a word that rhymes with “dead” — is an instance of a “qualitative state,” that’s fine, but then we’re clearly talking about the easy problem of doing (responding to optical input, processing words) and not the hard problem of feeling. Neither feeling nor consciousness (if there is any yet-to-be-announced distinction to be made between them) plays any part in such doings in today’s robots.

(All bets are off with Turing-Test-scale robots — but even if they do feel — and I for one believe that Turing robots would feel — we still have to solve the problem of explaining how and why they do feel.)

DR: “I don’t understand what it would be for perceiving to be something one feels. For one thing, that begs the question about whether perceiving can occur without being conscious. For another, it seems plain that it can occur without being conscious, as evidenced by subliminal perceiving, and so forth.

If it is unfelt, “perceiving” is just detecting (and responding). We know that countless unfeeling, unconscious devices can do detecting (and responding) without feeling. Hence the question is not whether detection can occur without feeling: it’s how and why some detecting (namely, perceiving) is felt.

The burden of showing that one can make something coherent and substantive out of the putative difference between felt detection and conscious detection is, I have suggested, on the one who wishes to deny that they are one and the same thing. (That’s why “perception” is a weasel-word here, smuggling in the intuition of felt qualities while at the same time denying that anyone is conscious of them.)

So subliminal “perceiving,” if unfelt, is not perceiving at all, but just detecting.

DR: “Well, I don’t know that [“perceiving” is] a weasel word – though I agree that it means both [detecting and feeling]. In the nonconscious case it’s (mere) detecting; in the conscious case, it’s conscious detecting, i.e., feeling.

Agreed!

But that does make it seem as if “feeling” and “consciousness” are pretty much of a muchness after all. And that whatever is or can be done via unfeeling/unconscious detection is unproblematic (or, rather, the “easy” problem), and what remains, altogether unsolved and untouched, is our “hard” problem of how and why some detection is felt/conscious.


DR: “There are two issues here. One is to explain why some qualitative states – some perceivings – come to be conscious; why don’t all remain subliminal? I think that’s a difficult question, which I address in several publications (e.g., Consciousness and Mind), but I won’t address here.

Perhaps, David, in your next posting you could sketch the explanation, since that is the very question (for you “difficult,” for others, “hard”) that we are discussing here. If we are agreed that “unconscious” = “unfelt” = “subliminal” states are all, alike, the “easy” ones to explain, whereas the “felt” = “conscious” = “supraliminal” states are the “hard” ones to explain (and only weasel-words like “qualitative states” and “perception” have been preventing us from realizing that), then it’s clearly how you address the hard problem (of explaining how and why some states are felt/conscious) that would be of the greatest interest here.

DR: “I don’t think, however, that it’s reasonable to assume that everything has a utility or function, so that it can’t be the case that at least some of the utility or functionality of perceiving occurs consciously. Not everything that occurs in an organism is useful for the organism. But that’s for another day.

I couldn’t quite follow the middle clause (beginning “so that it can’t be the case”), but it sounds as if you are suggesting that there may be no functional/causal explanation for why some doing and doing-capacity is felt.

I’m not sure which would be more dissatisfying: that there is no way to explain how and why some functional states are felt, or that some functional states are felt for no reason at all. The first would be a huge, perplexing fact doomed to remain unexplained; the second, a huge, perplexing fact that is a mere accident.

DR: “[Your saying] “We cannot go on to the (easy) problem of ‘higher-order awareness’ until we have first solved the (hard) problem of awareness (feeling) itself'” begs the question against the higher-order theory – and the occurrence of nonconscious qualitative states.

I think we’ve agreed that calling unfelt/unconscious states “qualitative” is merely a terminological issue. But, on the face of it, bootstrapping to higher-order awareness without first having accounted for awareness (feeling) itself seems rather like the many proofs — before the proof of Fermat’s Last Theorem — of the higher-order theorems that would follow from Fermat’s Last Theorem, if Fermat’s Last Theorem were true. Maths allows these contingent necessary truths — following necessarily from unproved premises — because maths really is just the study of the necessary formal consequences of various assumptions (e.g., axioms).

But here we are not talking about deductive proofs. We are talking about empirical data and (in the case of cognitive science, which is really just a branch of reverse bioengineering) the causal mechanisms that generate those empirical data.

So it seems to me that a theory of higher-order consciousness is hanging from a skyhook if it has not first explained consciousness (feeling) itself.

Doing the Doable — But Is “Just Because” an Answer? (Reply to Shimon Edelman)


A serious cognitive scientist ignores the rich and original work of Shimon Edelman at his peril. If anyone is likely to put the “heterophenomenology” of Dan Dennett (another formidable thinker, however many JNDs one’s views might differ from his!) on a solid psychophysical and computational footing, it is Shimon, a master at relating perceptual differences to linguistic differences.

But does the computational (or dynamical) explanation of each and every JND we can discriminate (all of which is doing) explain how or why those doings are felt?

(And if it does not, and “Because” is the only answer we can ever get to this “hard” question, does that mean it was unreasonable to have asked the question at all? I think this would be to paper over a fundamental explanatory crack — probably our most fundamental one. The “hard” problem may well be insoluble — but surely that does not mean it is trivial, or a non-problem, or that it was some sort of “category mistake” to have asked!)

SE: “Stevan is right in denying the central premise that feelings are somehow distinct from and independent of doings.”

Feelings and doings seem to be tightly correlated: that’s undeniable. But it’s the causation (and causal function) that’s at issue here.

(And one can have reservations about feeling/doing commensurability too, for the psychophysical correlation is really only a doing/doing correlation: input/output. Inquiring more deeply into the “quality” of feelings, and their “resemblance” to things in the world, runs into Wittgensteinian private-language indeterminacy problems: what’s the common metric? and what’s the error-detector?)

SE: “the how part of the question is relatively easy, if one accepts that minds are what brains [and brain-like systems] do”

Doing is what brains do. How and why they generate feelings — how and why it feels like something to do and to be able to do all that doing — is another matter.

But let me stress that the “why” in the “how and why” question is not an idle teleological query: It is a functional query, which means a causal query. If there are various functional components that generate our doing power, it seems reasonable to ask of each of them: “What causal role do they play in the successful outcome? What do they enable us to do that could not be done without them? What would be functionally missing or misfunctioning without them?”

In other words, the “why” in the “how and why” is just a call for a clear account of the specific causal contribution of feelings to the successful generation of our doing power, lest we simply take it for granted and forget that there’s a huge elephant in the room whose presence still calls for an explanation in an account of doing that looks for all the world as if it would be equally compatible with the presence or the absence of feelings.

SE: “there seems to be no way of explaining why a tomato feels this way [red] when I look at it [and] that way [squishy] to me when I handle it… Asking why any quale [feeling] feels the way it does amounts to a category mistake [and] deserves… only [the] answer ‘Because’”

But the hard problem is not that of explaining how or why something feels this way rather than that way, but explaining how and why it feels like anything at all.

A category mistake is to ask whether an apple is true (“it’s not true? well then it’s false?”). There’s no category error in asking how and why we feel rather than just do.

And if the answer is just “Because,” it’s not the impatient “Because” that questions like “why is there something rather than nothing?” or even “why is there gravity?” deserve. We are squarely in the world of doing, and its functional explanation. And there is a prominent property that is undeniably present but does not seem to have any causal role (despite the fact that, ironically, it feels causal — although that’s not the reason its presence calls for an explanation).

Waving that away with “Because” and “category error” is rather too quick…

SE: “there definitely is a way to explain why two shades of red feel this different to me: psychophysics [JNDs]”

There definitely is a way to explain how and why we can discriminate everything we can discriminate — and manipulate and categorize and name and describe.

But those are all doings and doing capacities. How and why are they felt doings and doing capacities, rather than just “done” doings and doing capacities?

(The one making the category error here seems to be Shimon!)

SE: “The conceptual leap that makes such grounding possible is akin to the explanatory move that is inherent in the Church-Turing Thesis [CTT]… the equivalence of Turing computation [a formal concept] and effective computation [an intuitive one].”

This is a very clever analogy — between, on the one hand, “capturing” intuitive computation with Turing computation, and, on the other, “capturing” people’s feelings with Turing models of doing — but it unfortunately cannot do the trick:

First, the fact that Turing computation merely “captures” mathematicians’ intuitions about what they mean by computation, rather than proving that those intuitions are correct, is the reason CTT is a thesis and not a theorem. Mathematicians have other kinds of intuitions too — such as the Goldbach Conjecture — but once formalized, those are candidate theorems, subject to proof or disproof (as in the recent proof of Fermat’s Last Theorem, formerly a conjecture). Any thesis or conjecture can be invalidated by a single counter-example, but it takes a proof to show that it is true. The reason CTT cannot be proved true is that the intuitive notion of computation it seeks to capture is not itself formal: until it is explicitly formalized, it is just a feeling! (Bertrand Russell, drawing on an example from William James, famously reminded mathematicians that feeling can be an unreliable guide too: “A smell of petroleum prevails throughout.”)

Now in science and engineering, we are not looking for proof of the truth of theorems but for evidence of the truth of theories. And evidence does not just mean gathering data that are compatible with and confirm the predictions of the theory. It means giving a causal explanation. This is clearest in engineering, where the way you test whether your theory successfully explains the way to get certain things done is to build a system (say, a vacuum cleaner) that tries to do those things according to the causal mechanism proposed by your theory, and show that the causal mechanism works (i.e., it can suck in dust).

Cognitive science is not basic science; it is more like reverse engineering, along lines similar to ordinary forward engineering. The only difference is that our vacuum cleaners grow on trees, so we have to try to reverse-engineer their doing-capacities and then test whether they have the causal power to generate our doings. That’s Turing’s method.

Now once we have a causal theory that is able to generate all of our doing power, we have causally explained doing (the “easy” problem). But have we “captured” feeling, the way the CTT has provisionally “captured” mathematicians’ intuitions about what computation is?

The answer is already apparent with CTT — which is, in a sense, also a cognitive theory of computation: a theory not only of what computation is, but of how computation is implemented in the brain. Computation, however, is doing, not feeling! So although mathematicians may have feelings about computation — just as we have feelings about tomatoes — it is not feelings that Turing computation implements but computations (doings). And (except for the blinkered believers in the computational theory of mind), feelings are not computations.

So the only sense in which a successful Turing theory of doing “captures” feeling is that it generates (and explains, causally) the doings that are correlated with feeling. It does not explain how or why those doings are felt or depend causally on feeling. The “conceptual leap” — to the conclusion that the successful explanation of doing explains feeling — is just as wrong in the case of Turing robotics as the notion that Turing computation (CTT) has explained mathematicians’ feelings about what computation is. Turing computation (provisionally) captures what mathematicians are doing when they compute. There has not yet been a counterexample; maybe there never will be. That is unproven. Explaining how/why it feels like something for mathematicians to compute — and how/why it feels like something for mathematicians to think about computation and about what is and is not computation — is a gap not bridged by the “conceptual leap.”

SE: “Just like the CTT, equating feelings with discriminations (doings) cannot be proved… [but] many explanatory benefits may ensue (including the demystification of meanings).”

Many explanatory benefits ensue from explaining doing (i.e., solving the “easy problem”); that is uncontested. Shimon’s work especially will help integrate perceptual capacity with linguistic capacity. But that’s all on the doing side of the ledger.

SE: “Towards a computational theory of experience… equates feeling with the dynamically unfolding trajectory that the collective state of the brain — its doing — traces through an intrinsically structured space of possible trajectories (which defines the range of [differences] the mind can [feel]).”

This may define the range of differences that the brain can discriminate (do). But the fact that those discriminations are felt, alas, remains untouched.

Dispositions to Do: Felt and Unfelt (Reply to Joel Marks)

(Reply to Joel Marks)

JM: “why did you omit a category of attitudes, or whatever rubric belief and desire would fall under, as something cognitive science needs to explain?… you do mention believing and understanding and doubting…”

Yes, cognitive science needs to explain attitudes, dispositions and tendencies. They are all part of the “easy” problem: doing.

Believing, desiring, understanding and doubting, besides having an “easy” aspect (dispositions and capacities to do) are also felt. That is the “hard” problem: Why are they felt, rather than just done (i.e., acted upon)?

JM:I am assuming that you count beliefs and desires as non-feelings, although you also say there is something it feels like to be in those states.

No, believing and desiring are felt — though some people (not me) speak (loosely) of unfelt tendencies as “beliefs”: I think that just creates confusion between doing and feeling, things that are, respectively, easy and hard to explain.

JM: “Suppose you strongly desired to live but believed you were about to die [rather awfully]… So there could be zombies with [that] belief/desire… who nonetheless felt nothing. I wonder what it is they would be missing?”

What Turing robots would be missing if they did not feel would be feeling. And in that case they wouldn’t have beliefs or desires either: they would just be behaving as if they had beliefs or desires (doing). All they would really have would be capacities and dispositions to do. (But I actually believe that a Turing-scale robot would feel — though of course we have no way of knowing.)

JM: “would we feel it was any less important to try to prevent the zombie from [desiring to live/believing it would die] than the person who was feeling it?”

(1) There is no way to know whether a Turing robot feels.

(2) For me it’s likely enough that it would feel (so I wouldn’t kick one).

(3) If there could be a guarantee from a deity that the Turing robot was a “Zombie” — as feelingless as a toaster — I suppose it would not matter if you kicked it (except for the wantonness of kicking even a statue). But there are no reliable deities from whom you can know that, so no way to know whether there can be Zombies.

(4) So the question is moot, since the answer depends entirely on unknowables.

JM:I must say that I have become skeptical altogether about feelings, at least as belonging to a distinct realm of sensations

Skeptical that you feel? (That doesn’t make sense to me.)

Skeptical about whether feelings are distinct from sensations? Anything felt is felt. If stimuli (of any kind — optical, acoustic, mechanical, chemical) are felt, they are sensations; if they are merely detected by your brain, but unfelt, then they are not sensations but merely receptor activity, peripheral or central.

In addition, there are other kinds of feelings, besides sensations: emotional, conative and cognitive feelings. Any state that it feels like something to be in.

Unfelt Feelings and Higher-Order Thoughts: Reply to David Rosenthal-1

(Reply to David Rosenthal)

DR: “There are two aspects to feelings. One is what they all have in common: (1) they’re all conscious. The other is (2) their specific qualitative character.”

I would say the same thing thus:

There are two aspects to feelings. One is that (1) they’re all felt. The other is (2) what each feeling feels like.

DR: “The apparent hard problem [is] a difficulty in explaining how conscious qualitative states can be subserved by or perhaps even identical with neural states”

For me, the hard problem is not this metaphysical one, but the epistemic problem of explaining how and why some neural states are felt states.

DR: “the apparent intuition that [(1) and (2)] are inseparable is generated by the idea that we know about feelings solely by the way they present themselves to consciousness.”

We know that we feel (1), and what each feeling feels like (2), because we feel it.

DR: “There seems to be an other-minds problem if… we know about qualitative states only by how they present themselves to consciousness; how can we be sure anybody else is in such states?”

We each know for sure that we feel because we each feel. How can we each know for sure that anyone else feels? (There are plenty of reliable ways to infer it: that’s not what’s at issue.)

DR: “these views are not commonsense, pretheoretic intuitions but [the] theory that we know about conscious qualitative states solely by the way they present themselves to consciousness.”

We know for sure that we feel (1), and what each feeling feels like (2), because we feel it. (That does not sound very theoretical to me!)

We each know for sure that we feel because we each feel. We can reliably infer that (and what) others feel (we just can’t know for sure). (Only the inferring sounds theoretical here.)

DR: “there is a theoretical alternative: that we know about conscious qualitative states, our own as well as others’, by way of the role they play in perceiving.”

I know (a) that I feel, and (b) what I feel, and (c) that others feel, and (d) what others feel because of the role (a) – (d) play in “perceiving”?

But is perceiving something I feel or something I do? If it’s something I feel, this seems circular. If it’s something I do — or something I’m able to do — then we’re back to doing, and the easy problem; how/why any of it is felt remains unexplained. (And “role” doesn’t help, because it is precisely explaining causality that is at issue here!)

DR: “Even bodily sensations such as pains are perceptual; they enable us to perceive… damage to our bodies. And we individuate types of qualitative character, in our own cases and in those of others, by appeal to the kind of stimulus that typically occasions the qualitative state in question… we individuate types of pains by location and by typical feels of stabbing, burning, throbbing, and so forth.”

All true: Bodily stimulation and damage are felt. And, in addition, our brains know what to do about them. But how and why are they felt, rather than just detected, and dealt with (all of which is mere doing)?

(Perception is a weasel-word. It means both detecting and feeling. Why and how is detection felt, rather than just done?)

DR: “Professor Harnad… dismisses this view, by relegating perceiving to mere doing, instead of feeling. But… the individuation of states according to perceptual role… will include states with… color, sound, shape, pain of various types, and all the other mental qualities.”

Doing is doing. If it is felt doing then it is “perception”: But how and why are (some) doings felt?

Yes, different feelings feel different. But the hard problem is explaining how and why any of them are felt at all.

DR: “But what about consciousness? We know that perceptual states can occur with qualitative character but without being conscious. Nonconscious perceptions occur in subliminal perceptions, e.g., in masked priming, and in blindsight, and we distinguish among such nonconscious perceptions in respect of qualities such as color, pitch, shape, and so forth. So qualitative character can occur without being conscious. What is it, then, in virtue of which qualitative states are sometimes conscious?”

Unfelt properties are not “qualitative states.” Qualitative states are felt states. The roundness of an apple is not a qualitative state, neither for the apple, nor for the robot that detects the roundness, nor for my brain if it detects the roundness but I don’t feel anything. The hard problem is explaining how and why I do feel something, when I do, not how my brain does things unfeelingly. (Why doesn’t it do all of it unfeelingly?)

DR: “If, as in masked priming or other forced-choice experiments… somebody is in a qualitative state that the person is wholly unaware of being in, we regard that state as not being conscious. So a conscious state is one that an individual is aware of being in.”

A felt state is a state that it feels like something to be in. “Unfelt feelings” do not make sense (to me).

The only thing the unconscious-processing (sic) and blindsight data reveal is how big a puzzle it is that our internal states — and the doings they subserve — are not all unfelt, rather than just these special cases.

DR: “I have argued elsewhere that such higher-order awareness is conferred by having a thought to the effect that one is in that state; but the particular way that higher-order awareness is implemented isn’t important for present purposes.”

We cannot go on to the (easy) problem of “higher-order awareness” until we have first solved the (hard) problem of awareness (feeling) itself.

DR: “We have, then, the makings of an explanation of how neural processing subserves — and is perhaps identical with — conscious qualitative states.”

I am afraid I have not yet discerned the slightest hint of an explanation of how or why some neural states are felt states.

DR: “But conscious qualitative states — what Professor Harnad calls feelings — are not, as is typically thought today, indissoluble atoms; rather they consist of states with qualitative character, which in itself occurs independently of consciousness, together with the higher-order awareness that results in those qualitative states’ being conscious.”

I don’t see how one can separate a feeling from the fact that it is felt. I don’t believe in (or understand) “unfelt feelings.” Unfelt states are both unfelt, and unproblematic (insofar as the “hard” problem is concerned).

And “higher-order awareness” — awareness of being aware, etc. — seems to me to be among the perks of being an organism that can do what we can do, as well as of the (unexplained) ability to feel what it feels like to do and be able to do some of those things.

Yes, not only does it feel like something to be in pain, or to see red, but it also feels like something to contemplate that “I feel, therefore I’m sure there’s feeling, but I can’t be sure whether others feel (or even exist); but I can be pretty sure, because they look and act more or less the same way I do, and they are probably thinking and feeling the same about me… etc.” And that’s quite a cognitive accomplishment: to be able to think and feel that.

But unless someone first explains how and why anything can feel anything at all, all that higher-order virtuosity pales in comparison with that one unsolved (hard) problem.

Vitalism, Animism and Feeling: Reply to Anil Seth

(Reply to Anil Seth)

Life Force: Many have responded to doubts about the possibility of explaining how and why organisms feel with doubts about the possibility of explaining how and why organisms are alive (suggesting that life is a unique, mysterious, nonphysical “life force”). Modern molecular biology has shown that the doubts about explaining life were unfounded: how and why some things are alive is fully explained, and there is no need for an additional, inexplicable life force. To have thought otherwise was simply a failure of our imaginations, and perhaps the same is true in the case of explaining how and why some organisms feel.

There is an interesting and revealing reason why this hopeful analogy is invalid: Life was always a bundle of objective properties (a form of “doing,” in the plain-talk gerunds of my little paper): Life is as life does (and can do): e.g., move, grow, eat, survive, replicate, etc., and just about all those doings have since been fully explained. Nor was there anything about those properties that was ever inexplicable “in principle”: No vitalist could have stated what it was about living that was inexplicable in principle, nor why. It was simply unexplained, hence a mystery.

But in the case of the mind/body problem, we can say (with Cartesian certainty) exactly what it is about the mind that is inexplicable (namely, feeling) and we can also say why it is inexplicable: (1) there is no evidence whatsoever of a “psychokinetic” mental force, (2) doing (the “easy problem”) is fully explainable without feeling, and (3) hence there is no causal room left for feeling in any explanation, yet (4) each of us knows full well that feeling exists. Hence the “hard” problem in the case of feeling is not based on the assumption there must be a non-physical “mental force” but on the fact that feeling is real yet causally superfluous.

(My guess is that what vitalists really had in mind all along, without realizing it, when they doubted that life could be explained physically and thought it required a nonphysical “vital force,” was in fact feeling! It was actually the mentalism [animism] latent in the vitalism that was driving vitalists’ intuitions. Well, they were wrong about life, and they could never really have given any explicit reason in the first place for doubting that living would prove fully explainable physically and functionally (i.e., in terms of “doing”), just like everything else in nature. But with feeling we do have the explicit reason, and it is not invalidated by the analogy with living. The important thing to bear in mind, however, is that the “hard” problem is not necessarily an “ontic” one. I don’t doubt that the brain causes feeling: I doubt that we can explain how or why, the way we can explain how the brain causes doing. The problem is with explaining the causal role of feeling, because that causal role cannot really be what it feels like it is.)

Feeling as Explanandum: I don’t think there’s any way for us to wriggle out of the need to explain how and why we feel (rather than just do) by an analogy with the fact that, say, physicists do not ask why the fundamental forces exist: We do not ask why there is gravity; we just need to show how it can do what it does. If psychokinesis had been a fifth fundamental force, we could have accepted that as given too, and just explained how it can do what it does. But there’s no psychokinetic force. So it remains to explain how and why we feel, even though feeling, causally superfluous for all our doings, is undeniably there!

“Explanatory Correlates of Consciousness”: The only property of consciousness (a weasel-word) that is hard to explain is how and why anything is felt. The rest is “easy”: just an explanation of what our brains and bodies and mouths can do. Yes, the Turing Biorobot is the target to aim at. But it will not solve the hard problem — it won’t even touch it.

Conscious States = Felt States: Although the English word “feel” happens to be used mostly to refer to emotion and to touch, not only does it also feel like something to see, hear, taste, move and smell (in French “je sens” refers to emotion and to smell), but it also feels like something to think, believe, want, will, and understand. It is not “intentionality” (a weasel-word for the fact that mental [another weasel-word] states are “about” something or other) that is the mark of the mental, but the fact that mental states are felt states. Hence, apart from their “correlated” doings, “conscious states… convey meaning precisely because they are” felt. (The problem, as ever, is explaining how and why they are felt — rather than just done.)

Incorrigibility and Imageless Thought: Reply to Judith Economos

(Reply to Judith Economos)

JE: “That I know what I am thinking is setting me apart from a computer or a zombie that does not know what it is thinking and therefore is only ‘thinking’. You see, I do understand that much. It just seems to me important not to confuse this with feeling, but all the words I might use, like ‘aware’ and ‘conscious’, are [you say] bad words, dishonest and weaselly. It is true that those words are vague, but so is Feeling, as you use it, and the problem is that what goes on in our minds is private to us and not really shareable. We have to communicate it to each other with words whose meanings to the other can at best be guessed at by analogy with states we think are comparable, working from the most obvious behavioristic criteria to the more delicate shades of inward state, events, processes, efforts, and, er, well, awarenesses. Since we cannot directly compare your mental target with mine, we just can’t know if we are really on about exactly the same thing, or even if there is such a thing in my mind as there is in yours. That is why I am comfortable in my intransigence about what I feel or don’t feel, and why I think argument is futile. Each of us is (as it were) sitting in a closed room considering objects for which we have never had a common name, nor common adjectives, and trying to see if the other has similar objects in his room.”

I think the debate about whether there are unfelt thoughts may be a throwback to the introspectionists’ debates about whether there is “imageless thought.”

The big difference is that the introspectionists thought that an empirical methodology was at stake, one that would allow them to explain the mind. We now know that there was no such empirical methodology, and no explanation of the mind was forthcoming, not just because introspection is not objective, hence not testable and verifiable, but because it does not reveal how the mind works.

(This conclusion was partly reached because it was not possible to resolve, objectively, the rival claims of introspective “experimentalists” as to whether imageless thought did or did not exist. The outcome was an abandonment of introspectionism in favor of behaviorism, until it turned out that that could not explain how the mind works either, which led to neuroscience and cognitive science.)

The difference here is that we are debating about whether or not there are unfelt thoughts while at the same time agreeing that introspection is neither objective nor explanatory.

The disagreement is, specifically, about what it means to say that we are conscious of something, when that being-conscious-of something does not feel like anything.

The reason I keep calling “consciousness” and “being conscious of” something weasel-words is that if things are going on in my mind that somehow are different yet do not feel different then it is not at all clear how I am telling them apart, or even how or what I am privy to when I say I am “conscious of” them despite the fact that I don’t feel them in any way.

It seems to me that the notion of “unconscious thought” and “unconscious mind” is as arbitrary and uninformative as “unfelt thought” — or, put in a less weaselly way that bares the incoherence: unfelt feelings.

Among other things, this makes it more obvious that “thought” and “thinking” are themselves weasel-words (or, at best, phenomenological place-holders). For apart from the fact that they are going on in my head, it is not really clear what “thoughts” are: we are waiting for cognitive science to tell us! (We keep forgetting that “cognition” just means thinking, knowing, mentation — weasel words, all, or at least vague if not vacuous until we have a functional — i.e., causal — explanation of what generates and constitutes both them and, more importantly, what we can do with them, apart from thinking itself.)

Now explaining how we can do what we can do is these days called the “easy” problem. What, then, is the “hard problem,” and what is it that makes it hard?

My own modest contribution is just to suggest that (1) the hard problem is to explain how and why thought (or anything at all) is felt, and that (2) it is hard because there is no causal room for feeling in a complete explanation of doing.

So where is the disagreement here? Let us assume we can agree that explaining unconscious thought — i.e., those things going on inside the brain that play a causal role in generating our capacity to do what we can do, but that we are not conscious of — is not a hard problem but an easy one. Such processes may as well be going on inside a toaster. The rest is just about how they generate the toaster’s capacity to do whatever it can do. And of course we are not privileged authorities on whatever unconscious “thoughts” may be going on in our heads, because we are not conscious of them.

So all I ask is: What on earth do we really mean when we speak of having a thought that we are conscious of having, but that does not feel like anything to have?

You (rightly) invoke “incorrigibility” if I venture to doubt your introspections, but, to me, that justified insistence that only you are in a position to judge what’s going on in your mind [as opposed to your head] derives from the fact that only the feeler can feel (hence know) what he is feeling: what the feeling feels like. But apart from that, it seems to me, there are no further 1st-person privileges. One can’t say: I don’t feel a thing, yet I know it. On what is that privileged testimony based, if it is not the usual eye-witness report? “I didn’t see the crime, but I know it was committed?” How do you “know”? In the case of thought, in what does that “knowing” consist, if not in the fact that you feel it (and it feels true)?

There is no point referring to objective evidence here. I can “know” it’s raining in the sense that I say it’s raining and it really is raining. I have then made a true statement, just as a robot or a meteorological instrument could do, but that has nothing to do with the mind or the hard problem (and, as the Gettierists will point out, it’s not really “knowing” either!). It’s just back to the easy problem of doing. What makes it hard is not just that that “thought” is going on inside your head, but that it is going on in your mind, which is what makes it mental (another weasel-word).

Hence (by my lights) the only thing left to invoke to justify calling such a thing mental, and hence privileged, is that it feels like something to think it, and you are feeling that thing, and you’re the only one in the position to attest to that fact.

Yet the fact you are attesting to here (as an eye-witness) is the fact that it doesn’t feel like anything at all to think something! And that’s why I think I am entitled to ask: Well then what is a thought, and how do you know you are thinking it? In what does your “consciousness of” it consist, if not that it feels like something to think it? (How can you be an eye-witness if your testimony is that you didn’t see a thing?)

Galen Strawson has invoked the less weaselly (but nonetheless vague and hence still somewhat weaselly) notion of “experiencing” something, as opposed to “feeling it.” But that just raises the same question: What is an unfelt “experience” (an experience it doesn’t feel like anything to have)? Galen invokes “experiential character” or “phenomenal quality” etc. But (to my ears) that’s either more weasel-words or just euphemisms because — for some reason I really can’t fathom — one can’t quite bring oneself to call a spade a spade.

To be conscious of something, to experience something, to sense something, to think something, to “access” something — all of those are simply easy, toaster-like “information”-processing functions (“information” can be yet another weasel-word, if used for anything other than data, bits) — except if they are felt, in which case all functional, causal bets are off, and we are smack in the middle of the hard problem: why and how do we feel? (Ned Block‘s unfortunate distinction between “phenomenal consciousness” and “access consciousness” is incoherent precisely because unfelt “access” is precisely what toasters have — hence “access” too is of the family Mustelidae, and the PC/AC distinction is bared as the attempt to distinguish felt feelings vs. unfelt “feelings”…)

Amen — but with no illusions of having over-ridden anyone’s (felt) privilege to insist that they do have unfelt [imageless?] thoughts — thoughts of which they are in some (unspecified) sense “conscious” even though it does not feel like anything at all to be conscious of them…

Linguistic Non Sequiturs

(1) The Dunn et al. article in Nature is not about language evolution (in the Darwinian sense); it is about language history.

(2) Universal grammar (UG) is a complex set of rules, discovered by Chomsky and his co-workers. UG turns out to be universal (i.e., all known languages are governed by its rules) and its rules turn out to be unlearnable on the basis of what the child says and hears, so they must be inborn in the human brain and genome.

(3) Although UG itself is universal, it has some free parameters that are set by learning. Word-order (subject-object vs. object-subject) is one of those learned parameters. The parameter-settings themselves differ for different language families, and are hence, of course, not universal, but cultural.

(4) Hence the Dunn et al results on the history of word-order are not, as claimed, refutations of UG.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525