Because our feelings are ubiquitous and inescapable in our waking lives, and because they feel as if they are playing a causal role, it is very difficult for us to see that in reality they are functionally superfluous (unless telekinesis is true — which it is not).
(We have a similar difficulty reasoning about the origins and adaptive function of language, because our brains are so deeply “language-prepared” that it is almost impossible for us to think of an object or state of affairs without “sub-titling” it with a verbal narrative.)
I think that Roger Lindsay’s tentative attempts to close the explanatory gap, below, inadvertently beg the question, by endowing feeling, a priori, with an (unexplained) telekinetic causal power. (The same mistake is made if it is “reasons” to which you give the causal power. For reasons — though they too are felt — need not be felt: they can be functed, as computations or even dynamics. That reasons are felt rather than just functed is just another example of the problem.) I think you are also underestimating the nature and causal power of computation, and probably of dynamics too.
RL: “your claims about the causal sufficiency of functs are certainly true of neonates [but this] decreases with age [and is] less obviously true of adults.”
I am afraid you may have missed my point, which was definitely about adults! The point is that the full causal/functional explanation is always sufficient to explain our performance and our performance capacity. The only thing it does not explain is how and why any of that functionality is felt.
RL: “I have just touched my nose… It is not likely that my action resulted… from some coincidentally pre-existing causal state… [i.e.] not from Humean causes but from voluntary performance on the basis of reasons… [e.g.] love, or hate, or anger or jealousy?”
Yes, the feeling that I do what I do because I “feel like” doing it — rather than because I am being buffeted about by underlying neurological causes — is the heart of the mind/body problem (hence also of the feeling/function problem, which is the very same problem, more transparently stated). And the lack of a causal explanation for feeling (given that telekinesis is false) is the basis of the explanatory gap.
Feelings themselves feel causal, but hardly rational, except in the sense that “My reason for doing it was that I felt like it!” If I say “I withdrew my hand from the fire because it hurt,” I am not really explaining why I withdrew it: the explanation of our nociceptive performance and capacity lies in the properties of fire and tissue, the evolutionary history of our species, the neurology of our sensorimotor systems, and our history of experience (including what we have seen and been told about the effects of fire). That’s all functing. The unexplained part is how and why pain feels like something.
By the way, a functional story similar to the one I told about why I withdrew my hand from the fire can also be told about why I pay my debts. I have reasons, of course, some historical, some verbal. But the explanatory gap lies in explaining how and why that reasoning is felt rather than just functed.
RL: “Why are we aware of feelings?”
Here’s a good example of why it is much more revealing to re-cast the problem of “consciousness” (“awareness,” etc.), i.e., the “mind/body” problem, as the “feeling/function” problem:
That way it becomes obvious that your question “Why are we aware of feelings?” is redundant: it amounts to “Why do we feel feelings?” Isn’t that the same as “Why do we feel?” (“Unfelt feelings” are not only self-contradictory; they reveal the redundancy and question-begging inherent in the usual way of putting it.)
RL: “so that we can move beyond funct determination”
It’s not about “determinism” vs. “free will.” (In my opinion, that is a rather sterile question, especially when it turns out that the only way to flesh out “freedom” is by invoking randomness!)
The real issue is the causal status of feeling: unless telekinesis is true (which it isn’t), feelings have no independent causal power. They are merely (unexplained) correlates of the functing, which is the real causal power.
RL: “If I am aware of my anger, then it can be included in a calculus (let’s call it a rational calculation) that takes other factors into account, low-level functs can be over-ridden by more humane or longer-term considerations.”
“If I feel, then my feeling can be over-ridden.” Sure, and if you don’t feel, but rather just do the functing that needs to be done, then there’s neither feeling nor the need to “over-ride” it. If feeling angry means feeling inclined to hit someone, and you over-ride it, so you don’t hit, why not just over-ride the inclination to hit (functing), without bothering with the feeling either way?
In a nutshell, this is how every attempt to assign an independent causal power to feeling (other than telekinesis, which is false) fares, when looked at carefully and functionally: the feeling always turns out to be redundant, superfluous, and hence unexplained (though it is definitely there, correlated with the functing).
RL: “I guess you will respond that I am just proposing a higher-level funct scenario, and the rational calculation process could all be carried out just as well using a variable list with associated numerical indices.”
Yes, except you seem to be over-simplifying functing, reducing it to trivial digital computations: Functing can be computational as well as dynamic.
RL: “But the calculation involves the evaluation of motives, and whilst you can simulate my desire for sex, for example, by using a number, the size of a number won’t make the real me sign up for a dating agency.”
As I said, reducing it to numbers is a caricature but, yes, the functional basis of anger as well as desire is as insentient as digital computation. Yet that’s all there is to functing, and the accompanying feeling remains inexplicable.
RL: “Take pain as another example. A nuclear plant supervisor might ignore a symbolic hazard signal for all sorts of reasons, but if the hazard signal caused her pain that was proportional to the risk, she would need pretty good reasons not to take action.”
This is again just the anosognosia about the causal status of feelings: To increase the likelihood that the supervisor detects and responds to the hazard signal, increase the likelihood that the supervisor detects and responds to the hazard signal. Interposing another “signal” (pain) is just multiplying entities (and somewhat homuncular), with no explanatory gain. (The story is the same for pain itself, as a “signal.”)
RL: “Feelings can be intrinsically motivating, awareness of feelings allows an agent to evaluate and sometimes to over-ride them.”
Translation: “Feelings can make you feel like doing something. Feeling that you feel like doing something can be over-ridden by feeling that you feel like not-doing something.”
Remove the redundant feeling of feelings, and also the superfluous feeling itself, and the functing of the doing or not-doing can do it all. Meanwhile, the accompanying feeling remains as mysterious and inexplicable as ever.
RL: “Might this provide a (non-humean) causal role for feeling”
Only at the cost of pretending that telekinesis is true after all…