SIGNALS AND SENTIENCE

Yes, plants can produce sounds when they are stressed by heat or drought or damage.

They can also produce sounds when they are swayed by the wind, and when their fruit drops to the ground.

They can also produce sights when their leaves unfurl, and when they flower.

And they can produce scents too.

And, yes, animals can detect those sounds and sights and scents, and can use them for their own advantage (if they eat the plant) or for mutual advantage (e.g., if they are pollinators).

Plants can also produce chemical signals, for signalling within the plant, as well as for signalling between plants.

Animals (including humans) can produce internal signals too: from one part of their immune system to another, from one part of their brain to another, or from their brain to their muscles or their immune system.

Seismic shifts (earth tremors) can be detected by animals, and by machines.

Pheromones can be produced by human secretions and detected and reacted to (but not smelled) by other humans.

The universe is full of “signals,” most of them neither detected nor produced by living organisms, plant or animal.

Both living organisms and nonliving machines can “detect” and react to signals, both internal and external signals; but only sentient organisms can feel them. 

To feel signals, it is not enough to be alive and to detect and react to them; an organ of feeling is needed: a nervous system.

Nor are most of the signals produced by living organisms intentional; for a signal to be intentional, the producer has to be able to feel that it is producing it; that too requires an organ of feeling.

Stress is an internal state that signals damage in a living organism; but in an insentient organism, stress is not a felt state.

Butterflies have an organ of feeling; they are sentient. 

Some species of butterfly have evolved a coloration that mimics the coloration of another, poisonous species: a signal that deters predators who have learned that prey with that coloration is often poisonous.

The predators feel that signal; the butterflies that produce it do not.

Evolution does not feel either; it is just an insentient mechanism by which genes that code for traits that help an organism to survive and reproduce get passed on to its progeny.

Butterflies, though sentient, do not signal their deterrent color to their predators intentionally.

Nor do plants that signal by sound, sight or scent, to themselves or others, do so intentionally.

All living organisms except plants must eat other living organisms to survive; plants alone can photosynthesize, with just light, CO2, water and minerals.

But not all living organisms are sentient.

There is no evidence that plants are sentient, even though they are alive, and produce, detect, and react to signals. 

They lack an organ of feeling, a nervous system.

Vegans need to eat to live.

But they do not need to eat organisms that feel.

Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Perelman, R., Boonman, A., … & Hadany, L. (2019). Plants emit informative airborne sounds under stress. bioRxiv 507590.

Wilkinson, S., & Davies, W. J. (2002). ABA-based chemical signalling: the co-ordination of responses to stress in plants. Plant, Cell & Environment, 25(2), 195–210.


Consciousness: The F-words vs. the S-words

“Sentient” is the right word for “conscious.” It means being able to feel anything at all – whether positive, negative or neutral, faint or flagrant, sensory or semantic.

For ethics, it’s the negative feelings that matter. But determining whether an organism feels anything at all (the other-minds problem) is hard enough without trying to speculate about whether there exist species that can feel only neutral (“unvalenced”) feelings. (I doubt that +/-/= feelings evolved separately, although their valence-weighting is no doubt functionally dissociable, as in the Melzack/Wall gate-control theory of pain.)

The word “sense” in English is ambiguous, because it can mean both felt sensing and unfelt “sensing,” as in an electronic device like a sensor, or a mechanical one, like a thermometer or a thermostat, or even a biological sensor, like an in-vitro retinal cone cell, which, like photosensitive film, senses and reacts to light, but does not feel a thing (though the brain it connects to might).

To the best of our knowledge so far, the phototropisms, thermotropisms and hydrotropisms of plants, even the ones that can be modulated by their history, are all like that too: sensing and reacting without feeling, as in homeostatic systems or servomechanisms.
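To make the unfelt-sensing contrast concrete, here is a minimal sketch in Python of a thermostat-style servomechanism: it detects and reacts to a temperature signal, but there is nothing it feels like to be it. (All names and numbers are invented for illustration.)

```python
# Minimal sketch of unfelt "sensing": a thermostat-style control loop.
# All names and values are invented for illustration only.

class Thermostat:
    """Detects and reacts to a temperature signal; feels nothing."""

    def __init__(self, setpoint: float, hysteresis: float = 0.5):
        self.setpoint = setpoint      # target temperature (degrees C)
        self.hysteresis = hysteresis  # dead band to avoid rapid switching
        self.heater_on = False

    def react(self, temperature: float) -> bool:
        """Pure stimulus-response: signal in, switching state out."""
        if temperature < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif temperature > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on

if __name__ == "__main__":
    t = Thermostat(setpoint=20.0)
    for reading in [18.2, 19.6, 20.8, 21.3, 19.9]:
        print(reading, "->", "heat on" if t.react(reading) else "heat off")
```

The loop “senses” and reacts, and its current reaction even depends on its history (the persisting heater state inside the dead band); but, as with the in-vitro cone cell, there is no reason to think anything is felt.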

Feel/feeling/felt would be fine for replacing all the ambiguous s-words (sense, sensor, sensation…) and dispelling their ambiguities.

(Although “feeling” is somewhat biased toward emotion (i.e., +/- “feelings”), it is the right descriptor for neutral feelings too, like warmth, movement, or touch, which only become +/- at extreme intensities.)

The only thing the f-words lack is a generic noun for “having the capacity to feel” as a counterpart for the noun sentience itself (and its referent). (As usual, German has a candidate: GefĂŒhlsfĂ€higkeit.)

And all this, without having to use the weasel-word “conscious/consciousness,” for which the f-words are a healthy antidote, to keep us honest, and coherent…

Appearance and Reality

Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html

1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbols’ shapes, not their interpretations (if any). (A minimal Turing-machine sketch at the end of this section makes this concrete.)

2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules.

3. The symbols need not be interpretable as meaning anything – there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesse’s “Glass Bead Game” – but computationalists are mostly interested in algorithms that can be given a coherent systematic interpretation by the user.

4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics.

5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally.

6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system).

7. Computation can simulate only “almost” everything in the world because — symbols and computations being digital — computer simulations of real-world objects can only be approximate. Computation is discrete and finite, hence it cannot encode every possible property of a continuous real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model. (See the pendulum sketch after this list.)

8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but “analog”), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings.

9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut.

10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR!

11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of The Matrix and the like, and, apparently, David Chalmers).

12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT).

13. Nor have they understood the distinction between appearance and reality – the one that’s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone else’s imagination.

14. Neither a computationally simulated coconut nor a VR coconut is a coconut, let alone a pumpkin in another world.

15. Computation is just semantically-interpretable symbol-manipulation (Searle’s “squiggles and squoggles”); a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a person’s head, or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk-calculator or a computer screen or VR peripherals, allowing users’ brains to perceive them through their senses rather than just through their thoughts and language.

16. In the context of the Symbol Grounding Problem and Searle’s Chinese-Room Argument against “Strong AI,” to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (That’s the locus of Chalmers’s “Reality.”)
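As a concrete illustration of point 7, here is a minimal numerical sketch, in Python, of a simulated pendulum (the example and all names are invented for illustration): the simulation is discrete and finite, and the approximation tightens as the time step shrinks, but at no step size does the symbol-manipulation become a pendulum.

```python
import math

# Sketch of point 7: a digital simulation of a pendulum is a discrete,
# finite approximation of a continuous system. Shrinking the time step
# tightens the approximation; it never yields the pendulum itself.

def simulate(theta0: float, dt: float, t_end: float,
             g: float = 9.81, length: float = 1.0) -> float:
    """Semi-implicit Euler integration of theta'' = -(g/L) * sin(theta)."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega -= (g / length) * math.sin(theta) * dt
        theta += omega * dt
    return theta

# A very fine step stands in for the "real" trajectory here.
reference = simulate(0.1, 1e-5, 2.0)
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: error ~ {abs(simulate(0.1, dt, 2.0) - reference):.6f}")
```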

Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?
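And, for concreteness about points 1 and 2 (and about what a Turing Machine does and does not do), here is a minimal sketch, in Python, of rule-based symbol manipulation; the rule table is an invented example. The rules consult only the symbols’ shapes; the arithmetic interpretation is supplied entirely by the user, and any hardware that executes the same rules computes the same thing.

```python
# Minimal Turing-machine-style interpreter (illustrative sketch).
# Rules map (state, symbol) -> (new state, symbol to write, head move).
# Rules apply to symbol *shapes* only; interpretation is the user's.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Invented rule table: scan right over 1s, append one more 1, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),  # keep moving right over 1s
    ("start", "_"): ("halt",  "1", "R"),  # write a 1 on the first blank
}

print(run(rules, "111"))  # -> "1111": interpretable by us as 3 + 1 = 4
```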

Wikipedia Talk on the Hard Problem of Consciousness

@J-Wiki: Your edits have been thoughtful, so I don’t want to dwell on quibbles. Tom Nagel’s remark is correct: No one knows how a physical state can ‘be’ a mental (i.e., felt) state. That is in fact yet another way to state the “hard problem” itself!

But to say instead “No one knows how a physical state can be or yield a mental state” is not just to state the problem, but to take a position on it, namely, the hypothesis of interactionist dualism (or something along those lines).

My edit was intended to avoid having the article take a position, by adding the possibility of the dualist solution (“yield”) to the possibility of a monist solution (“be”). This is in keeping with Chalmers’s original statement of the “hard problem”:
It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
He uses the verb “to arise”, which Merriam-Webster defines as “to begin to occur or to exist”. Of course, the same word could be used in the introduction, but it wouldn’t be the best writing style. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

But this is exactly what acknowledging that it is a “hard problem” is meant to avoid. Yes, no one has any idea how a physical state could ‘be’ a mental state, but that already covers the fact that no one has any idea how a physical state could cause a mental state either!

The inability to be something does not necessarily preclude the ability to cause something. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

Singling out that particular symptom of the problem and elevating it to the statement of the (hard) problem itself amounts to giving one particular hypothesis a privileged position among the many (vacuous) hypotheses that have been the symptoms of the (hard) problem itself.

By the same token one might have extended the statement of the problem to include all of the popular hypothetical (and vacuous) non-solutions: (1) the physical is identical with the mental (materialism, identity theory), (2) the physical causes the mental (parallelism), (3) the mental interacts with the physical (interactionism), (4) the physical replaces the mental (eliminativism), (5) there is no mental (physicalism), (6) there is only the mental (mentalism), (7) the mental is our only perspective on the physical (dual aspectism), (8) mental states and physical states are all just “functional” states (functionalism), etc. etc.

All these non-explanatory non-solutions are already implied by the hard problem itself. That P might (somehow) be the “cause” of M is already one of the many inchoate hypotheses opened up by admitting that we have no idea whatsoever as to how P could “be” M (or M could “be” P).

Yes, there are other theoretical possibilities, but Chalmers didn’t consider them popular enough to mention. Therefore, the article, in summarizing the topic, shouldn’t.
Also, the recognition that there is a “hard problem” of consciousness does not imply that the solution must be monist. It is intended to make clear that finding physical explanations of even all of the “easy problems” will not solve the “hard problem”. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

That’s why I think it would have been more NPoV to use one neutral copula as the verb-of-puzzlement (“to be”) rather than a neutral one plus an arbitrary choice among the many problematic hypotheses on the market (“to yield” – i.e., “to cause”).

As explained above, I don’t see the verb “to be” as neutral in this case. J-Wiki (talk) 06:35, 11 September 2018 (UTC)

From glancing at (but not reading) other Wp articles you have edited, a hypothesis occurs to me: might your own PoV be influenced by quantum hypotheses…?

Which PoV would that be?J-Wiki (talk) 06:35, 11 September 2018 (UTC)

As for me: “Hypotheses non fingo” —User:Harnad (talk) 12:47, 10 September 2018 (UTC) —User:Harnad (talk) 13:07, 10 September 2018 (UTC) —User:Harnad (talk) 13:10, 10 September 2018 (UTC)

Thank you for your comments.
Please see my replies interspersed above.
The essence of this discussion should be copied to the article’s talk page, so that others can benefit from the discussion for the purpose of editing the article.
J-Wiki (talk) 06:35, 11 September 2018 (UTC)
As you request, I will copy this exchange to the talk page of Hard problem of consciousness. So as not to extend this side-discussion too long, I will just make two replies:
(1) The hard problem is a problem of explanation: “how” and “why” do organisms feel rather than just function (insentiently, like machines)? Dave Chalmers did not claim to invent the hard problem, just to name it and point out that it is hard to explain how and why organisms feel rather than just function. Scientific explanation is not normally a metaphysical matter. “Monism” vs. “Dualism” (i.e., is there one kind of “stuff” in the universe [“physical”] or two kinds of “stuff” [“physical” and “mental”]?) is a metaphysical distinction, which in turn implies that the hard problem is hard for metaphysical reasons. But that is only one of many possible reasons why the hard problem is hard. Metaphysical dualism should not be given a privileged status among the myriad conceivable solutions to the hard problem, and it certainly should not be part of the definition of the hard problem. I think it is a mistake — and misleading to WP readers — to make metaphysical conjectures part of the definition of the hard problem. It is clear that some internal states of organisms are unfelt states and other internal states are felt states. The copula “are” here is not a metaphysical “are.” It takes no sides on how or why some internal states are felt. It just states that they are, and that explaining how or why this is the case is hard. The copula “is” is not a statement of the “identity theory” or any other metaphysical conjecture or scientific hypothesis. It is just the “is” we use in any subject/predicate statement.
(2) What PoV might someone interested in quantum mechanical puzzles unwittingly import into their view of the hard problem? Well, wave/particle “duality,” for a start… –User:Harnad (talk) 14:13, 11 September 2018 (UTC)
@Harnad: I’ll respond here instead of on the talk page where I saw this discussion since my response is not related to editing Hard problem of consciousness. One problem with many philosophical discussions of “mental states” (including the discussion above) is that the timing of such “mental states” is poorly specified. Philosopher Robert Prentner, for example, recently argued that the hard problem needs to be reframed with finer temporal specification; he wrote:
(Notice, incidentally, how Prentner’s mention of Whitehead’s “subject-predicate dogma” relates to your statement above: the “is” we use in any subject/predicate statement.)
If you or J-Wiki know of any interesting recent discussions of the hard problem that address temporality on multiple scales as Prentner alluded to, I would be grateful if you could share relevant citations. Biogeographist (talk) 19:11, 11 September 2018 (UTC)

@Biogeographist: I’m afraid I don’t know of recent, relevant writings on this question. I can only say that I find “dual-aspect” theory as unhelpful as the other 8 nonsolutions listed in paragraph 9 above (or many more that could have been mentioned). I don’t know what Prentner means by “finer temporal resolution” (though I’m pretty sure that by “tantamount” he means “paramount”). My guess is that the “is” ambiguity (“is” as stating a proposition and “is” as making an identity claim) is not really a profound matter. There is always a problem with physical to mental or mental to physical predication because of the (unsolved) “hard problem.” We do not know how (or why) feeling is generated. Dualists insist on reminding us that we don’t even know whether feeling is (physically) generated, or somehow sui generis. Timing won’t help.

(I assume that the hope is that if the physical (functional) state and the mental (felt) state don’t occur simultaneously, this will somehow help sort things out: I think it won’t. I did note once, in a Ben Libet context (and Dan Dennett cites it in one of his books on consciousness) that it is impossible to time the exact instant of a mental event: it could precede, coincide with, or follow its physical correlate (and subjective report certainly cannot settle the matter!): there’s no objective way to pinpoint the subjective event except very approximately, not the fine-tuning Prentner seems to want. Saul Sternberg thought it could be done statistically, with averaging, as with event-related potentials. But I think it wouldn’t help either way. Whether feeling occurs before, during, or after a neural correlate, it does not help with the hard problem, which is a problem of causal explanation, not chronometry.) —User:Harnad (talk) 22:01, 12 September 2018 (UTC)

@Harnad: Thanks for the response. I’m familiar with Libet and some related discussions but all of that is quite old at this point. I suspect there is a role for chronometry in further elucidation of the hard problem but it will require, among other things, further experimental research (as always) and technological development. If “there’s no objective way to pinpoint the subjective event” then what is the warrant for even speaking of a “felt state” in contrast to an “unfelt state” (as the lead of Hard problem of consciousness does) when a “state” can’t even be pinpointed in phenomenal experience? (This is a question about the appropriateness of “state” as a common denominator between felt and unfelt, or subjective and objective, rather than about the hard problem in general. It’s also a rhetorical question: I don’t expect that there is a good response.) Biogeographist (talk) 15:23, 13 September 2018 (UTC)

@Biogeographist: First, “no objective way to pinpoint the subjective event except very approximately” is not the same as no way to pinpoint it at all.

Second, the limits of human timing accuracy for detecting felt states or events are pretty well known. I can say whether it felt as if my tooth-ache started a half-second earlier or later but not whether it started a millisecond earlier or later. So the temporal boundaries of felt instants are probably too coarse for pinning them to neural correlates (which can be much finer).

Of course one can always dream of new technology, but that would still only be based on more accurate timing of objective neural events (unless you are imagining either a drug or a form of brain stimulation that increases the limits of human timing accuracy for detecting felt states, which I think is unlikely, though not inconceivable).

But even if subjective detection could be made as accurate as objective (neural) detection, how can that more accurate chronometry help with causal explanation? As I said, the felt instant could precede, coincide with or follow its neural correlate, but none of the three options helps explain how or why (or even whether) neural events cause feelings.

The causal problem is not in the timing. It’s in the functionality. Neural events can clearly (and unproblematically) cause motor as well as other physiological activity (including “information-processing”), all of which can be objectively timed and measured. No causal problem whatsoever there. Suppose some neural events turn out to slightly precede or even be exactly simultaneous with felt states (within the limits of measurement): How would that help explain how and why the felt states are felt? Even if the felt states systematically precede their neural correlates (within the limits of measurement), how does that help explain how and why felt states are felt?

That’s why I think “temporality” is not going to help solve the hard problem. I think the real problem is not in the timing of either firings or feelings. The problem is that feeling seems to be causally superfluous for causing, hence explaining, our cognitive capacities — once the “easy problems” (which are “only” about the causal mechanisms of behavior, cognitive capacity and physiology) have been solved.

Imagine a scenario in which the feeling precedes its neural correlate, the neural correlate can also occur without the preceding feeling, and we can show that the neural correlate alone, without the preceding feeling, is incapable of generating some behavioral capacity (i.e., an easy problem), whereas when preceded by the feeling, it can. This sounds like the ultimate gift to the dualist: but what does it explain, causally? Nothing. It is just a just-so story, causally speaking. It leaves just as big a causal mystery as the scenario in which the neural correlate precedes or coincides with the feeling. None of this gives the slightest hint of a solution. Neither monism nor dualism solves the hard problem. It just soothes metaphysical angst, hermeneutically.

Now there could be a form of dualism that does give a causal explanation, hence a solution to the hard problem: If, in addition to the four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak nuclear forces), there were a fifth force which corresponded to feeling (or willing), then we would be no more entitled to ask “how and why does this fifth force pull or push” than we are to ask how and why the other four fundamental forces pull or push. They are simply fundamental forces of nature, as described by the fundamental laws of nature, and as supported by the empirical evidence. — But the latter is exactly what feeling as a fifth force lacks. There is no empirical evidence whatsoever of a fifth fundamental force, whereas there is no end of observable, measurable evidence of the other four.

So even on the last hypothetical scenario (feeling precedes neural correlate and some behavioral capacity cannot be generated by the neural correlate alone when it is not preceded by the feeling), the “causal power” of feeling would remain a mystery: a hard problem, unsolved. The only thing I can say in favor of this fantasy scenario (which I don’t believe) is that if it did turn out to be true, it would mean that the “easy problems” cannot all be solved without the (inexplicable) help of feeling, and hence that some easy problems turn out to be hard! —User:Harnad (talk) 22:48, 13 September 2018 (UTC)

@Harnad: Thanks again for the response. It’s very interesting to read how you think. You went in a direction I wouldn’t have anticipated: I’m not sure where your sudden emphasis on “causal explanation” in your comment above comes from. (Perhaps it came from Prentner’s article, but I think he brings up causal explanation just to point toward the alternative of mereological explanation.) The word “causal” doesn’t appear in the Hard problem of consciousness article, and “not all scientific explanations work by describing causal connections between events or the world’s overall causal structure” (from: Lange, Marc (2017), Because without cause: non-causal explanation in science and mathematics, Oxford studies in philosophy of science, Oxford; New York: Oxford University Press, doi:10.1093/acprof:oso/9780190269487.001.0001, ISBN 9780190269487, OCLC 956379140). I don’t see why responses to the hard problem aimed at explaining the relationship between phenomenal experience and objective data, or at explaining the very nature of the phenomenal, would require a causal explanation. So I don’t think that “how can that more accurate chronometry help with causal explanation?” is the right question, but I’ll take the question with the word “causal” omitted. And I’ll admit that I don’t have a good answer, but I’ll keep thinking about it. I can say that I often don’t know very well exactly what it is that I am feeling, and I’m familiar enough with the psychotherapy research literature to know that I’m not alone—this is common even among relatively normal people (which includes me, of course), as philosopher Eric Schwitzgebel has also emphasized (e.g., Schwitzgebel, Eric (2011), Perplexities of consciousness, Life and mind, Cambridge, MA: MIT Press, ISBN 9780262014908, OCLC 608687514). So that adds another explanatory target for the hard problem: not just why and how people (OK, sentient beings) feel, but also why and how they don’t know what they feel. And I suspect there is a role to play for a better understanding of “temporal relations” (Prentner) and of many other things. I’ll happily concede that it may not help with causal explanation because, as I said, I don’t see how causal explanation is required. Biogeographist (talk) 01:18, 14 September 2018 (UTC)

@Biogeographist: Yes, Dave Chalmers may not have written about explanation or causal explanation in relation to the hard problem, but I have. ;>) And I don’t think the hard problem has much to do with our subjective perplexities about what it is that we’re feeling: A coherent causal explanation of how and why tissue-damage — besides generating the “easy” adaptive responses (limb-withdrawal, escape, avoidance, learning, memory) — also generates “ouch” would be sufficient to solve the hard problem (without any further existential or phenomenological introspection on the meaning or quality of “ouch”). Best wishes, —User:Harnad (talk) 12:19, 14 September 2018 (UTC)