Saturday, 2 January 2016

10a. Dennett, D. (unpublished) The fantasy of first-person science

Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014

Dennett, D. (unpublished) The fantasy of first-person science
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."

Dan Dennett's Video (2012)

Week 10 overview:

and also this (from week 10 of the very first year this course was given, 2011): 


  1. Is there another way to get access to this article? The link is not working for me.

    1. This should be it!

  2. Alright - I have a lot to say/ask about this one.

    1) Confusions about Heterophenomenology (HP)
    I’m left a bit muddled on the actual concept of (HP). For all his extolling of its universal usefulness, I never felt Dennett actually resolved exactly what it is meant to be doing…
    As I understand it: HP involves acquiring '3rd-person' data from an individual (verbal reports, hormonal changes, physiological changes, neural correlates, etc.) to 'describe phenomenology' while "never abandoning the methodological principles of science." Alone, this almost feels like a satisfactory definition, until I try to relate it to the opening question Dennett claims to be pursuing an answer to via HP: Turing's question, 'how to make a robot that has thoughts'. Am I missing something? Because I really don't see how HP, however useful it might be for 'describing phenomenology' or even predicting it, does anything related to generating a causal mechanism to answer Turing's question (namely, the why and the how of what we can do and/or feel).

    Questions re: the Hard & Other Minds Problems
    Okay so let me first try and give the most concise possible summation of my interpretation of the two Teams.
    Team B & Captain Chalmers - There is a Hard Problem
    Team A & Captain Dennett - There is no Hard Problem, because there is no 'feeling' except for the things we say about it
    It seems like we should all necessarily have to draft ourselves onto one of the two teams. But my problem is that, based on what follows in the article, I can't find myself fully in agreement with either Dennett or Chalmers, and my issue arises from Chalmers' definition of a zombie:

    "Molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely… he will certainly be identical to me functionally… and with indistinguishable behaviour resulting… It is just that none of this functioning will be accompanied by any real conscious experience."

    This is where I get trapped. I would definitely count myself a member of the B team, in that I believe there is feeling, and I believe that it is a separate thing from doing. That is to say, I believe that the Hard Problem exists. However, it seems that Chalmers here is describing a perfectly successful reverse-engineered T4 system, and then asserting that it doesn't feel, that it can't be feeling. And here I have to strongly disagree - I thought the entire premise of the Other Minds Problem was that none of us can say with certainty whether this T4 system is a 'zombie' devoid of feeling, or not. I find myself asking at this point - what more could you ask for? What more could you possibly do in solving the problem of a causal mechanism of condition? If this system can do absolutely everything that I or my human friends can do, how can I possibly assert that it is a zombie?

    So herein lies my dilemma re: which team I’m really on. It’s entirely possible that I’m misinterpreting Chalmers - if so, any clarification would be much appreciated.

    1. **causal mechanism of cognition, not condition. Damn autocorrect.

    2. 1. You're right. "Heterophenomenology" is just weather-forecasting, not explanation, nor a solution to either the other-minds problem or the "hard problem."

      2. Not quite. Yes, T4 (or T3 or T2 -- or, for that matter, T5) could be a zombie. There's no way to know, one way or the other, because of the other-minds problem.

      But there's also no way to know whether or not there could be a zombie, not just because of the other-minds problem, but because we have no solution to the "hard problem":

      The hard problem -- "How and why can we feel?" -- is exactly equivalent to "How and why can't we be zombies?"

      (By the way, there are almost certainly feelingless zombies: protons, pots, planets, PCs, probably also plants. The question is only interesting when you ask whether there can be T3- (or T4-) zombies.)

      ("Stevan Says" there can be no T3 (or T4) zombies, because although we can't solve the hard problem, feeling is almost certainly not just an accident: it's somehow needed to pass T3.)

    3. Why is it that feeling is somehow needed to pass T3? I'm still not certain about this and I'd really like some more explanation. I don't see why a non-feeling robot couldn't pass T3. Wouldn't that be like saying that we are 100% sure that all humans are feeling even though we can't ever be?

    4. Hey Alba, if I understand correctly, the argument that feeling is somehow needed to pass T3 essentially comes back to a mix of why 'Stevan Says' the hard problem is insoluble and the other-minds problem. It could be the case that a non-feeling T3 robot could pass the Turing Test, but as we are unable to feel (or identify the lack of feeling) in this robot, we cannot make the assumption either way of whether it is in fact sentient or merely a zombie (hence, the other-minds problem, with the added doubt that the robot is engineered rather than naturally evolved).
      What Professor Harnad is arguing is that, as (thus far) we seem to be unable to identify a causal mechanism and role for the fact that we feel at the same time as we 'funct' (as opposed to just 'functing'), then in the process of building the full-fledged 'functing' mechanism (when we are able to engineer that, in the presumably fairly distant future) feeling will automatically arise: not in the sense of building a 'feeling add-on' into the robot, but simply (though not so simply) as the outcome of the fully operational functing mechanism!
      What I am unclear on is: if what I have said above is correct, and if there indeed seems to be no added causal role for feeling, then why is 'epiphenomenalism' such a bad description for it?

    5. I am not trying to make a case for epiphenomenalism; I hold absolutely no attachment to it, though there is something I am just not getting (perhaps what epiphenomenalists themselves are arguing), because the difference is not clear-cut to me!

    6. Is the distinction that epiphenomenalists argue that there is no causal role (as opposed to there being a causal role that we are unable to know)?

  3. I will attempt to lay out my understanding of what I found to be a very difficult article.

    Team A says that a third-person observer has the ability to answer the hard problem by viewing the body and mind as a collection of machines that, when put together, create a "call to action", which seems to be his way of describing consciousness without the phenomenon having any special ineffable property.

    Team B says that there is consciousness: we feel it. Thus we need 1st-person introspection to judge our own consciousness, and we need to just ignore whether other people have consciousness or not (the zombie hunch). That is, they say we can ignore the other-minds problem and proceed in an undefined way towards answering the hard problem (?)

    The first section has a whole lot going on and I didn’t understand most of it when I went through the first time but the quote at the end of the appendix really helped frame what was going on

    “The old introspectionism failed precisely because it attempted, unwisely, to give subjects more authority than they can handle; as the years rolled on, more cautious and savvy researchers developed the methodology I have dubbed heterophenomenology. They crafted a maximally objective, controlled way to turn verbal reports (and interpreted button pushes, etc., etc) into legitimate data for science. All I have done is to get persnickety about the rationale of this entirely uncontroversial and ubiquitous methodology, and point out how and why it is what it is and then I've given it an unwieldy name.”

    In the article, Dennett explains how heterophenomenology (HP) is a practical scientific method that abides by the rules of science. It does not rely only on verbal report, as all reactions are valid data. This data is used to create a fictional HP world that is what it is like to be that person. The verbal reports are interpreted, but judgment on the subject's beliefs is held off because they could be erroneous. HP then explores "what could explain the existence of those beliefs". He explains how any info absent from the 1st-person view would not be available to report for HP, so there isn't anything that you can experience in the 1st person that couldn't be captured by HP.

    The debate then moves on to what cognitive scientists actually do in their research. Team A is saying that currently cognitive scientists do not rely on introspection; they merely use it as another tool to corroborate data obtained through other methods (I believe this is his point). He then reiterates that with HP they do not focus on whether the beliefs that they record are uniquely human beliefs or if they could also be believed by zombies that lack the consciousness we have. He concludes with examples of how HP can be used and why it is a more apt methodology than relying solely on 1st-person introspection. The debate then delves into what is meant by a "zombie" and why the opposition believes introspection is necessary for debating the zombie dilemma. I get confused while reading this about where the divide occurs between Team A and Team B. Team A just seems to absorb all the benefits of Team B and says that they use the ideas of Team B but remain agnostic about their meanings, and thus I suppose Team B is just reinforcing the necessity of introspection and asserting that this 3rd-person methodology doesn't work? This is about where I break down in the article.

    1. "Team B" says we feel (hence there is feeling) but it certainly does not explain why we "have" to feel.

      HP is approximately the same as T4. It lets you mind-read (weather-forecast) what someone else is doing, plus what they say -- or act as if -- they are feeling. Feeling remains untouched (the other-minds problem) and unexplained (the hard problem).

      The weasel-word here is "beliefs." It feels like something to believe something. An unfelt belief is not a belief; it's just a datum, or a state, as in a teapot or (I hope) a tree. Hence all talk about whether or not beliefs about feeling are true is circular and question-begging. (Think about it.)

      Team B has no methodology and it makes no particular use of introspection except to state the obvious: Humans (and non-human animals) feel. We are not zombies.

      And that just amounts to saying that, yes, there is a hard problem, and that "Team A" is trying to skirt it.

  4. Dennett's paper was a little bit confusing for me. I'm not sure how I feel about the whole third-person qualia argument, as I thought qualia were inherently subjective. I thought that was the whole point. Anyway, the concept of being able to study consciousness exclusively in a third-person "scientific" way seems a bit problematic. He mentioned that everything reported by the subject would be transcribed, coded, and analyzed by researchers, but there's still something missing. First, humans are not reliable transcribers of their own consciousness, and not because it is inaccurate (whatever an inaccurate consciousness is), but probably because we can't escape it. How can I accurately describe the nuances of something I have never been without? Dennett's argument reminded me of the dubious gas payment story told in the first or second class. If you think you paid for gas, and that is your perception, for your consciousness (until the gas station owner starts following you down the block) that is all that matters, regardless of whether you're right or wrong. And shouldn't that be the central focus when studying consciousness too?
    I'm obviously still stuck on qualia, and on the fact that Dennett says that if we put someone in a brain scanner during a paradigm, and the brain scanner records no shift, but the subject disagrees and says a shift occurred, then the subject is wrong. Wrong about what? What they are perceiving? My perception can be "wrong" if I'm seeing things "wrong", but if we want to study consciousness isn't this missing the whole point? If we study what the brain is doing exclusively and discredit subjective experiences, are we even studying consciousness anymore? At the end of the article, Dennett mentions that heterophenomenology does take into account people's beliefs about their conscious experience, but that's not consciousness either. We want first-person subjective moment-to-moment experience, which often can't be articulated and is (as mentioned above) often not even realized by the person. Dennett seems to have a very strong behaviourist-ish attachment to what is concrete and tangible. I don't think people's beliefs are accurate either (like Dennett), but isn't that an inherent feature of the messy consciousness that we want to study?
    The example of colour change seems a bit unfair to the 1st-person argument. Perception (especially visual) is well studied and mapped onto the brain, and is thus one of our most concrete cognitive capacities. We know fairly well what is happening, and where, and it involves an outside source (whatever we are viewing) whose actual features can be compared with the recounting of the subject in question. Coding and analyzing this would be much easier than doing the same for something that involves emotion or feeling or any of those other messy abstract cognitive capabilities.

    On the other hand, I was very disappointed that I didn't see Fodor and Dennett battling it out in the appendix. That would have been fun to read.

    1. Yes, Dan Dennett is basically still a behaviorist. He's taken all the internal computation and T4 dynamics on board, but he still imagines that because feeling is not a (publicly) observable datum like all other data, it does not exist.

  5. It was great to read an article by Dennett because his work (first his new atheism stuff then the cognitive science) is part of what made me want to go into this field. That being said I think he missed the point in this paper. It essentially boils down to a few mistakes that are repeated throughout. He either ignores feeling, or tries to describe it without explaining it. I’ll give a few examples.

    “heterophenomenology maintains a nice neutrality: it characterizes their beliefs, their heterophenomenological world, without passing judgment, and then investigates to see what could explain the existence of those beliefs.”

    A belief is not a feeling. Beliefs are surely a part of someone’s phenomenological world. But there’s more to it than that. Dennett blurs these two very frequently in his paper and I really wonder why.

    “Less often still, the existence of beliefs is explainable by showing how they are illusory byproducts of the brain’s activities: it only seems to subjects that they are reliving an experience they’ve experienced before (déja vu).”

    We know very well that people can make incorrect reports and their feelings may not correspond to reality. That’s fine (and really interesting for its own sake) but feeling itself needs to be explained. The fact that their brain has played a trick on them is one thing but the fact that it feels like something to have that experience is something else. Dennett is stuck in the easy problem. Unfortunately he set out to explain a lot more than that and he has fallen short.

    “You see apparent motion. Does the yellow blob really move? The blob on the screen doesn’t move. Ah, but does the subjective yellow blob in your experience move? Does it really move, or do you just judge that it moves? Well, it sure seems to move! That is what you judge, right? Now perhaps there are differences in how you would word your judgments. And perhaps there are other differences.”

    Dennett seems to be trying to describe feeling but has not explained it or why we have it. Sure, there are many feelings and different people feel different things. Being able to describe them all and understand what gives rise to them are very important questions of tremendous theoretical and clinical interest. They are all worth looking into and Heterophenomenology would be a great way to do that. I don’t want to debate him on that. But understanding what gives rise to specific feelings in specific circumstances is still the easy problem. He has not addressed the question of why we feel. If I understand correctly, that’s the hard problem and that’s what he’s missing.

    Overall, I was really confused about how consistently Dennett seems to miss the point. Does he refuse to acknowledge the experience of internal states in the first place? Or is he a non-feeling zombie?

    As an aside, is Dennett’s heterophenomenology related to Varela and Thompson’s neurophenomenology? (Kid sib: Varela and Thompson are two cognitive scientists who are well versed in Buddhist philosophy. They have repeatedly suggested that 3rd person methods need to be paired with first person phenomenological investigation for us to understand felt experience.) If so, is “hetero” a placeholder for various 3rd person scientific fields of inquiry?

    1. Yes, Dan fudges on beliefs. An unfelt "belief" is not a belief; it's just a state, as in a teapot. So it's incoherent to ask whether beliefs about the existence of feelings are true -- as incoherent as asking whether feelings about the existence of feelings are true!

      Yes, Dan is (still) a behaviorist, and thinks the easy problem (doing) is the only problem.

      Of course Dan does not explain feeling: that's the hard problem, and he's not solving it; he's denying it's a problem.

      Alas Varela & Thompson are woolly-headed, whereas Dan Dennett is merely (behavioristically) blinkered. (If you can provide a coherent V&T statement for kid-sib, I'll be happy to have a go at it, but I've always found it too nebulous to grasp...)

  6. If anyone is a little bit confused about Dennett's article this paper might help. It's called "What is it like to be a bat?" Coincidentally, Nagel is actually one of the "B-Team" members Dennett mentioned.

    In a nutshell, Nagel argues that even if we know everything there is to know about a bat's neurophysiology, we still won't know what it feels like to be a bat. We could take the idea further and hang out upside down all day in the dark, but we still wouldn't know what it's like to be a bat.

    To me, that's the "feeling" that Dennett misses. Even if the bat could talk and tell us what it's like, we still wouldn't really understand.

    Maybe all the cog sci students in the class are already familiar with this but as a psych student I only came across this recently and I thought it was a really cool way to think about the problem.

    1. Yes, "what does it feel like to be a bat" captures not only the other-minds problem but the hard problem, illustrating how even a bat T4 would not explain feeling.

  7. I will try to draw the distinction between Team A and Team B. Dennett, as the Team A leader, claims that heterophenomenology has the potential to describe everything there is to describe when it comes to subjective experiences. From what I understand, his point is that there is nothing more to be added to a verbal report and visible expressions; therefore, heterophenomenology is an adequate and sufficient means of accessing cognitive processes. I don't believe Dennett is denying the existence of internal beliefs and states, but rather that he doesn't see how they can be applied to cognitive research, apart from heterophenomenology.
    I do believe heterophenomenological means of accessing someone's state are useful and should be used. They are neutral, non-invasive (obviously) methods that are congruent (up to a point) with a person's beliefs. If I can make a connection in terms of clinical use: pain is said to be the fifth vital sign. Clinicians ask patients to rate their pain on a scale from 0 to 10. It really is a subjective sensation that is difficult, even impossible, to measure physically. Pain tolerance varies enormously among people and should be treated in agreement with their verbal reports.
    Even if I think I understand Dennett's point, I also believe there is something fundamentally unique about internal feelings: mental life. It is hard to agree with Dennett, since he seems to discard what it is like to be alive. The way I experience life does seem unique; it does seem to carry an 'extra spark'. Team B have a point when they say they just know that heterophenomenology is not enough: because we just feel it! I don't like the idea that all my mental states are accessible through verbal report. Once one has those verbal reports, they should know what it is like to be me? I'm not always telling exactly what I feel. Sometimes I keep things to myself. What I am saying isn't always in sync with what I am thinking.

    1. "Team A" is just predicting whether you feel this or that. Maybe they are right, maybe not. Suppose they are always right. That's heterophenomenology. But what it does not explain is how and why you feel anything at all. And because it cannot explain that, it denies that there are feelings at all. Just doings. And beliefs, which are merely internal (T4) doings.

  8. One other thing I wanted to bring up with this article:

    "faced with these failures of overlap - people who believe they are conscious of more than is in fact going on in them, and people who do not believe they are conscious of things that are in fact going on in them – heterophenomenology maintains a nice neutrality: it characterizes their beliefs, their heterophenomenological world, without passing judgment, and then investigates to see what could explain the existence of those beliefs."

    In this and the surrounding section of the paper where Dennett touts the 'neutrality' of heterophenomenology, he explains that this neutrality is supposed to be between false positives ('you're wrong and don't know what you're talking about') and false negatives ('you genuinely can't tell what's going on'). At face value, this seemed like a valid strength of heterophenomenology - but now I actually see it as kind of an arrogant overreach. Specifically, I think that it's arrogant for heterophenomenology to presume to know more about an individual's own consciousness than they do. If that were the case, why would they even bother with the research?

    That whole train of thought came from me thinking about optical illusions presented in another class (which I actually think are a useful illustration of the issue). How would heterophenomenology treat these? Take, for example, this image, which gets me every time:

    When you look at the image (with full screen brightness), the two squares appear to be completely different shades, but if you cover the midline with your hand, it's instantly obvious that they're the same colour. How would Dennett treat this, if I were a subject of his and reported on first viewing that I see 2 different colours? Presumably it would be something along the lines of "You're wrong - both squares are the same", and maybe a demonstration showing they're the same colour by covering the midline. But my issue, then, is this: although the squares are in fact the same colour (in terms of wavelength etc.), the conditions in those two scenarios are different. As a result, how could heterophenomenology tell me that I was wrong in the first case, when the two squares clearly created the feeling of seeing 2 different shades?

    I appreciate that, yes, the wavelengths (and other colour properties) are identical even when the other conditions change (hand placement etc.), but these are just correlates of colour, as the illusion demonstrates. At the end of the day, colour is something I feel, so what good is the 'neutrality' of heterophenomenology?

    1. Descartes: You can be wrong about what you see, but you can't be wrong about what you feel you see. Hence you can't be wrong about the fact that you feel. (Dan Dennett is just talking about discrepancies -- one way or the other -- between what you see and what you feel you see; and he manages to obscure it by talking instead about what you believe [weasel-word] you feel...)

  9. “Of course experimenters on illusions rely on subjects’ introspective beliefs (as expressed in their judgments) about how it seems to them, but that is the agnosticism of heterophenomenology; to go beyond it would be, for instance, to assume that in size illusions there really were visual images of different sizes somewhere in subjects’ brains (or minds), which of course no researcher would dream of doing.”

    Isn’t the point of heterophenomenology to figure out just what people are experiencing based on a complete appraisal of their physical state? If there is a behavioural correlate of this belief, surely there is also a brain state which causes it and which must in itself reveal what size the visual images are, for heterophenomenology to be viable.

    “You are not authoritative about what is happening in you, but only about what seems to be happening in you, and we are giving you total, dictatorial authority over the account of how it seems to you, about what it is like to be you. And if you complain that some parts of how it seems to you are ineffable, we heterophenomenologists will grant that too. What better grounds could we have for believing that you are unable to describe something than that (1) you don’t describe it, and (2) confess that you cannot? Of course you might be lying, but we’ll give you the benefit of the doubt.

    Is there anything about your experience of this motion capture phenomenon that is not explorable by heterophenomenology? I’d like to know what. This is a fascinating and surprising phenomenon, predicted from the 3rd-person point of view, and eminently studiable via heterophenomenology.”

    Granting that some of the content of experiences is ineffable leaves the content that cannot be expressed unaddressed. Just because they predicted that a perceptual effect would occur, it doesn’t mean that they understand what the effect is like to experience.

    I think that Jackson’s thought experiment involving Mary the color scientist is illustrative of the difference. Jackson considers the hypothetical case of Mary, who has been confined for the course of her entire life to a black and white environment. While in this environment, she has learned everything there is to know about the physical processes behind colour perception: the properties of light, cone cells, the visual processing pathways of the brain, and everything in between. Then one day she sees the colour red for the first time. Has Mary learned something new? If so, this something is the type of experiental knowledge that exceeds the scope of the heterophenomenological account of the motion capture phenomenon, or any other experience.

    1. There are doings (whether T3 doings or T4 doings) and there are feelings, which probably each have T4/HP correlates, right down to the last JND (just-noticeable difference) between feelings.

      And sometimes feelings are right about the world, and sometimes they are not.

      T4/HP would pick up all those correlations, allowing it to predict and mind-read correctly.

      But the hard problem (of how and why we feel) or even the other-minds problem (of whether we feel) is untouched. All you have with HP is our T3 doings, including their T4 correlates (also doings).

      Yes, Mary, when she finally sees color, finally knows what it feels like to see color. Till then, all she had was HP. We're the same way about "what it feels like to be a bat" no matter how good we get at chiropteran heterophenomenology.

      We're good at human HP, because we all know what it feels like to be human.

      But that neither solves the hard problem, nor makes it go away...

      (I suggest dropping all the weasel-words that are just synonyms for "feeling" and give the illusion of going beyond it: experiences, qualia, awareness, consciousness, subjectivity, 1st-person states, intentionality, beliefs, etc. etc. etc.: there's just one hard problem, and that's enough, and hard enough!)

  10. “From the recorded verbal utterances, we get transcripts (e.g., in English or French, or whatever), from which in turn we devise interpretations of the subjects’ speech acts, which we thus get to treat as (apparent) expressions of their beliefs, on all topics. Thus using the intentional stance (Dennett, 1971, 1987), we construct therefrom the subject’s heterophenomenological world. We move, that is, from raw data to interpreted data: a catalogue of the subjects’ convictions, beliefs, attitudes, emotional reactions, . . . (together with much detail regarding the circumstances in which these intentional states are situated), but then we adopt a special move, which distinguishes heterophenomenology from the normal interpersonal stance: the subjects’ beliefs (etc.) are all bracketed for neutrality. […] include not just subject’s subjective beliefs about their experiences, but the experiences themselves.”

    I just want to make sure that I get all of this correctly, because I’m really not sure that I do. So in heterophenomenology these are the steps you follow:
    a) Take whatever the subject tells you and treat it as a belief
    b) Combine the belief and the subject's physical state with their context to make sure that the data is "bracketed for neutrality"
    c) From this combination, interpret not what the subject is really feeling, but what he/she thinks he/she is feeling

    I don’t see how this informs us more about the experiences themselves. If what the subject tells us does not match what the experience itself should be according to the whole data, wouldn’t the subject be the ultimate judge? How can one say “you only THINK you feel A, but ACTUALLY you are feeling B”?

    1. Exactly: "How can one say 'you only THINK you feel A, but ACTUALLY you are feeling B'? "

      And that is exactly what Descartes would deny: you cannot be wrong about what you are feeling. What you are feeling may merely be a felt belief that is objectively incorrect ("I feel a toothache, but in reality it's referred pain from a jaw injury," etc.). That just means your feeling was mistaken, not that it was not felt!

  11. Like many of Dennett’s opponents, I am also having trouble “leaping over the Zombic Hunch.” The way I understand it, the Zombic Hunch describes the idea that there is something (some feeling, “direct experience,” consciousness, whatever you want to call it) about subjective experience that is only accessible to the person having the experience. There is some property of experience that depends entirely on the experiencer. This something is what separates Chalmers from his zombie twin:

    It is just that none of this functioning will be accompanied by any real conscious experience. There will be no phenomenal feel. There is nothing it is like to be a Zombie. (page 6)

    So to Chalmers, a necessary part of any experience is the feeling induced in the experiencer due to the process of experience itself. If I’m understanding his argument correctly, Dennett disagrees because from a heterophenomenological viewpoint, we can never be sure that the experiencer has really experienced the feeling they say they have experienced. All we can really be sure of is that the experiencer believes they have experienced this feeling:

    Although he says the zombie lacks that evidence, nevertheless the zombie believes he has the evidence, just as Chalmers does. Chalmers and his zombie twin are heterophenomenological twins: when we interpret all the data we have, we end up attributing to them exactly the same heterophenomenological worlds. (page 7)

    Chalmers and his zombie twin are identical (heterophenomenologically) because they are going through the same experience, and considering they have all the same input, it is reasonable to conclude that they would experience the same feeling as well (or at least believe they experienced the same feeling, which is all that really matters here). Because they are completely identical beings in completely identical situations, the feelings they experience must be completely identical as well.

    Does this mean Dennett is a computationalist? It seems like he is arguing that subjective experience is implementation-independent: put identical people in identical situations and they have identical experiences. With the same set of inputs, you will always get the same output, regardless of the system that is having the experience. Does this line of thinking imply that subjective experience is merely a computation? Does leaping over the Zombic Hunch just mean accepting that there is nothing more to subjective experience (and therefore cognition in general) than simple input-processing/output-production? There can be nothing else there, there is no unique “feeling” to experience, there is only the computational process. In other words, cognition and computation are inseparable. The computation itself is the cognition, there is no room for consciousness to complicate things. Consciousness is merely a by-product (and a reliably predictable one at that) of the computational process.

    To me, it seems like Dennett definitely subscribes to the computationalist theory of cognition. In fact, the way he describes consciousness as simply arising from the computational experience as a whole makes me think he would argue some form of the Systems Reply in response to Searle’s Chinese Room Argument. But I’m not 100% sure I’m right here, and I’d love to be convinced otherwise if you think differently.

    1. See above: Dan is playing somewhat loosely between feelings and beliefs. But beliefs are felt too, otherwise they are not beliefs but just states, like states in a toaster or vacuum-cleaner or computer or zombie-robot. (And again, "experience" is a redundant weasel-word. T4/HP can predict feelings because of correlations, but it cannot explain how and why T3/T4 feels at all; and it certainly cannot demonstrate that it's just a zombie, because of the other-minds problem.)

      I don't think there's any need to twist oneself into Chalmers twin-koan pretzels to show that T4/HP is still subject to the other-minds-problem, and that it does not solve the hard problem, and that there is a hard problem because there are feelings. Descartes pretty much already did that, even if partly unwittingly...

      I don't know whether Dennett considers himself a computationalist. I'm pretty sure that if he gave it a little thought, he, like Turing, would say he was a hybridist (computational/dynamical), not a pure computationalist. But for his attempt to deny the existence of feelings it does not make much difference one way or the other.

  12. “I asked him if he considered the capacity of industrial chemists to predict the molar properties of novel artificial polymers in advance of creating them as the epitome of such explanatory correlation, and he agreed that it was. Ramachandran and Gregory predicted this motion capture phenomenon, an entirely novel and artificial subjective experience, on the basis of their knowledge of how the brain processes vision” (Dennett 2001).

    While most of the class may be skeptical of Dennett, I am very intrigued by heterophenomenology. At first I thought it was a silly notion because it relied disproportionately on introspection. However, Dennett says it would be possible to verify these verbal reports through 3rd-person science. Dennett provides the example that with change blindness, there is a change in the fMRI scan when someone becomes cognizant of the change. Though this is a correlation, and philosophers and scientists may argue that this isn’t causal, I think this is as close as you can get to verifying the veridicality of someone’s verbal report. Though correlations alone are not explanations, the fact that some people can predict phenomena using correlations suggests some sort of causality, and I think this is a hopeful beginning, like that provided in the quote above.

    "Although he says the zombie lacks that evidence, nevertheless the zombie believes he has the evidence, just as Chalmers does. Chalmers and his zombie twin are heterophenomenological twins: when we interpret all the data we have, we end up attributing to them exactly the same heterophenomenological worlds” (Dennett 2001).

    Here Chalmers tries to say that the zombie doesn’t have conscious experiences, so the phenomenon is pseudo-conscious. However, because of the hard problem we can’t tell whether that zombie really does or doesn’t have conscious feeling. This doesn’t prove it does have feeling, but like Dennett, I think we can only try to make appropriate guesses as to whether they have conscious experience or not.

    I do agree with Chalmers on a few things though. The first is that beliefs may be founded in some sort of experience, and the fact that a zombie doesn’t have experience probably would mean it wouldn’t have the same beliefs. I found Dennett’s refutation to be inaccurate because he said it would be possible to eliminate the experience, have the zombie keep saying that it was experiencing a phenomenon, and still the zombie would experience it as a phenomenon. Like Chalmers, I think if experience does play a causal role in beliefs, you can’t make the leap of eliminating it and saying the belief at the end of the day would still be the same phenomenon to the person.

    Lastly, Chalmers asks whether heterophenomenology can truly remain neutral, because we give disproportionate weight to the subject’s report. Even if we are verifying these reports, is it not possible that these reports can taint our view of how to explore the scientific question? This all rests on the premise that heterophenomenology can verify the truth of the phenomenon and, if it is false, explain why people believe in it. I am not sure heterophenomenology has reached a stage where scientists agree that its explanations are what’s needed to verify the truth of the phenomena, so until then I will remain skeptical of this field remaining truly neutral.

    1. 1. Correlation is not causation and prediction is not explanation. And even if neurons were to cause feelings, somehow (and they might), it is not clear how, and even less clear why. Mind-reading is not the goal of cognitive science: causal explanation is.

      2. Zombies (a fiction) are unfeeling by definition. If they existed, and they felt, then they would not be zombies!

      3. Unfelt beliefs are not beliefs, let alone beliefs about feeling.

      (I don't understand your last paragraph.)

  13. "So the answer to the question “What else is there, besides know-how?” is that it’s exactly the same sort of extra thing that there is, over and above know-how, in the case of a headache. In T2, the know-how underlying a headache is simply to be able to state that you have a headache, and to make the rest of your discourse consistent with that fact. In T3, you might also have a pained expression on your face, cradle your head in your hands, and react in an uncharacteristically abrupt way when touched or spoken to. That’s headache know-how too. If I suspect you are faking it, I could move to T4 and request a brain scan."

    It's true - with the "other minds" problem, we don't really know if the people to whom we speak actually feel things. We assume that robots don't feel things, and even on the slim chance that they do, we definitely assume that there's no way that they feel in the same ways as humans, or that it feels the same. However, yes, it is true that we could and should and do have the same uncertainty pertaining to other people. I have to raise a simple argument here, though. Can’t humans tell if someone else is feeling something because you can only describe a feeling if you’ve felt it? We can use examples with each other that wouldn’t usually make sense but would be appropriate if someone else has felt that feeling. I could tell someone that having your heart broken feels like a train hitting you in the stomach. Would a robot understand that? A robot would understand what a train is, what a stomach is, and probably would understand the simile. And a robot could also nod its head (assuming this is a T3 robot) and say, “yes, I agree. Having your heart broken feels like a train hitting you in the stomach.” And a robot could do this because it can ground symbols. However, could a robot come up with another abstract example like that? Or would the robot, when asked to describe the feeling of heartbreak using an example, be unable to describe it in terms that a human who has had his/her heart broken would understand? Even if a robot were told to lie and tell people it felt things, would it be able to appropriately discuss the feeling if it didn’t feel it?

    Another point:

    "The Cogito. Well perhaps certainty is a bit too much to ask. We already know from Descartes that we can be certain about the necessary, provable truths of mathematics but apart from that (with one prominent exception we will get to in a minute), there’s no certainty. We can’t even be certain about the laws of physics: They are just highly probably true. We can’t be certain about the existence of the outside world; it’s just highly probable. Same for the existence of other people, and for the fact that other people think (the “other minds” problem): Highly probable, not certain. So what does that matter? Maybe certainty is something that one can only have in the formal world of mathematics."

    1. Continued : What does true mean? Provable? You can prove that 2+2 is 4. If you have two marbles and then add another two, you will have four. And this is the result that is always yielded. But this is true until it’s not. We just say it’s true because in all of the instances in the past, it’s been 4, so it’s highly probable that it will always be 4. I think it may be a bit of a stretch to say that we can be certain about the truths of mathematics. I know I'm challenging Descartes here, and it makes me feel like a bit of an idiot (who am I to challenge Descartes), but I always found the idea of "certainty in mathematics" a little troubling. Obviously I agree that we can be BASICALLY certain, because two marbles and two more marbles has always been four and there is no (obvious or apparent) reason for it not to be four, but like anything else, I think it's important to realize that it is true until it's not. If the laws of physics are only highly probable, I find it interesting that we deem math to be "certain". If the laws of physics betray us and confirm that such laws ARE, in fact, only highly probable, couldn't that ever expand to one marble and another marble not equalling two marbles?

    2. No need to turn to extreme examples like heartbreak: A robot either would or would not feel something when it hears, sees, or touches something. If it does, it feels; if it doesn't, it's a zombie.

      It's fun to speculate about what a grounded T3 would mean by "feel" if it were in fact a zombie (though we can't tell, because it talks just like us). "That surface feels hot" would just refer to the zombie's (unfelt but internal) state when it touches a hot surface. "He is angry with me" would just refer to the (mirror) internal state it's in when it is preparing to attack someone, etc. Seems to be plenty of room for grounded, T3-indistinguishable behavior based on unfelt (but detected) internal states that are just behavioral dispositions (the way the behaviorists had said, thinking that that's what feelings were!).

      "True" just means true. But "necessarily true" (true on pain of contradiction) means you can be sure it's true.

      Maths is not about marbles. In fact, it's not about anything at all. It's just formal symbol manipulation, according to rules based on the symbols' shapes: "2 + 2 = 4" is necessarily true if you assume the (formal) axioms and rules of inference of Peano's arithmetic. Marbles are just an interpretation.

      And, no, real marbles cannot violate the formal truth that "1 + 1 = 2." If one of the marbles suddenly disappears, that has nothing to do with maths...
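
      The point that "2 + 2 = 4" is necessarily true within Peano arithmetic can be made concrete. As a sketch (using the standard recursive definition of addition, not anything from Dennett's paper), the derivation is purely formal symbol manipulation:

```latex
% Derivation of 2 + 2 = 4 in Peano arithmetic,
% writing 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3).
% Axioms for addition:  (A1) a + 0 = a    (A2) a + S(b) = S(a + b)
\begin{align*}
2 + 2 &= 2 + S(1)    && \text{definition of } 2 \\
      &= S(2 + 1)    && \text{by (A2)} \\
      &= S(2 + S(0)) && \text{definition of } 1 \\
      &= S(S(2 + 0)) && \text{by (A2)} \\
      &= S(S(2))     && \text{by (A1)} \\
      &= S(3) = 4    && \text{definitions of } 3 \text{ and } 4
\end{align*}
```

      No marbles appear anywhere in the derivation; that is the sense in which the truth is formal rather than empirical.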

  14. “And if some subjects in our apparatus tell us that their qualia do shift, while our brainscanner data shows clearly that they don’t, we’ll treat these subjects as simply wrong about their own qualia, and we’ll explain why and how they come to have this false belief.”

    From what I have gathered in the article, heterophenomenology is a method that adheres to the principles of science and is used to “do justice” to subjective experiences. I think what Dennett means by “do justice” is describe subjective experiences, but by the end of the article it remained unclear to me how this had anything to do with either making a robot that does things the way we do, or how this explains the how or why of what we feel. Instead it seems that heterophenomenology is a method for studying mental phenomena by using verbal accounts of a person’s beliefs about their experiences as well as all their behavioural, visceral, emotional and other reactions.

    Dennett emphasizes that in heterophenomenology subjects’ beliefs are “all bracketed for neutrality”, which I take to mean their beliefs are taken as how things “seem” to be to the subject, rather than how they perhaps actually are. I assume if any of my interpretations thus far are incorrect I will be put in my place….

    Because of the emphasis of heterophenomenology on the way things “seem” to be to the subject, one part of the reading is not entirely clear to me (see the quote above). Dennett discusses false positive beliefs (like the qualia example), in which subjects have beliefs about their conscious experience which are provably false. There is also discussion of false negatives, in which subjects fail to report or even answer questions about things that, if presented in a forced-choice task, reveal that they have had some influence. But how would heterophenomenologists treat these things? Are they dismissed? Are these the items that are “bracketed for neutrality”? If these false beliefs are dismissed, does this not completely contradict the idea of heterophenomenology addressing what the subject “seems” to experience, whether it is detectable by fMRI or some other method or not?

    1. "You're not feeling what you feel you're feeling!"

      Renuka, you are spot-on. "Heterophenomenology" (HP) can tell you that things are not the way they seem: that that's just a hallucination of a chair, not a chair; that that pain is referred pain from your jaw, not a tooth-ache. But it cannot tell you that it doesn't seem to be what it seems to be, which would be that it doesn't feel like what it feels like. Descartes was right about that (the "Cogito"/"Sentio") and Dan isn't.

      This is equally true of "false positives" and "false negatives." Both of those are just about whether you do or don't feel something that is correlated with something in the world (like a beep): If there's a beep and you report hearing it, that's a true positive. If there's no beep and you report not hearing it, that's a true negative. If there's a beep and you report not hearing it, that's a false negative. If there's no beep and you report hearing it, that's a false positive (a "hallucination"). What is being tested is whether you hear the beep, not whether you hear the hearing.

      But more important than the fact that feelings are incorrigible (certain), and so cannot be "corrected" by the objective evidence of HP (which is merely T4, by the way), is that HP cannot explain how or why we feel at all.

      (By the way, in a variant of the gas-station story I told in class, you could present a long beep, and the subject could say they didn't hear anything, but then you could say "but listen for a faint hum" and the subject could say: "O, yes, I do hear that, so I must have been hearing it too" (or even "You're right, I was hearing it, but I just didn't notice that was what that sustained beep was"). But now we are talking about how you interpreted what you felt, or even how you interpreted it in retrospect -- describing in words what it felt like -- not about whether you felt it. You felt whatever you felt, and it felt however it felt. (God, how I hate phenomenology!) But there's a subtler side to this that complicates it more -- without changing anything: It feels like something to interpret a feeling as X [e.g., toothache] and it feels like something different to interpret that same feeling as Y [e.g., referred jaw-pain]. This is related to the fact that it feels like something to understand or mean X: Feelings are not just sensory: everything going on in your "mind" is felt, including wanting, willing, planning, meaning, understanding, imagining, wishing, etc. That's what makes it "mental" rather than just "cerebral." But never mind. No insight whatsoever about how and why anything is felt at all comes out of introspecting about the phenomenology of feeling, and how different feelings combine. And, again, "heterophenomenology" is no more nor less than just T4.)

  15. - I got confused by Dennett’s explanation of how heterophen. manages to leap over the “Zombie Hunch.” So to be heterophen. twins, both individuals need to have the same beliefs and same functional capacity. However, zombies lack conscious experience, but can have beliefs, so a heterophen.ologist must take their beliefs for granted and see if their beliefs are actually true. A heterophen.ologist believes that subjective belief depends on the contents of the experiences that make up the belief, and not on the consciousness/awareness accompanying those experiences? I don’t understand how a heterophen.ologist has overcome the zombie hunch then, because doesn’t the ex hypothesi state that whatever we are not conscious of cannot be taken as data? So beliefs can occur without consciousness? What does that even mean?

    - I also don’t understand the difference between beliefs, feelings/consciousness, and thinking in terms of what they are exactly. Are all of these states? Is thinking and beliefs states of consciousness? Or are they mental states and consciousness also a mental state? Or is consciousness not a mental state and is something unexplainable/out of the reach of scientific explanations?

    - I do sort of see Dennett’s point about how the phenomenological approach to consciousness is a discipline with no methods, because it’s so hard to approach the study of consciousness at all. I don’t believe that the heterophen. approach is a lost cause since it can probably explain the easy problem, can’t it? I don’t understand what phenomenologists want as an answer to the hard problem anyway, since representationalism and computationalism both seem not to provide a causal explanation of feelings…

    1. All believing is feeling, but not all feeling is believing

      The simplest way to see how Dan Dennett's "heterophenomenology" (HP) not only fails to solve (or dissolve) the "hard problem" (of explaining how and why organisms feel rather than just do), but does not even address it, is to grasp that HP has conflated feeling something with believing that you are feeling something: conflating feelings with beliefs about feelings.

      Dan believes there is no such thing as feeling. Feelings are just cerebral (T4) states: Your brain can be in the cerebral state of believing that P ("the cat is on the mat") or of believing that not-P ("the cat is not on the mat"). We test whether your belief is true or false by checking, "scientifically," whether there is a cat on the mat.

      Similarly, you can be in the cerebral state of believing Q ("I am feeling a toothache") or not-Q ("I am not feeling a toothache") and your dentist tests whether or not there is something wrong with your tooth, or your neurologist tests whether or not there is something wrong with your brain, and thereby determines whether your belief is true or false.

      The trouble is that a feeling is not (just) a belief. Nor is it (just) a belief about a feeling. It is a feeling. When you (or an axolotl) are being pinched, you are not (just) believing that you are being pinched (and the axolotl probably isn't "believing" anything at all). You are both feeling a pinch (even if it's just a hallucination: and a hallucinated pinch still feels like a pinch!)

      And to make things even more complicated, not only does it feel like something to be pinched (or to hallucinate being pinched), but it feels like something a bit different (or a bit more) to believe that you are being pinched (whether or not the pinch is actually a hallucination): It feels like something to believe something. (An unfelt belief is not a belief, just as an unfelt feeling is not a feeling. Let's not get into Freudian "unconscious beliefs." Let's just say that beliefs are beliefs only when you are actually believing them, just as actions are actions only when you are actually performing them. Otherwise they are just potential beliefs/actions, or dispositions to believe or do.)

      So HP does not solve the hard problem of explaining how or why organisms feel rather than just function. It does not even touch it. HP does make a little progress on the Other-Minds Problem, which is trying to predict not only what people will do, but also what they will say they are feeling. T4 would make this kind of prediction ("mind-reading," mental weather-forecasting) possible, because of the potentially perfect correlations between T3 doings, including reports of feelings, T4 brain states, and feelings.

      But that's all still just correlation. HP predicts which cerebral states will be felt states, and what they will feel like, but not how or why they are felt. It is not a causal explanation of feeling. T4 obviously isn't able to tell us whether or not there can be T4 zombies. ["Stevan Says" there can't be, but only a solution to the hard problem could explain how or why there can or can't be.].

      And (most important), a hallucination is not merely a "false belief." It is a feeling.

  16. I get the sense that Dennett is belittling the hard problem by ignoring how and why we feel all that we feel. Even if Dennett's heterophenomenology gets sufficiently close to accessing the feelings associated with thought/experience, it ultimately leaves the hard problem untackled.

    "we move, that is, from raw data to interpreted data: a catalogue of the subjects' convictions, beliefs, attitudes, emotional reactions"

    Two questions here:

    1. How does raw, objective data become interpreted to account for beliefs, attitudes, etc? How can Dennett be sure that his interpretation of person X's beliefs is true? Isn't this an infinite regress of the other minds problem type?

    2. What is objective, raw data? I definitely fall within camp B; I attribute power to the fact that we feel, and thus believe that regardless of whether Dennett is collecting verbal reports or galvanic responses ("objective raw data"), he cannot avoid the interaction between behaviour and feeling. Our verbal reports and our skin galvanic responses are surely influenced by our conscious feelings. Truly objective, raw data must then be behavioural data devoid of feeling, that is, vegetative states. Which is probably not what Dennett meant to refer to...

    1. Yes, HP does not solve the hard problem, which is a problem of explaining the causal function of feeling. (T3 and T4 would already solve the "easy" problem of doing, and HP is just a part of T4.)

  17. “As I like to put it, we are robots made of robots–we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent.”

    When I hear this statement I can’t help but think of the Selfish Gene by Richard Dawkins. Dawkins basically believes that our genes control everything from the way we think to our behavior, in order to help us live and reproduce so that they can live on from generation to generation. I know that Dennett is referring to robotic parts, but he refers to the molecules as “robotic” because they are mindless. Our genes are also mindless, but if Dawkins is right, they also control all of the action that makes us “conscious agents”. Just like people don’t want to agree with Dennett that we can be like robots, they also don’t want to agree with Dawkins that our genes run our lives. Dennett says that this is due to a “Zombic Hunch” and that people want to believe that we are more than just robots, just like we want to feel like we control our actions directly and that they have not been predetermined by our genetic code. We want to feel like we have free will.

    “In this chapter we have developed a neutral method for investigating and describing phenomenology. It involves extracting and purifying texts from (apparently) speaking subjects, and using those texts to generate a theorist’s fiction, the subject’s heterophenomenological world. This fictional world is populated with all the images, events, sounds, smells, hunches, presentiments, and feelings that the subject (apparently) sincerely believes to exist in his or her (or its) stream of consciousness. Maximally extended, it is a neutral portrayal of exactly what it is like to be that subject–in the subject’s own terms, given the best interpretation we can muster. . . . . People undoubtedly do believe that they have mental images, pains, perceptual experiences, and all the rest, and these facts–the facts about what people believe, and report when they express their beliefs–are phenomena any scientific theory of the mind must account for. (CE, p98)”

    From what I understand, Dennett is saying that due to false positives and false negatives in self-reports, we must be objective and develop a “neutral” method to describe how people feel the way they do, and why they do what they do. He is saying that we could do this partially by coding what people say based on text. My problem with this is that I still don’t think it can solve the problem of qualia. I don’t think that we could ever understand the exact way that someone else feels and experiences things. Even if we are talking about colors, someone can verbally describe them one way, but “red” to me is not the same as “red” to you. Our vocabulary is not advanced enough to describe in exact detail how we experience things. Dennett says that heterophenomenology looks at more than just verbal speech, but I don’t think scientific technology is advanced enough to be able to look at people’s hormones, for example, and know exactly how they feel. Even if we knew that a surge of the neurotransmitter dopamine makes people feel happy, we still would not understand what that experience of happiness truly feels like for that individual. It is naïve to think that my feeling of happy is the same as yours.

    1. 1. Genes don't feel. Cells don't feel. Organs don't feel. Only organisms with nervous systems feel. Genes code for the properties of organisms, including the development of nervous systems that generate feeling. The hard problem is explaining how and why.

      2. Please distinguish the hard problem from the other-minds problem. Heterophenomenology (which is really just T4) does not touch the hard problem at all.

  18. Were your qualia changing before you noticed the flashing cupboard door?

    This question is what I think I mentioned a few weeks ago talking in class about categorical perception and the Whorf hypothesis, in the “What did Mary see?” example of whether or not someone who hasn’t ever seen a color has learned something new by seeing a color for the first time.
    (I am inclined to go with option C from Dennett’s paper in this example – I don’t know).
    Dennett is a computationalist and he endorses option A from the start of his paper, that Turing found a way to answer Kant’s question, because by reverse engineering cognition using computation we can understand how we have experiences.
    Later in his paper, when he uses the flashing cupboard door example to illustrate the “messiness” of the concept of qualia, he doesn’t seem to know which option to choose. By defining qualia as an essentially first-person-access phenomenon, he claims that these philosophers have made it too variable and problematic, because it could be defined differently by each person and there seems to be room to manipulate the data to one’s own advantage (like cheating on a personality test when you want to seem like a better person; self-report reliability measures are not great, seems to be the thrust of his argument against qualia and first-person science).

    I understand his argument but I don’t agree that first person perspectives are unstudiable by science and thus useless to studying cognition and consciousness and all the other “weasel words” we use to define human mental experience.
    I think that to throw out first person science is a really easy way of being a computationalist and adding power to your claims.

    I am a little confused on heterophenomenology. If anyone has a good kid sib definition/example not in the paper, please comment! What I can glean is that heterophenomenology is the reliable way of studying subjective first person experiences in an objective “third person” scientific way.

    (This is an aside, but isn’t it kind of paradoxical to say third-person science? All science is necessarily first-person in the sense that a scientist, and thus a subjective human, is practicing it… according to some philosophers at least.)

    1. There is no 3rd person science or 1st person science. There is just the easy problem (explaining causally how and why organisms can do what they can do) and the hard problem (explaining causally how and why organisms feel).

      Heterophenomenology is a part of T4: using the correlations between T3 doings, T4 brain activity, and feelings to predict (but not explain) feelings.

  19. Overall I found this a fairly hard article to wrap my head around (especially given that Dennett uses so many synonyms, yet explains/uses them in different ways, such as qualia, experience, consciousness, feeling). My understanding of the argument is that Dennett does not believe that there are feelings (thus no point trying to causally explain how and why we feel, because there is no hard problem) and Chalmers believes there are feelings (thus the hard problem exists; however, he does not provide an explanation of how/why we do feel and simply states that we do). So, Dennett is of the opinion that what we call ‘feelings’ are actually our behaviours (whatever we do/say) during different states (looking at the yellow blob, listening to music) + the neural activity that goes on in our brains during all this (so we can measure this using the “3rd person scientific method”). Even more, he thinks feelings are merely beliefs that can have true/false values. Given this stance that he does not believe there are feelings, does he deny Descartes’ Cogito? Because if Dennett (and Team A) supposes that no one feels, does that suggest that he does not feel? I may be totally off here. We have established throughout this class that I know that I can feel because it feels like something to feel (i.e. there are no unfelt feelings, as Harnad mentioned in class), but we cannot know if other people feel due to the other minds problem. We are constantly feeling; we feel when we think, eat, walk, laugh. In fact, I am not sure anyone could describe what it feels like not to feel (in attempting to describe it, I am sure feeling would be involved). Based on this train of thought, it seems I am not a proponent of Team A.

    Also, he constantly mentions that heterophenomenology is neutral, but I do not fully understand how the neutrality of behavioural and verbal correlates can lead us to an understanding of how and why we do what we do/feel. This idea brings me back to the earlier weeks in this class when we were talking about behaviourists – correlations between brain activity and our behaviour do not provide any information about the how and why of everything we can do. So, the action of blushing or frowning + the neural activity involved in this action gets us no further in understanding.

  20. I do agree with heterophenomenology's (HP) emphasis on 3rd-person data in order to conduct research on cognition, but Dennett’s arguments for this style of observation seem to subject themselves to the same pitfalls as behaviourism. In the realm of this debate, HP experiments seem to just describe a subject’s response to certain stimuli presented, or to just predict the brain activity produced by a task. What is missing is how the person has come up with their response, or how this brain activity leads to behaviour; all of which is needed in order to answer Dennett’s formulation of Turing’s question: “How could we make a robot that had thoughts, that learned from “experience” and used what it learned the way we do.” On another note though, Dennett’s description of Ramachandran’s apparent-motion experiments does point to the usefulness of some HP experiments. In this experiment it is pointed out that one way our brain perceives motion is by comparing an object's relative position in discrete time steps, rather than following it continuously. This does explain how we perceive motion and produce our subjective responses to the stimuli, but this example is weak in the grand scheme of cognitive science because it explains a perceptual response in one of the most well-studied topics of cognition (visual perception). Furthermore, the efficacy of HP is weakened in Dennett’s description of “change blindness” experiments.

    “You saw each picture several dozen times, and eventually you saw a change that was “swift and enormous” (Dennett, 1999, Palmer, 1999) but that swift, enormous change was going on for a dozen times and more before you noticed it. Does it count as a change in color qualia?”

    Dennett then goes on to give explanations of the different answers to this experiment, but fails when he suggests that something like fMRI activity can help us decipher how we are blind to the change in “qualia.” Again, this is just a behaviourist-esque explanation that analyzes output without ever touching the causal mechanisms.

    On another note, I had one question – do proponents of HP believe that “feeling” is included in one’s subjective response to experience, and thus that there is no distinction between the hard problem and deciphering how we do the things we do? Or do they deny that feeling is a special property in and of itself?

  21. I am going to start off by stating that I found this article difficult to understand. I especially found Dennett’s method of understanding someone else’s feelings confusing.

    Dennett believes that one can understand someone else’s feelings by asking them questions about their feelings and by observing their physiological changes and reactions.

    I often undergo experiences which make me feel certain things, but I am unable to express those feelings verbally. For example, when I watch a ballerina dance a beautiful ballet piece, I am often at loss for words to describe my feelings. Even when I personally dance the same ballet piece myself, I am still left speechless because of some of the same feelings and some very different feelings (neither feeling I can really describe or explain). Sure I can describe some general feelings I have, but I cannot describe all of them, nor can I go into great detail. Furthermore, I do not think that my physiological reactions could accurately help to express those feelings since my heart rate would be increased for reasons other than the feeling I am feeling, for example, it could be increased because I am moving my body a lot.

    It seems as though Dennett is trying to make the hard problem into something else. He uses heterophenomenology to try to explain the hard problem, but I don’t think it explains it.

  22. One thing I find interesting in this article is the language they use for Team A and Team B. I’m thinking about the slide from Stevan’s powerpoint that displayed all the words that mean the same thing as feeling. The point is that we can get pretty confused about what we’re talking about if we constantly switch up the words, and Stevan is pointing out how we can avoid that. In particular, Stevan hates the word “intentionality.”
    Anyways, it seems to me like the author of the article is rooting more for Team A, and I noticed how often he changes words to describe what Stevan says is feeling. It seems like an effective tactic to make Team B look way too complicated and arm-chair-philosophical. If we go from beliefs to intentions, then switch over to something like consciousness, the whole question (how/why feeling) looks terribly ill-posed.

    One of the authors talks about the Hard Problem as some kind of “Zombic Hunch.” I think it’s pretty obvious how the author is using his choice of words carefully to portray Stevan’s Team. Of course, the ‘zombic’ part refers to the zombie that’s talked about in the Turing Test. I think it also looks like kind of a scary word. But it’s the ‘hunch’ part that really gets to the problem Team A has with the other team. Team A sees the “hunch” as this unnecessary philosophical conjecture that simply gets in the way of learning more about humans. Using the word “hunch” really makes the issue of feeling seem about as important as dating-advice given by Will Smith. If we use the word “problem,” however, I think the whole feeling thing gains a lot of clarity. It’s not philosophical guess-work, rather the Hard Problem discloses and explores the limits of research in the field of cognition. This author might as well call the Other Minds Problem “Kant’s Turd,” for all of his factual bravado.

  23. In this article, a false positive belief can be said to mean believing something is real which is not in reality. As mentioned, people believe their visual fields are detailed all the way out, but that is not the case in reality. We can consider this a type I cognitive error, and a false negative a type II error. We are capable of making these kinds of inferences about ourselves because we have this so-called 'mind' which allows us to self-reflect and draw on deeper insights within ourselves. I'm wondering whether the belief in God could be a type I cognitive error? Are we projecting this spiritual presence onto an invisible agent without knowing the truth? How does our mind generate such strong beliefs about what we do not know for certain, and why do we choose to accept it? I mean, it's pretty incredible the kinds of things we can convince ourselves of - even right now, we might just be stuck in some white room in a psychiatric ward talking to ourselves, imagining that this is in fact reality. How bizarre! How do we know? Would this appearance of reality then be considered a false negative? It's a belief about our conscious state which may not be what we suppose (we really could be mentally ill, stuck in that white room). These beliefs, as I've said, arise from the mind and seem to separate us from any other species on this planet (then again, they could be thinking creatures too who are just incapable of language). I'm just curious how it is that we've been able to collectively place our beliefs - ever so strongly - in invisible agents? That is so unique to human nature - organized religion. I found that what Daniel Dennett had to say about religion summarized my thoughts exactly:

    "Similarly, these ideas are just good for themselves. They’re good at reproducing in minds. They start out, as it were, as wild superstitions that happen just because they can. They enter through cracks in our cognitive machinery. Then, they’re around; they can be used. People begin appreciating them; people begin to use them for other purposes—and now we’re on our way to organized religion."

    Going back to an evolutionary standpoint, did we need organized religion/did it help us evolve?

    In regards to the A and B teams, I am definitely on the side of the B team. We've learned again and again in this course that it feels like something to feel. It's also becoming more evident that we are unable to reverse-engineer this feeling - it would only be a simulation of what it feels like to feel. So then Turing's proposal of creating a robot that has thoughts and learns from experience won't actually create any consciousness, and even if it did - how would we know?! We could have all the data from the HP world - the subjective beliefs/feelings/experiences that a robot has as well as the actual experience - and still not know whether it has consciousness. I don't understand how this 3rd-person scientific method is supposed to help us understand anything about consciousness. Don't we always run into the other-minds problem?

  24. There is still some block in my mind and I cannot understand why this hard problem is so hard. I don’t understand why “how and why we feel” is so different from “how and why we think”. The main difference I see is that thinking has some measurable or objectively visible output (i.e. a robot can do everything that we can do and we can see that by observing or speaking with it) whereas feeling/consciousness is entirely hidden behind the other-minds problem. Other than people telling us they feel or telling us that they are conscious, we don’t know that they are. This is why I said in class that it almost seems like the hard problem = the other-minds problem. To me, without this issue, feeling is relegated to the exact same level of difficulty as the rest of cognition. If we build a robot that can do everything we can do (this probably would require feeling, I guess, but we can leave that out for now) then theoretically we can open up its “brain” and see how it’s doing it. Reverse engineering. Now imagine a made-up world where the other-minds problem doesn’t exist because everyone who is conscious and feeling has blue skin. So we build our robot that can do everything we can do, but its skin doesn’t turn blue. So we try again, and with enough tweaks (and probably some T4 elements/chemicals) its skin turns blue and we know that it feels! Then we know that however it is thinking and feeling is probably how we think and feel – same as with the first robot, which couldn’t feel. We’ve reverse-engineered feeling. Why would this not work? I mean, personally I don’t think even reverse engineering to solve the easy problem is possible, but if we accept that it is, why would this be any different? That’d be the how; and the why would have to be evolutionary: salience. Feeling makes things more salient. It heightens rewards and shocks against pain, it helps us to remember things better and to be more interactive with our environment.
    Yet the hard problem obviously still exists, so what’s wrong with all of this, specifically? Can anyone help?

    1. You have to distinguish (1) the easy problem, (2) the other-minds problem and (3) the hard problem.

      (1) The easy problem is explaining how and why organisms can do what they can do. (T3 or T4 would be the solution.)

      (2) The other-minds problem is about how to tell whether or not something feels. Each one of us knows (along with Descartes' Cogito/Sentio) that they feel. With other humans it's not a problem. They act and look just like ourselves. No reason to assume they don't feel too. Pretty much ditto for animals (just that with clams it's a bit subtler). When it comes to human-made machines, all bets are off until they are so much like us that they can pass T3, and then, as with other people and other animals, we give them the benefit of the doubt.

      (3) The hard problem is explaining how and why organisms (or T3s or T4s) can feel. It is not about whether they feel, but how and why. And the reason it's hard is that the solution to the easy problem has already used up all the available degrees of freedom for a causal explanation (apart from normal underdetermination): it has already explained all our doings. That leaves feelings dangling, superfluously, unexplained, and with no degrees of explanatory freedom left (unless we invoke psychokinetic dualism -- feeling is an extra causal force in the universe -- for which there is not only no evidence, but all evidence is against it).

      We are left with the undeniable fact that organisms feel, but no clue as to how or why, since all their doings have been explained without having had to make the slightest reference to feeling.

    2. Hmm, sorry, but I'm still not satisfied! I definitely understand the differences between the 3 problems, although I think they're intertwined. But what exactly about the specific scenario I described above with the two robots is not possible?

    3. Hi ailish,
      I think that I agree with you on a few of your points.
      Definitely that the easy problem is not easily solvable, and that we cannot explain performance capacity only through reverse engineering. Despite understanding why introspection cannot help us reverse-engineer cognition, I still think it's a crucial part of solving the easy problem (knowing what it's like inside to be able to speak publicly is an important part of understanding how it is that certain people are good at public speaking and others are terrified, for example). So I agree that the easy problem isn't explained by reverse engineering.

      I also understand the distinction between the problems we have come back to again and again, and I also still think that the other minds problem sounds a lot like the hard problem, or at least in the way I'm understanding them. If I'm never sure anyone else has a mind (but I have to take their word for it, so I won't kick Riona or Renuka or eat a chicken and so on) I also can never be sure if I have explained how organisms feel (because I don't know if they can feel).

      I like your thought experiment a lot - I think that if there were some condition that said feeling things look like X, that'd be a great way of figuring out if something was feeling or not.

      I don't know if I agree that the why we feel must be salience, though, because there are so many other heuristics and ways that we keep salient facts in our head, why would there be something so elusive and specific as feeling? Also, if anything, feeling seems to hinder a lot of our evolutionary benefits (despite the articles from the evolutionary psych week of which I am still not convinced about "paternal jealousy etc")

      Sorry this probably didn't help much, but I think your thought experiment would solve the problem (but that doesn't really matter because it's never going to happen and we seem to be interested in practicality in this course), and it still wouldn't explain why we feel.

    4. This comment has been removed by the author.

    5. I hope that maybe I will be able to help a little bit with understanding why people do call it the hard problem. However, I totally share some of your sentiments and don't fully agree with all the opinions I'm about to post below; I just think they might be helpful in understanding where the hard problem comes from originally!

      This philosophy encyclopedia article might help at least a little bit if you read the summary of Chalmers argument (just as a warning, they do assume the P-consc/A-consc distinction):

      Basically, Chalmers says that the hard problem arises from the fact that feelings cannot be described in the transitive manner that our other cognitive behaviours can be. This is based on the plausibility of philosophical "zombies" that are identical to us behaviourally but don't actually have any qualitative, phenomenal experience (feelings). However, I know that when I first learned about philosophical zombies I didn't really find them all that plausible either.

      A thought experiment that might help you see why the hard problem is so called is the "qualia inversion argument." It is actually an argument about a specific theory of mind, but I think it's a good way to understand the distinction between cognition and feelings.

      I'll post a link that goes in depth but also will try to paraphrase the argument..

      One common example uses colours, where the "quale" is the qualitative "feeling" experience of 'blueness' or 'greenness,' which are private feelings that you can't explain.

      Basically the argument goes:

      You have two individuals, one (the nonvert) who sees the regular colour spectrum and another (the invert) that sees an inverted colour spectrum. So when the nonvert looks at a banana they see yellow but the invert sees blue, the nonvert looks at an apple and sees red but the invert sees green, etc.

      When asked, however, both will say that the banana is yellow, or that an apple is red, despite having different phenomenal experiences.

      Extending this to the idea of philosophical zombies, imagine a robot that is behaviourally equivalent to you in every way but doesn't have any phenomenal experience. Yet when you kick it in the shin it shouts and grabs its shin, when you tickle it, it giggles and squirms, and when you show it a banana it says that it is yellow.

      Basically, feelings (according to Chalmers; I myself am unsure on this subject) seem to be superfluous in the causally explainable chain of things.

      However, Dennett is a known opponent of the hard problem (as we know from this paper) and of things like qualia, so perhaps you are just more on his side in terms of the meta-hard problem, which asks if there even is a hard problem to begin with. I'm curious to know what criteria you would use for knowing how to make these feeling robots' skin turn blue, though?

    6. Hi Kaitlin

      That's a really interesting analogy. For me, though, it's still based around the other-minds problem. In my example, the blue skin is not something I would envision being built into the robot; it's just something that naturally occurs in this "fake world". It's just a way of saying: if we get rid of the other-minds problem, the hard problem (or at least the how part) seems solvable to me. That's why I equated the other-minds problem with the hard problem. I don't believe there's ever a way to get around the other-minds problem. But Harnad said in class that the other-minds problem was not equal to the hard problem, and if so, then there should be something wrong with my thought experiment. In other words, even if we get rid of the other-minds problem, there should still be something preventing us from discovering "how we feel", and I can't see that there is. So that was the only purpose of the blue skin. I do believe that the hard problem cannot be solved, but for the reasons that 1) the other-minds problem is insurmountable, or 2) reverse engineering cognition itself isn't going to work (which also negates the easy problem).

  25. While reading this paper, I also couldn’t help but think of Searle’s Chinese Room Argument. I mean, Dennett is essentially completely ignoring/disregarding the “feeling” part of us; he thinks our brains are solely some mechanism that generates doing, and doesn’t explain anything else. So what, our minds are composed of “feelingless” parts? Does he completely deny that we feel at all?

    I mean he says: “We are robots made of robots –we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent” (p.1).

    So he thinks we’re composed of “mindless” parts – but in the end we feel?!
    I may be really off, but this is starting to sound like the CRA! And from the looks of it, Dennett probably endorses the Systems Reply.

    But Searle showed that by internalizing the rules he became the system, yet still didn’t understand Chinese. I think Searle knocked out several points with one stone – that is, he showed cognition can’t all be computation, but by also answering the Systems Reply, didn’t he show that the mind isn’t made up of “mindless parts”?!
    I feel as if Dennett is saying: “Yeah, the mind is made up of ‘mindless parts’ but in the end, when they all come together, something happens and we’re conscious beings”. How does this make sense!?

    Perhaps, I am way off and have lost/confused myself … that also tends to happen.

    1. The "mind" -- i.e. cerebral processes that generate our capacity not only to do what we can do, but also to feel -- is indeed made up of feelingless parts: molecules, genes, cells, organs, neurons, brain. And it does indeed generate our capacity not only to do what we can do, but also to feel. The trouble is that explaining the doing part is "easy," whereas explaining the feeling part is hard.

      No, Searle's Chinese Room Argument only works against a "System Reply" that tries to rescue computationalism. Normally the other-minds problem prevents you from being able to say for sure whether or not anything at all (whether a stone or another human being) other than yourself feels. But because computation is implementation-independent, it allowed Searle to show it for computation ("Searle's Periscope"). If instead of the hypothesis that cognition is just computation, the hypothesis is that cognition is any other kind of physical (dynamical) system, we'd be back with the other-minds problem, unable to know one way or the other whether it feels. And that's where Turing comes in, saying that once you've solved the easy problem with T3 or T4, there is no point worrying about the other-minds problem, because it is insurmountable; ignore it. (But it's not the same as the hard problem, which Turing also suggested ignoring.)

  26. “Heterophenomenology is nothing but good old 3rd person scientific method applied to the particular phenomena of human (and animal) consciousness” (p. 3).

    Actually, it seems that heterophenomenology (I think) is predicting and explaining all doings, capacities and mental states from correlations; however, predicting what and when someone will/does feel something isn’t causally explaining why or how we feel. If I understood this correctly, it seems that heterophenomenology includes all of the Turing hierarchy (T2, T3, T4, T5) as well as introspection – so it can predict and explain all of our “doings” but it still cannot explain feeling, or how or why it’s there.

  27. One sentence in Harnad’s 'Animal pain and human pleasure: ethical dilemmas outside the classroom' really stuck out: "In an insensate world there would still be natural laws (laws of motion, gravity, electromagnetism) but no such thing as morality, or laws of conduct, or right or wrong, because if nothing feels, nothing matters.”

    The ethical treatment of animals is decided based on an animal’s ability to feel. It seems bizarre that natural laws, like the laws of motion and gravity, are respected, yet laws of conduct are not. It does not make logical sense for it to be considered wrong to mistreat humans by sending them to a slaughterhouse but not wrong to do the same thing to animals.

    This article really opened my eyes, thanks for posting it!

  28. This comment has been removed by the author.

  29. I must say, this was a confusing article to follow. Many of the concepts were most definitely not put kid-sibly. I will try to put two concepts kid-sibly and ask some clarifying questions.

    First, heterophenomenology is the idea that we can predict behavior/feeling, but it does not solve anything. In other words, we can attempt to predict/explain what everyone else is feeling, but there’s still no explanation of how. It’s a third-person scientific approach. Those who believe in this think we can mind-read to a certain extent and find a description of what is going on in the subjective experiences of others. But how do we know that the explanation we come up with is correct? Those who think Cog Sci is heterophenomenology think it’s stronger than T3, that it is like T4. The problem is that heterophenomenology is just weather forecasting. It’s not a solution and not an explanation of the “hard” problem. It still does not say WHY. This is the point that everyone is missing!

    Second, I will explain Chalmers’s Zombie Twin. A zombie is an entity physically identical to a normal human being. The only difference is that it lacks conscious experience; thus the ability to feel is absent, and it has no experience, qualia or sentience. Zombies behave the same way as humans but do not feel anything. For example, a zombie could put its hand on something hot and not feel anything at all. But it would still move its hand away, since it acts the same way as humans. The zombie is used as a way to pose the “hard problem” of consciousness. In other words, “why aren’t we zombies?” Or “why did evolution bother to produce us if zombies, who do not feel, would have survived and reproduced just as well?” You can probably see from these questions that the idea of the Zombie is to get at the answer to the hard problem: “WHY do we feel?” Chalmers and his zombie twin are heterophenomenological twins.

    I seem to understand the concept of the Zombie Twin, but I don’t fully understand its importance in the heterophenomenology argument, or which side Chalmers takes and which side Dennett takes. Stevan, could you help simplify this for me?

    1. Hi Jordana,
      Chalmers is in the opposite camp from Dennett. The way I understood it, his Zombie Twin example is his way of countering what Dennett says. Dennett says that heterophenomenology is enough to explain everything about human experience, consciousness, mental states etc… While Chalmers believes that all this still isn’t enough to account for conscious experience, for what is going on in the mind. That is, he took Dennett’s heterophenomenological observations and arguments and put them into a humanoid creature. But even then, his argument is that it would still only be a zombie, with absolutely no feelings or consciousness; it would only pass as human. Much like Turing’s T4. And as we’ve discussed in class, passing T4 (or T3) simply isn’t enough to say that there is feeling going on. It’s not because it acts like a thinking, feeling person that it is indeed thinking and feeling. So Chalmers’s argument is a counter-argument to Dennett; he’s showing that Dennett still isn’t solving the hard problem of consciousness.

  30. “"that conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer." (Levine, 1994)
    This is an appealing idea, but it is simply a mistake. First of all, remember that heterophenomenology gives you much more data than just a subject’s verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data. But what about this concern with leaving the “conscious experiences themselves” out of the primary data? Defenders of the first-person point of view are not entitled to this complaint against heterophenomenology, since by their own lights, they should prefer heterophenomenology’s treatment of the primary data to any other. Why? Because it does justice to both possible sources of non-overlap. On the one hand, if some of your conscious experiences occur unbeknownst to you (if they are experiences about which you have no beliefs, and hence can make no "verbal judgments"), then they are just as inaccessible to your first-person point of view as they are to heterophenomenology. Ex hypothesi, you don't even suspect you have them--if you did, you could verbally express those suspicions. So heterophenomenology's list of primary data doesn't leave out any conscious experiences you know of, or even have any first-person inklings about. On the other hand, unless you claim not just reliability but outright infallibility, you should admit that some--just some--of your beliefs (or verbal judgments) about your conscious experiences might be wrong. In all such cases, however rare they are, what has to be explained by theory is not the conscious experience, but your belief in it (or your sincere verbal judgment, etc). So heterophenomenology doesn't include any spurious "primary data" either, but plays it safe in a way you should approve.”

    Ironically, my first reaction to what Dennett says is the “Zombic Hunch” that Chalmers describes: it feels intuitively wrong. However, I will still attempt to explain why this may be. Although Dennett explains that heterophenomenology can account for every single bodily physical reaction, there is still no way it can account for the first-person point of view, for the FEELING. There is no way to be sure that a blush or a frown can be linked to one emotional state rather than another. Would someone frown if they’re angry? Or if they don’t understand something? Or if they’re blinded by the sun? It’s true that heterophenomenology (though unrealistic in terms of collecting data in such detail) might be the best tool we have so far for trying to get a representation of feeling. It is nonetheless not a reason to take it as the absolute truth, or to give it more credit than it can provide. In the false-positive example of peripheral vision, I disagree that most people think they can see as well in their periphery as in their foveal area. Of course, I can only use my personal experience as an example, but it’s always been the case that when I need to observe something I’ll turn my head and orient my eyes in such a way that I’ll be looking straight at the thing in question. I (and other people) wouldn’t need to do this if my peripheral vision alone were enough. And Dennett’s example of the question not to ask, “How come, since people’s visual fields are detailed all the way out, they can’t identify things parafoveally?”, has the answer in itself, namely that people can’t identify things parafoveally BECAUSE their visual fields aren’t detailed all the way out. Therefore it’s highly unlikely that someone would ask such a question.

    1. Continuation:
      Furthermore, in his false-negative example of people being unaware of psychological things happening in them, there are two points to bring up: first of all, it still “feels like something to not know”, which heterophenomenology cannot account for. Also, if heterophenomenology is able to explain these “unconscious” experiences, then it does not accurately represent what is going on in the mind, as the mind itself is unable to explain them. Therefore, even though it is useful in supplying more information, that information is unavailable to the mind and is thus an inaccurate representation of what is going on in conscious experience. In other words, heterophenomenology doesn’t account for the feeling of not knowing, and it explains events which are not part of conscious experience.
      Extra point: heterophenomenology doesn’t take beliefs into account, but regardless of whether they have a basis or are rational, they still are feelings and conscious experiences. These aren’t accounted for.

  31. "We move, that is, from raw data to interpreted data: a catalogue of the subjects’ convictions, beliefs, attitudes, emotional reactions, . . . (together with much detail regarding the circumstances in which these intentional states are situated), but then we adopt a special move, which distinguishes heterophenomenology from the normal interpersonal stance: the subjects’ beliefs (etc.) are all bracketed for neutrality."

    I'm having a hard time understanding how you can properly interpret a subject's convictions, beliefs, attitudes, emotional reactions, etc. unless you yourself are that person. The whole idea of subjective experiences is that they are experienced differently by different subjects and that there is no way to objectively quantify them. Here they 'resolve/avoid' this issue by saying that in heterophenomenology "the subjects' beliefs are all bracketed for neutrality." To translate that into simpler terms, they are saying that under heterophenomenology subjective beliefs are treated as neutral, i.e. void of decided views, expression, or strong feeling. To me that feels like a complete contradiction of the essence of subjective beliefs. But let's see what their arguments are. I'm going to try to break them down and make sense of them.

    Heterophenomenology's justification for this rests on 2 'failures of overlap', labeled false positives and false negatives.
    False positive states that "some beliefs that subjects have about their own conscious states are provably false, and hence what needs explanation in these cases is the etiology of the false belief." Etiology meaning cause, origin. So they are saying the question is 'why do people think they have a certain ability/state (ex. having detailed/accurate peripheral vision)' rather than 'people do have a certain ability/state, so why are they unable to do the things they should be able to do given that they have this ability'?
    Fair enough. But how is this an argument for heterophenomenology?
    I suppose they are saying that since the question asks why a person believes they have certain abilities, it removes the subjectivity in a sense, because it does not presume that the person actually has these abilities. If they were to ask the second question, they would be making inferences about the subject's mental capacities by assuming the subject does in fact have the ability. The first question, by contrast, doesn't assume the subject either possesses or lacks certain abilities. It simply asks why the subject thinks they have them.

    The false negatives are "some psychological things that happen in people (to put it crudely but neutrally) [that] are unsuspected by those people". People are entirely unaware of certain things that happen to them and can offer no information on them. So we have two kinds of case, false positives and false negatives, in which the subject's beliefs and what actually happens fail to line up ('failures of overlap'). Again, with false negatives as with false positives, heterophenomenology asks WHY we think (or don't think) we have certain abilities or have experienced certain things. And this allows a third-person perspective.

    In order to allow this, heterophenomenology insists the data should not include the experiences themselves, just subjects' beliefs about their experiences. Many camps argue that this is wrong, but based on the arguments of the article I tend to agree with it. If we focus on the why, and not on the truth of the beliefs' content, we can attain a more consistent and scientific approach.

  32. Dennett asserts that HP is "the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science," which he claims bypasses the Hard Problem. As a behaviourist, Dennett is in effect saying that because feelings are not necessarily visible and do not necessarily show up in behaviour, they don't matter.

    The 2 things I found interesting about the article are
    1) Dennett claims that "heterophenomenology is nothing new; it is nothing other than the method that has been used by psychophysicists, cognitive psychologists, clinical neuropsychologists, and just about everybody who has ever purported to study human consciousness in a serious, scientific way." and that "First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy." His argument against the Hard Problem seems to be that since introspection is unreliable and gives the subject too much authority, and since we haven't found a way to address the problem scientifically, the Hard Problem must be moot.

    2) the process of heterophenomenology rests on the subject reporting what they perceive/feel/experience, AND on the objective, empirically recorded raw data. This raw data includes the "manifestations of belief, conviction, expectation, fear, loathing, etc". The most important part, though, is that "the burden of heterophenomenology is to explain, in the end, every pattern discoverable in the heterophenomenological worlds of subjects; it is precisely these patterns that make these phenomena striking". First, the subject would be reporting on what they feel they are feeling/experiencing, not on what they ~are~ feeling/experiencing, so how does that discrepancy register in the countless HP patterns? Second, if two people are actually feeling/experiencing the same thing, yet their physical manifestations of belief, conviction, expectation, etc. differ, such that their HP reports are completely different, then to what extent do the differences in patterns reflect differences in consciousness between person 1 and person 2?

  33. I am SO glad we are reading Dennett. I went to a talk of his last year and you’ll never believe it, but I sat next to him for a full 15 min and asked him about his favourite books and alcoholic beverage (I can’t really remember what he responded). It was such an honour to be next to him for even that short amount of time!!! Anyway, he’s great.

    I read a book, a compilation on Consciousness, that interviewed different philosophers/neuroscientists/cognitive scientists/etc about their views on consciousness. One of them, whose name I forget right now, was of what Dennett calls the “B” team. It was funny, but he was convinced that people like Dennett (of the “A” team) are clearly not mentally the same as the B-teamers, since they just don’t get why the Hard problem is so hard! He thought we should do a psychological experiment on A-teamers to see why they are incapable of understanding this fact and see what makes them different from us. I thought it was hilarious.

    I really do think Dennett misses the point here. Yes, people’s experiences can be wrong – but they are not wrong insofar as they are their experiences. That’s it. It’s not super-hard to understand, which is why I agree with (Richard Swinburne? I think it was him) the guy who thinks we should test these folks and see why they just don’t get it.

    Dennett says that we have “beliefs” about “conscious experiences”. No. Beliefs can be true or false. Experiences are experiences. They are not about truth judgments. We have experiences, we don’t have beliefs about them. I mean, we also have beliefs about them, but the beliefs are based on experiences.

    In terms of qualia, they are experiences. They change when we notice them change. If we don’t notice them change, then they aren’t changing. This is a matter of attention, though; I think that’s what it means. Subconscious experiences are not really experiences, in my opinion, since we only know about them through inference or belief or something of that sort, not through the sort of raw experience that conscious experience is.

  34. What a fascinating read. It's interesting to read about a viewpoint that seems so far-removed from what we've learned in this class. Here it seems that 'feeling' is de-emphasized since heterophenomenology (HP) is the answer to all our problems.

    So if I'm getting this right, in short HP is done by recording a person's account of their feelings at a given time, looking at the neural correlates of those feelings and calling it a day.

    I'll admit that at first I was suckered into this explanation because it seemed clean and easy, but I suppose it doesn't explain the causal how or why of feeling as Prof. Harnad repeatedly stated in his reply.

    If I can get a little meta, I think it's exceedingly interesting to read a series of correspondences between a number of people who are VERY passionate about this question (made evident by the amount of us vs. them talk outlined in the first few paragraphs of this paper).
    Everyone seems to feel rather strongly, but everyone also seems to be talking past one another. Just reading the paper for 10b. made me feel like Dennett and Harnad were discussing two entirely different topics (how & why how & why how & why)! With a question so old, it makes me wonder how we haven't agreed on any sort of common ground on which to discuss the hard problem.

    To be harsh for a moment, it seems that there are two fully entrenched groups who are barely discussing the same thing! One seems concerned with neural correlates of descriptions of feeling, claiming that that's sufficient, and the other is philosophizing about whether it is possible for two functionally identical individuals to exist, where one is feeling and the other is not. Prof Harnad, it would seem, is calling out for the first group to get back to basics and give some causal mechanisms for all the correlates they have to show.

    I guess I just find it surprising that such an important question appears to be so ill-defined in the minds of people who have so much to say about it. I mean, if they're not answering the question of how and why organisms feel, what question are they answering?!

    Then again, I'm probably biased in favour of the way we were taught the hard problem, so maybe there is some as yet unseen positive to Dennett's point of view. I'd love to see a reply to Prof Harnad's article addressing the question of how and why directly, and I imagine this field could benefit greatly from everyone sitting down for some coffee and agreeing on what they're talking about.

  35. The idea that there is a kind of belief that is “off-limits to zombies but not to us conscious folks” is an assumption that is not based on any solid ground, other than a bit of human egocentrism. Why wouldn’t the zombie have consciousness?

    For the sake of argument, let’s bring together the two hypotheticals of Chalmers’s zombie and Turing’s robot: a thing that has thoughts, learns from experience, uses what it learns the way we use what we learn, and is identical to Chalmers molecule for molecule. In other words, a T4 robot. Based on Chalmers’s definition of his zombie twin, if the zombie is functionally identical to him, and if it meets Kant’s conditions for the possibility of experience to the level of Turing’s robot, how does Chalmers know that this functioning will not be accompanied by any real conscious experience?

    It seems that this is really the other-minds problem again: I don’t know that you feel, in the same way that I don’t know whether zombie Chalmers or the robot feels. Why should I be convinced that zombie Chalmers can’t feel but that you can? What does “direct evidence” mean? That he knows that he feels because he can feel? If this is what is meant by direct evidence, it seems like a resurgence of introspection, and it further suggests that a reductionist method of determining consciousness is a road with no future.

    For the above reasons I support Dennett, if only because, if a third-person scientific method cannot be applied to the phenomenon of consciousness on the grounds that it doesn’t address the why and how of feeling, then cognitive science is reduced to unanswerable philosophical questions. The brain is too complex for some ivory-tower, all-encompassing theory to be the answer. Scientific methodology is necessary to make progress in the field; the advancement of neuroscience, technology and computer science is arguably what has driven progress in the past. Heterophenomenology is at least useful, instead of fixating and stagnating on a question that cannot be answered, and that would serve no purpose even if it were answered.

  36. Thank you Jordana for explaining heterophenomenology in a kid-sib way. The article was really hard to follow but this makes things a lot easier:

    “First, heterophenomenology is the idea that we can predict behavior/feeling but it does not solve anything. In other words, we can attempt to predict/explain what everyone else is feeling but there’s still no explanation of how. It’s a third person scientific approach.”

    From the looks of it, reading about heterophenomenology brings us back to the hard problem of cognitive science: how and why we feel.

    Coming back to the Team A and B:

    Team A essentially predicts feeling. Assuming they are right, we can call this heterophenomenology. That’s all there is to say, but it provides no explanation at all and hence fails to acknowledge the hard problem of cognitive science. This seems like the behaviourist approach, which ignores everything inside the “black box”.

    On the other hand, Team B states that there is a hard problem, and I side with this team. I do believe that humans feel, for no particular reason but purely out of intuition.

    The optional reading illustrates this very well by asking the question: “Would you kick the robot in your class?”. I wouldn’t. Humans have the capacity to be sympathetic/empathetic and feel for others. Just purely because I have the capacity to feel for others, I believe that there is a hard problem.

  37. This reading took a very interesting approach to the concept of consciousness/feeling. Throughout this transcript, one question remained unanswered for me: Dennett argues for heterophenomenology, which relies primarily on verbal evidence from its subjects, taken together with behavioral reactions, visceral reactions, hormonal reactions, etc., and seems to deny feelings; however, this view undermines itself. Coming back to the symbol grounding problem, proposed by Harnad: how can words describe feelings if they are not grounded in feelings?