Saturday 2 January 2016

(10a. Comment Overflow) (50+)

6 comments:

  1. This comment has been removed by the author.

    Replies
    1. “‘that conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer’ (Levine, 1994). This is an appealing idea, but it is simply a mistake. First of all, remember that heterophenomenology gives you much more data than just a subject’s verbal judgments; every blush, hesitation, and frown, as well as all the covert, internal reactions and activities that can be detected, are included in our primary data.”

      This may be true, but I think reducing consciousness to these overt and measurable phenomena is a vacuous oversimplification. Taking something like blushing or an increased heart rate as a measurement of internal states seems like a mistake. These aren’t measurements of the internal states (i.e., the feelings) themselves, but of physical phenomena in the body that are merely correlated with feeling.

      “you don't even suspect you have [unconscious experiences]--if you did, you could verbally express those suspicions. So heterophenomenology's list of primary data doesn't leave out any conscious experiences you know of, or even have any first-person inklings about.”

      Two things here: 1) “conscious experiences [that] occur unbeknownst to you” is really an oxymoron. How can you have a conscious experience that you don’t experience? 2) The “list of primary data” above leaves out everything that actually matters: the feelings themselves.

      Isn’t this the same problem of introspection from the easy problem, applied to the hard one? Even if the verbal expressions of feeling are gathered from the third person, verbally expressing experiences doesn’t explain why or how you feel those experiences in the first place!

      Another thing I started thinking about while reading this is the idea of reverse engineering as an explanation. There are plenty of things we can’t reverse-engineer (e.g. a black hole), yet we still have theories that we accept as explanations of those physical phenomena. Stevan mentioned in class yesterday that the reason we can’t apply this to consciousness is that there’s no fifth force of “feeling.” I didn’t fully grasp the explanation at the time; anyone care to elaborate?

  2. This is the point at which I get frustrated and throw my hands up. I recognize what heterophenomenology is trying to do: provide us with data and create a link between experience and third-party observation. But I also get very nervous at the thought of codifying introspection to match it with spurious third-person data. Nevertheless, it is an attempt to use current tools to address the problem of understanding why and how we think.

    Is the question “are people faithful reporters?”, or is it that “there is more in the head than meets the eye, and in such a way that we cannot ‘subtract differences’ by using measured behaviours”? Or, to rephrase: what is the difference between conscious experiences and our verbal reports of them? As we saw in the language section, the language of thought cannot be similar to our natural human social languages, so there is a real gap between what goes on in the head and what we can report. Dennett answers this with: “on the one hand, if some of your conscious experiences occur unbeknownst to you (if they are experiences about which you have no beliefs and hence can make no ‘verbal judgements’) then they are just as inaccessible to your first-person point of view as they are to heterophenomenology… what has to be explained by theory is not the conscious experience, but your belief in it.”

    There are a few things here. Are we all that we express? In some senses yes, if we consider that the self is largely a social construct and that we judge consciousness by feeling. We really can’t know any better than to say that we are all that we express, so we may as well work from that definition. I personally favour this, but we may very well also come into a new paradigm that gives us another way to get insight into feeling. I can’t imagine it, but maybe it is there.

    Secondly, Dennett is taking a much more pragmatic view. If, for every person, we can offer an explanation of why they are currently consciously thinking as they are thinking and behaving as they are behaving, then we have done perhaps all the important stuff. Is there more that matters after that? I am convinced of the insolubility of the hard problem using today’s conceptual frameworks, but not yet entirely convinced that it isn’t a circular question posed so as to be insoluble. It seems dangerously close to a question such as “why is there something rather than nothing?”, which in my opinion is useless to ask because it gets us nowhere.

  3. While reading Dennett’s paper I could not decide if I was part of the A team or the B team. I feel I am part of the A team because I agree with Dennett: Turing has framed the consciousness problems philosophers have been struggling with for so long in a tangible, third-person scientific way that can provide real answers that first-person science can’t. To me, first-person science purely comes up with the question. Dennett is right that it is a fantasy to expect first-person science to produce answers out of thin air; they’ve tried, and it’s not enough. That is not to say, however, that first-person science is not important; as I mentioned before, we would not have these questions without it. So the bigger question becomes: are third-person science and engineering solutions, through such methods as the Turing test, enough? Enough to answer the hard problem? This is where I start batting for team B. Although I agree with team A that this is probably our best shot at answering these hard questions, it may not be enough. It is difficult to imagine that a robot will be able to explain how and why we feel (the hard problem), but with introspection ruled out and philosophy only producing more questions, it seems like the only shot we have.

  4. I am definitely on team A. I like heterophenomenology, or at least I like what I think it is.
    My first observation is that Dennett seems to start off making a burden-of-proof argument: how do you know that you’re conscious any more than a zombie? I think this is somewhat compelling, and I agree that the Zombic Hunch is just that — a hunch.
    Though I have never read Chalmers’ paper on the Zombie thought experiment, I think I agree with Dennett in a couple of ways. It is a non-starter to me that the Zombie can have awareness of its internal states without in some sense being conscious. How would this play out? Rather than feeling hunger as an emptiness in the gut, the Zombie is able to read a metabolic meter from its dashboard to judge its caloric needs? Surely the experience of coming to the realization of a calorie deficit is equivalent to feeling a tummy rumble. Who cares where and how the experience is brought to the Zombie’s “attention,” if I’m even allowed to say that.
    This echoes back to the question Professor Harnad raised in class. He asked how many of us are unfazed by the insolubility of the Hard Problem. I am unfazed. If it’s not knowable, is it worth talking about? If the Zombie seems conscious and reports consciousness, who am I to say what its experience is or is not?

  5. I reject the idea that Dennett doesn’t believe there is a hard problem of explaining why other organisms feel rather than just do. He is simply pointing out how restrictive such a problem is, to the point where no progress can be made on it.

    The A team is expressing that if another thing (whether biological or engineered) had thoughts, learned from experience, and used what it learned the way we use what we learn, that thing could be conscious as readily as the person sitting next to me. The mere fact that the biological system is made up of cells does not necessarily afford it any additional property over a system that is equally functional but engineered through a different mechanism. A cell by itself is no more conscious than a computer chip. On a cellular basis, I support the notion that feeling is an epiphenomenon arising from the collective action of these unfeeling units.

    The B team is suggesting that this reasoning cannot hold, because that thing would not have experience (or feeling, as "experience" is corrected to in Harnad's paper "The Mind/Body Problem is the Feeling/Function Problem"). Heterophenomenology, which uses physical data to explain consciousness, could still provide insight into why we feel and what the purpose of feeling is. I think that with a theory that is at least testable, progress can be made, and I respect the efforts to move the conversation in that direction.

    Heterophenomenology includes everything that we do, whether we are aware of it or not: it records our beliefs while recognizing that some beliefs may feel true but are actually false, and that other beliefs we may feel we do not have can nevertheless be elicited as present outside of our explicit consciousness. Dennett clarifies that this goes beyond verbal recording, to “all behavioral reactions, visceral reactions, hormonal reactions, and any physically detectable state.”
