Saturday 2 January 2016

(11b. Comment Overflow) (50+)

4 comments:

  1. I understand that you dislike extended cognition, and that’s fine. I don’t fully understand the structure of the paper or the arguments contained therein. It reads as a lot of snippets and tidbits that we’ve read/heard in your other papers and in class, but I don’t see the logical connections between them. I wish I had more to say or anything interesting at all… Sorry.

  2. Harnad and Dror's paper on distributed cognition really helped me understand, especially with regard to the previous article, that we are not referring to the notebook or any kind of technology or external thing as a cognizing system in and of itself... nor are we explicitly assigning cognition to that particular notebook. In the section titled "Cognitive Technology", I now understand how Otto's notebook can be seen as just a piece of cognitive technology, like any other peripheral device in this world (as I mentioned in my 11a commentary), and how such devices can substitute for one another. The argument favouring distributed cognition then becomes a little clearer. We are not extending our mind (which is complicated enough to define, is it not?) to external properties of this world, but rather saying that the technologies that exist today share, to varying degrees, a role in our cognitive states. This holds if and only if we allow the idea of a cognitive state to actually stretch beyond the physical restriction of our body and skull!

  3. cont'd...

    This has got to be one of my favourite readings from the entire course. One thing I have always been passionate about is the use of brain-computer interfaces (BCIs) to enhance or restore our cognitive capacities. It seems as though this type of cognitive technology was (although not explicitly) brought up in certain sections of this paper.

    In the section "Cognitive Technology: Tools R Us?" the role of internal and external cognitive technologies is discussed in relation to distributed cognition. We can note that it makes no difference whether, let's say, a calculator is used externally (always accessible to you) in your daily life or whether it is some kind of chip in your brain that lets you make the calculations (always internal to you): this piece of cognitive technology is at the very least aiding us in our cognition. The question of whether the physical calculator is part of the distributed cognitive state is a grey area, because we do not normally assign cognition to inanimate objects, or even to some living objects for that matter. But as Harnad writes, why can't we consider it so? Is it just because it is not inside the cognizing human body? Again, we are not extending our cognition to say whether something is cognizing or not, but rather emphasizing its role in helping us cognize, whether that be to enhance our abilities or to substitute for inability!

    This brings me to the next section of interest, "Neural vs Google Storage and Retrieval". I absolutely love the point about retrieving the name of the poet: even if you retrieve this fact in an unconscious retrieval state the next morning, why should that be any different from recalling it through conscious experience? You recalled it, didn't you? Here we can transition into the whole discussion about implementing BCIs in humans... I think this is where the fun begins! In our world of endless tech and innovation, the potential to improve quality of life through neural implants is growing rapidly. We can replace lost neural function (for individuals with neurological diseases, sensory impairment, or physical injuries) through neural prosthetics, as well as develop BCIs to tweak our mental capacities for the better. So what if we really can enhance cognitive function, induce thought communication and manipulate emotion? Where do we draw the line for what is actually contributing to this "distributed cognition" and what is not? Will we ever see these neural implants as carrying more of the weight of the central cognizer than our own natural parts currently do? Of course, augmenting sensory capacities for whatever purpose is an ethical grey area, but we can argue the benefits of something like a thought-controlled wheelchair or a device that translates thought to speech for individuals suffering from ALS, etc.

    Replies
    1. cont'd...

      The field of BCIs gets me thinking about how we can add "extra" functions to our existing cognition, and what it means to control our current cognition. Is decoding brain activity into human intention a viable way to glimpse the inner workings of our minds? Yes, we've spent quite a bit of time saying that simple brain imaging and studying network connectivity and whatnot will not bring us closer to explaining how we do what we do, let alone provide us with an answer to the hard problem. But what if we are able to directly harness neural activity (as we see with BCIs) to produce existing cognitive capacities (ones we already have) or even new ones (capacities we do not yet have, like connecting our brain to the internet)?

      It is certainly a dangerous road to attempt to improve the human condition beyond normal human capacities, but it seems to me that this method of "probing from within" (not just measuring from scans or observation) is a tempting option for getting a better glimpse of how we are able to do what we do now! Would it not be a mix of internal reverse and forward engineering to try to answer this problem? One can argue that the easy problem has the potential to be answered here, but the hard problem remains... unless we somehow induce feeling in a non-feeling being (but how would we know it was a non-feeling being in the first place?).
