Saturday 2 January 2016

(3b. Comment Overflow) (50+)

5 comments:

  1. "This last is not an arbitrary speculation, but a revised notion of understanding. Searle really has no defense against it, because, as we shall see (although he does not explicitly admit it), the force of his CRA depends completely on understanding's being a CONSCIOUS mental state, one whose presence or absence one can consciously (and hence truthfully) ascertain and attest to (Searle's Periscope). But Searle also needs no defense against this revised notion of understanding, for it only makes sense to speak of unconscious MENTAL states (if it makes sense at all) in an otherwise conscious entity. (Searle was edging toward this position ten years later in 1990a.)"

    This was interesting to me because it directly draws the link (which Searle omitted) between consciousness and understanding. This paper helped me understand Searle's CRA as an argument against a conjunction. So, if I understand correctly, Searle is arguing against the TT at the T2 and T3 levels as well (since the robot could have "blindsight," in the sense of responding to all sensorimotor stimuli without ever experiencing them). However, he is advocating for T4, since:
    a) Mental states are not implementation-independent (since you need exactly the stuff the brain is made of, or something relevantly similar)
    b) The stuff needs to be similar to the brain in a causal sense

    I don't think I really understand what a "machine" is, though: in what sense does Searle say the brain is a machine?

  2. In the section introducing implementation-independence, the hardware/software distinction is used to clarify two of the propositions of Strong AI: “mental states are computational states” and “computational states are implementation-independent” (I will not touch on the third). I accept the explanation of why these are not contradictory: a dynamical system is required to implement the mental state, but the physical details of that system are not relevant. I would like to seek clarification on how this relates to the human cognitive mind/neuronal brain system. On the one hand, one can look at cases where the physical state affects the mental state. For example, for those with fronto-temporal tumors affecting personality, irritability, and speech, is the computational ability not affected by the hardware malfunctioning? This would suggest that hardware does affect software.
    On the other hand, what could be being suggested is that very similar mental states occur across different humans who arguably have different hardware: every human has a brain, but all brains differ. I am unsure how to reconcile these two positions when relating the software/hardware separation to human cognitive processes (I try to make the implementation-independence claim concrete in the sketch at the end of this comment).

    It is also unclear to me what is being suggested when implementational details are said to matter, such that the computer has the “right” kind of implementation whereas Searle’s is the “wrong” kind. Is this referring to passing T2: since Searle, who would be the “dynamical system” in the CRA, does not understand Chinese, does it follow that the computer cannot pass T2?
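
    Here is a minimal sketch of the implementation-independence claim (Python chosen purely for illustration; the example and all names in it are hypothetical, not from the paper): one and the same program, hence the same sequence of computational states, realized by two physically different substrates.

    ```python
    # A toy "program": repeatedly apply a successor operation to a counter.
    # Implementation-independence says the computation is the same no matter
    # what substrate (here: data representation) realizes it.

    def run_counter(steps, zero, successor, readout):
        """The program proper: its states are defined only functionally."""
        state = zero
        for _ in range(steps):
            state = successor(state)
        return readout(state)

    # Substrate A: states realized as integers.
    as_int = run_counter(5, zero=0, successor=lambda s: s + 1,
                         readout=lambda s: s)

    # Substrate B: states realized as lists of tokens (unary "hardware").
    as_tokens = run_counter(5, zero=[], successor=lambda s: s + ["x"],
                            readout=len)

    # Same computation, physically different realizations.
    assert as_int == as_tokens == 5
    ```

    On this reading, the tumor case looks less like software depending on hardware and more like a damaged substrate failing to implement the program at all, which is compatible with implementation-independence rather than a counterexample to it.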

  3. “The Turing Test is decisive.”: Which level? Harnad restructures this third tenet of computationalism, completing it as “There is no stronger empirical test for the presence of mental states than Turing-Indistinguishability; hence the Turing Test is the decisive test for a computationalist theory of mental states.” To truly complete this, since the Turing Test is the only test available at this time, I feel we should clarify which level of the Turing Test must be passed in order to confirm or deny the presence of cognition. T3 should be specified as the level to pass. However, as Harnad points out, T2 is the level Searle is disputing and basing his Chinese Room Argument on. I feel T2 can already be brushed aside as not thinking by asking any question about sensorimotor experience, so Searle should be arguing at the level of T3.
    System Reply: As I understand it, Harnad is saying that the executing program could be part of the understanding, and that this executing program could be in our heads as well, without our awareness. This is how the idea of multiple personalities comes up: we are not aware of the executing program, yet it is there, just as those with multiple personalities may be unaware of their other personalities. The basis of multiple personality disorder is trauma, however, and it seems rather far-fetched to liken all understanding to a traumatic disorder.
    Conscious vs. unconscious understanding: Unlike Harnad, I feel Searle does partially address this by comparing English, a language he understands, with Chinese, a language he does not. However, I agree it is not addressed in a decisive way. In this comparison Searle knows there is a difference even if the output for both is the same; therefore understanding must be at least partially conscious. As Harnad alludes to, this brings the idea of sleep-talking and sleep-walking to the forefront of my mind. Although people who are sleep-talking or sleep-walking are, for all intents and purposes, unconscious, they can understand and even respond to stimuli around them. This makes me wonder about the role of memory in understanding. So maybe understanding can be both conscious and unconscious…

  4. I like the way the logic of Searle’s argument was made explicit. It made it much easier to understand why Searle is so obviously correct about much of it, but it also made it painfully obvious that he overshot his conclusions by a lot. Any first-year undergrad in a logic course could tell you that an argument by contradiction doesn’t hold water if there is more than one assumption; or at least, you can’t know which of the assumptions is the problematic one (see the sketch below). It seems to me that a good candidate for suspicion is the T2 clause, though you could likely rig up a similar thought experiment based on T3. But at the end of the day Searle demonstrates not that cognition is in no way computation, but rather that not all of cognition is computation.
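
    To make that logical point explicit (this is standard propositional logic, not specific to the paper): a reductio with several premises refutes only their conjunction. If

    $$P_1,\ P_2,\ P_3 \vdash \bot,$$

    then all we may conclude is

    $$\neg(P_1 \land P_2 \land P_3) \;\equiv\; \neg P_1 \lor \neg P_2 \lor \neg P_3,$$

    i.e., at least one premise is false, with no indication of which. Here the conjuncts would be computationalism’s tenets as given in the paper (mental states are computational states; computational states are implementation-independent; the Turing Test is decisive), so the CRA by itself cannot single out the guilty one.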

  5. I like the example of reverse-engineering a duck that Harnad uses to explain the difference between reverse-engineering functional equivalence and reverse-engineering structure. I think it elucidates the different requirements of T2 versus T3 well, although I am confused about the difference between t0 and T2 in the duck example: if quacking is T2, and a reverse-engineered duck that is externally indistinguishable in function and structure from a real duck is T3, what would the t0 “duck” do? In other words, what is less than quacking?

    I had an issue with Harnad’s wording when he describes the shortcomings of a T2 TT alone. He writes “This is the point where reasonable men could begin to disagree.” I find this terminology outdated and unnecessary to communicate the point that is being made.
