Saturday 2 January 2016

(3a. Comment Overflow) (50+)

7 comments:

  1. This is just a reflection.

    To me, the most interesting part of the paper was where Searle explains that strong AI presupposes dualism. He says that although it is not Cartesian dualism in the sense of two substances, it is still dualism, since it presupposes a mind both conceptually and empirically separable from the brain. This is surprising to me because, as Searle points out, vocal non-dualists and computationalists tend to overlap (at least when I imagine them).

    This reminded me of hylomorphism, where being is a combination of form and matter, i.e. matter is what persists and has potential to persist despite change in attributes, and form is whatever accounts for that potency being actualised. I remember reading that Edward Feser argues that the contemporary mind-body problem arises from dualism, and if we were to revert to Thomistic hylomorphism, it would no longer be an issue (in his "Scholastic Metaphysics").

    I guess that all of this means that we are not multiply realizable, as I learned in some Phil of Mind class or the other.

    My other thought while reading this paper was confusion over why Searle restricted his argument to "intentionality". I thought this just meant referring to something extra-mental. But I think that in any sort of interesting thought there are at least three aspects: a) the intentional aspect; b) the qualitative aspect (of feeling); c) the intelligible/universal aspect. I think it is worth considering all of these, and why we "understand" in each of them. In a), it is interesting that we can refer to or represent anything at all. How do we know what symbols mean? In b), it is interesting because, well, why do we feel stuff? In c), it is interesting to ask whether a physical organ can instantiate multiple, and indeed all, things (of a certain category). I'm not convinced that a "program" in and of itself can do any of this, let alone all of it.

  2. This comment has been removed by the author.

    My reply is with respect to the "brain simulator reply" from Berkeley/MIT:

    "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them"... with which Searle replies "As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states".

    While it's agreed upon that formal properties are not sufficient for the causal properties, it's not agreed upon what IS required for causal properties of cognition. What exactly is going through the mind of a native Chinese speaker, and would it even be possible to pinpoint these neurological operations? Wouldn't these operations also differ from one native speaker to another?

    Correct me if I'm wrong, but doesn't this, in a nutshell, call the theory of computationalism itself into question? The entire premise is that cognition IS computation, but how could computationalism be proven right or wrong if we cannot agree on what cognition entails? So while we can certainly decipher characteristics of cognition, insofar as the human brain and its myriad capabilities are in question, so is the theory of computationalism too, no?
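    To make "simulates only the formal structure of the sequence of neuron firings" concrete, here is a minimal toy sketch of my own (not from Searle or the reply): a loop that updates numbers standing in for membrane potentials and "spikes". Whatever it does, it only shuffles symbols; none of it has the biochemical, causal properties of an actual neuron.

    ```python
    # Toy leaky integrate-and-fire "neuron" (illustrative only).
    # The variables are just numbers standing in for voltages and spikes:
    # the loop captures the formal structure of a firing sequence,
    # not any of the brain's causal (biochemical) properties.
    import random

    def simulate(n_steps=100, threshold=1.0, leak=0.9, weight=0.3):
        potential = 0.0
        spike_times = []
        for t in range(n_steps):
            incoming = random.random() < 0.2              # stand-in for presynaptic input
            potential = leak * potential + (weight if incoming else 0.0)
            if potential >= threshold:                    # purely formal "firing" rule
                spike_times.append(t)
                potential = 0.0                           # reset after the "spike"
        return spike_times

    print("spike times:", simulate())
    ```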

  4. When clarifying “understanding” as it relates to machines, Searle rightly differentiates between human understanding and the lack of understanding in many basic machines that can be considered “smart”. Previously I had been inclined to suggest that if machines could do something that was different from what we could do, but was still complicated and sophisticated, then we had no say in whether that machine understood or not, depending on our definition of understanding.
    This clarification has brought me to reject that original stance. Turning the arbitrary squiggles and squoggles of language into semantics and meaning is an example of our understanding. The important aspect is that we can make inferences, connections, accurate assumptions, etc., on the basis of an input that is much richer than whatever the simple output may be. For example, someone could mention “My new car is red”. In the receiver’s mind, they are putting together the sentence, identifying what a “car” is and everything related to a car in both their semantic and autobiographical memory, and what “red” represents to them. They could think about how a past relative had a red car, that insurance is higher with a red car, etc., but their output might only be “I like red cars”. Even without the receiver being consciously aware of these connections, they can still affect cognition and subsequent behaviors, as priming experiments show.
    All of the above, I would say, is necessary for complex understanding - the output alone of a machine is not sufficient to measure the machine's understanding. In this sense I would suggest that even a T3-level machine would not have a sufficient level of freedom of inference from an input to be a machine that understands.
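    To make the point about output concrete, here is a minimal toy sketch (invented for illustration, not anyone's actual proposal): a bare lookup table can emit the "right" reply to the red-car sentence while making none of the inferences or associations described above, which is why the output alone cannot settle whether anything is understood.

    ```python
    # A purely formal responder: the "right" output with none of the
    # inferences or associative connections described above.
    # (Hypothetical toy example; the rules and replies are made up.)
    RULES = {
        "My new car is red": "I like red cars",
        "How are you?": "I am fine, thanks",
    }

    def respond(utterance: str) -> str:
        # Pure symbol lookup: no semantics, no memory, no associations.
        return RULES.get(utterance, "Tell me more")

    print(respond("My new car is red"))   # -> I like red cars
    ```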

  5. Strong versus weak AI seems an important distinction to make. If I am correct, it is the difference between claiming that all cognition is computation (strong AI) and that some cognition is computation (weak AI). Searle has no problem with weak AI; strong AI, on the other hand… The important idea in Searle’s piece is understanding. Strong AI claims computation is the answer to understanding: all human cognition, and therefore all human abilities, can be understood through computation; all human understanding is due to computation.
    Searle puts this to the test by acting (theoretically) as the computational machine strong AI claims him to be, and demonstrates the difference between his understanding of Chinese and of English. He does not know Chinese and would not be able to distinguish the symbols from other, similar symbols. Over time, following the procedural instructions, he learns to produce the right output - output indistinguishable from that given by a native Chinese speaker. But through this learning he has not learned Chinese; he has learned shape manipulation. No understanding of the language has been created by the computational instruction, just a mimicry of understanding.
    How does he know he doesn’t understand? Feeling. He does not feel that he understands Chinese, unlike his feeling of understanding English. Although introspection has its flaws, the fact that we can introspect and recognize the difference in our feelings makes the difference between us and AI. This can be linked back to the halting problem: we can introspect and know that our thinking has halted, while a computer cannot - it will simply halt. We have the ability to recognize that we are not actively computing or thinking anymore.
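    On the halting-problem aside, here is the standard diagonalization sketch (the halts function is the hypothetical decider being refuted, so this is conceptual rather than working code): no program can decide, in general, whether a computation will halt.

    ```python
    # Classic halting-problem diagonalization (conceptual sketch).
    def halts(program, argument) -> bool:
        """Supposed oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no such general decider can exist")

    def contrary(program):
        # Do the opposite of whatever the supposed decider predicts
        # about running `program` on its own source.
        if halts(program, program):
            while True:       # predicted to halt, so loop forever
                pass
        return                # predicted to loop, so halt immediately

    # Feeding `contrary` to itself makes any verdict from `halts` wrong,
    # which is the contradiction showing `halts` cannot exist:
    # contrary(contrary)
    ```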

  6. Let’s talk about the systems reply. I want to advance a half-assed attempt at a defense/clarification/discombobulation of it. Not that it’s the unconscious part of Searle’s mind that understands, but rather Searle’s own mind in conjunction with the cognitive artifact that is the algorithm for passing Chinese T2 that understands Chinese. Considering Chalmers’ extended mind, an implication is that an algorithm written by A is a free-floating part of A’s extended mind. It is this part of A’s mind, in addition to Searle's own, that understands Chinese….??

  7. “Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.”

    In Searle’s response to the Robot Reply, his argument against the validity of T3 or T4 actually makes sense to me at first glance. But after a closer look, he seems to be making an assumption that he shouldn’t. Jumping from discussing T2 to T3 is a bigger step than he makes it out to be. Since dynamics are now in the mix, who is to say that the symbols used by the robot don’t have meaning? As he says, the robot has information coming in as input from its senses (e.g. the “television camera” mentioned). This information is then presumably translated somehow into symbols that the robot can manipulate to produce action in the world. If the symbols derived from the input are tied to the dynamic objects in the world, what more could those symbols possibly need in order to have meaning?
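    A minimal, purely hypothetical sketch of that last point (all names, values and thresholds are invented for illustration): in a T3-style robot, a symbol token like "red" would not be free-floating; it would be produced by, and so tied to, the incoming camera readings.

    ```python
    # Hypothetical illustration of a sensor-grounded symbol in a T3-style robot.
    # The token "red" is produced only because of what the (simulated) camera
    # reports; names, values and thresholds are invented for illustration.
    def camera_pixel():
        # Stand-in for the "television camera" input Searle mentions.
        return (210, 40, 35)                 # an RGB reading

    def ground_colour(rgb):
        r, g, b = rgb
        if r > 150 and g < 100 and b < 100:
            return "red"                     # symbol tokened because of this sensory input
        return "unknown"

    print("tokened symbol:", ground_colour(camera_pixel()))   # -> red
    ```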
