Saturday 2 January 2016

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.







75 comments:

  1. “Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter.” (Searle 13).

    As clarification, the Turing Test only determines whether a machine can think, correct? In that case, this can be a valid criticism from Searle, who says that believing strong AI is a mind is a form of dualism. Monism states that the mind cannot be separated from the brain because the brain is the matter and the cause of the mind (so they are not independent). From Searle’s perspective, it is surprising that we would try to make a mind from computation, because that would be believing that the brain and the mind exist independently, which is a form of dualism.

    However, I still think if something passed T4, you could argue it has a mind, since it should be able to go beyond symbol manipulation and demonstrate identical understanding to have passed the T4 test. Essentially, it shows it does what we do, so we should believe it has a mind since we have no method of discerning otherwise. Earlier, I would have said that another mind would have to show it has similar “hardware” and a process leading up to that hardware before I would consider it to be like a human mind. However, if an “alien” were to have been created by a different process than evolution, and if it displayed a range of behaviors identical to mine, I think I would conclude it has its own mind. Going back to creating a machine that passes T4, I don’t think this is a form of dualism because we are saying that it is the manipulation of matter that resulted in the mind, so I think it would still adhere to monism. Obviously, this wouldn’t be strong AI anymore since it would have at least a dynamical component if it passed T4.

    Alternatively, we can think that “thinking” is a part of what the mind does, but not necessarily the mind itself. Then I don’t think we broach the topic of dualism and the separation of the mind from the brain at all because we are just trying to reverse-engineer the “thinking” process, not the mind itself so to speak.

    Replies
    1. 1. No, the thesis that Cognition = Computation (hence that mental states are just computational states, and hence the hardware (brain) details are irrelevant to explaining the mind, only the computations are relevant) is definitely not dualism. (Searle is, again, wrong here.) It would only be dualism if it claimed (ridiculously) that there could be a mental state without a hardware implementation (hanging out there, like the Cheshire Cat's grin!).

      2. T2 (if passed by computation alone) is ungrounded. A T3 robot (like Riona or Renuka) is already grounded (meaning its symbols are connected to what they refer to); T3 cannot be passed by symbol-manipulation alone (a robot is not a computer: it is at least a hybrid dynamic/computational system). T4 ( = T3 plus internal indistinguishability from a real brain) would not be any more grounded than T3. But, unlike T2, when passed by computation alone, T3 and T4 are both impenetrable to Searle's Periscope, because of the Other-Minds Problem.

      3. It feels like something to think. (Cogito ergo sum, remember?) If something (e.g., a computer) is doing something that does not feel like anything to do, it is not thinking. (If Searle does what the computer does, he is not thinking either: he is just manipulating symbols. But since Searle, unlike the computer, is a living, feeling human being, it feels like something when he manipulates symbols. It's just the wrong something. It should have felt like what it feels like to understand -- and think in -- Chinese!)

    2. "If something (e.g., a computer) is doing something that does not feel like anything to do, it is not thinking."

      You've said in class that cognition is what goes on to allow us to do the things we can do, and that it is separate from the hard problem of consciousness (i.e. feeling). Unless I've misunderstood those points, are you saying that thinking is more than just cognitive? Because if feeling is necessary for thinking, then it seems like "thinking" would not be a part of the easy problem. In other words, if passing the Turing Test means solving the easy problem, we still could not be sure the machine thinks, since we would have no idea if it would feel like it were doing the things it would do (due to the other minds problem).

      "But since Searle, unlike the computer, is a living, feeling human being, it feels like something when he manipulates symbols. It's just the wrong something. It should have felt like what it feels like to understand -- and think in -- Chinese!)"

      Wouldn't this only be the case if he understood the symbols themselves and what manipulating them meant? Regardless, he would still be the homunculus here, right?

  2. I want to clear up a confusion I have with the Chinese room example first. Is the English “program” that Searle receives giving him direct translations of the Chinese symbols? Or is it formal instructions on how to relate the Chinese symbols to one another? If it is the latter, how is Searle able to pass the Turing test? I find it difficult to believe that there can be a way to answer questions just based on knowing the relationships between symbols while having no idea of the actual context of the question. I do see how Searle can produce legitimate responses that somewhat tackle the things being asked, but I don’t understand how the responses could be enough to fool an observer.

    I had written the above questions early on in the reading and I think I have cleared them up, but I wish to leave them there to be able to still observe the response. Moving on, I believe what Searle is contesting in the thesis “mental processes are computational processes over formally defined elements” is that although machines can produce adequate responses to the Turing Test, they do not actually “feel” what it is to understand the questions and responses. He uses the Chinese room example to illustrate this by showing that he never feels that he understands Chinese. In a sense he does understand Chinese writing because he can basically communicate with the language in writing; he has a set of formal rules and relationships with symbols that he can then manipulate to produce a response, which is the same for all languages. What is missing is the feeling of understanding that he gets when he writes in English. Each English symbol is grounded in something, so that when Searle sees it, he feels what it is to understand it. This is what is missing in a machine. When brought back to the question of whether “machines can think,” we now need to decide whether thinking requires one to feel understanding or rather just to be able to formally communicate through an interaction like the Turing Test.

    “"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
    This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.”

    I find Searle’s answers to the questions at the end to be kind of contradictory. Here he says a computer cannot think based on a computer program but in his former argument he states this:

    “If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.”

    Are these not the same arguments? Or am I missing something to differentiate the two?






    Replies
    1. 1. Yes, Searle is just manipulating symbols, based on their shapes, not their meanings. That's what all computation is.

      2. In the same way, Searle is just manipulating Chinese symbols based on their shapes, not their meanings: Is that what you think understanding Chinese is?

      3. Searle is distinguishing "Strong AI" (i.e., computationalism: cognition is just computation) from the Strong Church-Turing Thesis, according to which (just about) everything can be simulated by computation. That does not mean that (just about) everything is just computation (e.g., flying can be simulated by computation, but it is not computation; so perhaps ditto for thinking/cognition).
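      (To make point 1 concrete, here is a minimal, purely illustrative sketch -- not anyone's actual program -- of what "manipulating symbols based on their shapes, not their meanings" amounts to. The symbol names and rules are invented for the example.)

          # Toy "rulebook": pairs of symbol shapes. Nothing in the table,
          # or in the code that consults it, refers to what the symbols mean.
          RULEBOOK = {
              "squiggle squiggle": "squoggle squoggle",
              "squoggle squiggle": "squiggle squiggle squiggle",
          }

          def room(input_symbols: str) -> str:
              # Match the shape of the input and emit the paired output shape;
              # fall back to a fixed symbol string when no rule matches.
              return RULEBOOK.get(input_symbols, "squoggle")

          print(room("squiggle squiggle"))  # -> "squoggle squoggle"

      Whether the rulebook is a lookup table like this or something vastly more elaborate, the program only ever consults shapes; that is the sense in which T2 passed by computation alone is ungrounded.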

  3. “The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.”

    If we are talking about stimulating neurons to produce meaning, it only makes sense to stimulate a neuron if that neuron has been encoded with information. I don’t see how this could be the case with a computer program. A few years ago researchers at MIT showed that specific memories are encoded in individual hippocampal neurons - see http://www.nature.com/nature/journal/v484/n7394/full/nature11028.html Searle doesn’t mention the fact that neurons (simulated or not) need to be encoded for there to be any meaning.

    I’m also a little unsatisfied with how Searle dissociates neurons from the causal properties of the brain and the ability to produce intentional states. While simulating a neuron (or a neural population, a network, etc) surely won’t produce an intentional state, what about stimulating one?

    (As a side note I would be interested in discussing the merits of using nonhuman animals in research. I'm against using nonhuman animals in research 99% of the time because the findings are generally next to useless. However, every now and then a paper like the one mentioned above comes along which I think is pretty cool, despite the fact that it involved the administration of electric shocks to mice. I'm interested in hearing some other opinions though, especially from some students who have worked with nonhuman animals. What do you all think?)

    Replies
    1. Cool Suffering

      1. What do you mean by "encoding" neurons? If a neuron is active if and only if you think of your third grade home-room teacher, how is that "encoded" in the neuron (or group of neurons)?

      2. To produce a state (say, heat or thought) is not the same as simulating it computationally (if the state is not itself a computational state). Heat is not a computational state; Searle suggests thinking isn't either.

      Animal research is a very profound problem. Unlike using animals for food, clothing, or sport, none of which is necessary for our health and survival, some animal research -- though only a tiny portion of it -- is potentially life-saving. The decision about whether or not to do it is in every case extremely difficult and troubling no matter what is decided, because it involves conflicting life/death interests.

      In my opinion, however, even though finding out that "activating a sparse but specific ensemble of hippocampal neurons that contribute to a memory engram is sufficient for the recall of that memory" is "cool," I don't think the finding is worth having hurt animals. It's nowhere near life-saving or even potentially life-saving, even if it's Nature-article-generating... I have nothing against curiosity-driven research where no one is being hurt, but the constraints on research that hurts animals should be much, much more constrained.

      However, in terms of sheer numbers as well as agony, ending the horrors of the meat, dairy, and fur industries is even more urgent than deciding what to do about the small portion of scientific research that really is actually or potentially life-saving.

      (My guess is that as a vegan you are likely to want to re-think your "coolness" criterion...)

    2. "I have nothing against curiosity-driven research where no one is being hurt, but the constraints on research that hurts animals should be much, much more constrained."
      This topic touches more on the importance of the brain (in particular, animal research on the brain) in cognitive science, which is more of a 4a/b sort of topic, but I wanted to write a short reply to this comment.

      I don't mean this in any antagonistic way; I just mean to understand a little better your stance on a topic that I myself sometimes struggle with. Let's say we do constrain animal research so that it is limited to 'life-saving' research... well, first of all, what does 'life-saving' mean? It seems to be a sort of nebulous category for only that research which is most obviously applicable to human diseases.

      But then where does 'curiosity-driven' basic neuroscience (let's say) fit in with this? To an extent I think it could be argued that basic neuroscience, which investigates the normal functioning of healthy brain circuitry, is important to give context to other research, including so-called 'life-saving' research. So does this sort of research now constitute life-saving research? And if not, then to what extent will these sorts of constraints hobble other research which depends on a basic understanding of the brain?

  4. I had two main confusions in this paper.

    I am not entirely clear on what he means when he speaks of intentionality (which is basically his entire paper). It seems like he’s taking “intentionality” to mean both consciousness and meaning at the same time, and it isn’t entirely clear to me how they’re the same or what intentionality is taken to mean in the end.

    Also, Searle concludes that “only a machine could think, namely brains and machines that had the same causal powers as brains—whatever it is that the brain does to produce intentionality cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality” (p. 14). What I seem to understand is that he is saying T4 is the ultimate TT? It’s the only system that can “think”, and the only way in which we will come to understand and explain cognition is by studying the dynamics of the brain itself? I don’t exactly understand how this will come to explain anything more. I also don’t understand how he can conclude that T3s cannot “think” on the premise of his Chinese Room Argument. Because it cannot interpret meaning and therefore doesn’t cognize? To me, the main purpose of his paper is to show that computation cannot account for meaning (intentionality?), but I don’t really understand how we can conclude much more than that. It’s not that cognition is not computation, but that computation cannot account for meaning itself.

    Replies
    1. Hi Cait,

      I also had a very similar question! Although, I don’t agree that when he speaks of intentionality he means “meaning”.

      “Instantiating a computer program is never by itself a sufficient condition of intentionality.”

      The abstract outlines the 5 main points of his argument and in the end summarizes by saying that creating any sort of “intentionality” requires one to recreate the causal effects of the brain. He then goes on to say that machines can’t think unless they have the same “internal causal powers” of the brain.

      Harnad (in the second reading) suggests that Searle means consciousness when he says intentionality. I am confused as to how intentionality is related to thinking? How could this entire argument have been constructed when an important term is ambiguous?

    2. Hi Renuka,
      Thanks for your input! I lean towards your way of thinking as well – how can such a big argument be formulated when its core term is ambiguous? Even with his "definition", I am still not satisfied -- I am still not entirely clear on what he means.

      Something else that bothers me is that Searle basically argues that “interpreting meaning” can’t be computation, but in reality, he’s only shown that understanding can’t just be a function of computation. He also doesn’t really define what understanding is, but does say that “only the system with the right causal power” can understand. Throughout his paper, he expresses his strong intuition that the brain is the only “machine” that has the causal power, and that the only way to find out is by studying the brain itself (not quite sure how this is correct, but ok). But what bothers me is that he can’t show this! We cannot conclude this from his CRA alone! His argument does work for T2 but not T3!
      So Searle doesn’t show that cognition is not computation at all, but rather cognition is not all computation!

    3. And coming back to "intentionality" -- Harnad takes it to be a "weasel word" meaning consciousness. Searle seems to have a strong intuition that "intentionality" (whatever that may be) is essential in understanding cognition. But why not leave it at: if something feels, it is conscious?
      I don't understand what "intentionality" contributes anyways -- if someone could enlighten me here, that would be great!

    4. Hey Guys! This is definitely an interesting point in Searle's article!

      I think what Searle means when he speaks of intentionality is the property of something to refer to or be about something else, or in other words, to encode meaning within something that does not in itself hold that meaning. For example, the word 'apple' is not actually an apple but it refers to an apple and as cognizing beings, we are able to understand that the word 'apple' in effect refers to the objects that exist in the world that are apples.
      When Searle says “Instantiating a computer program is never by itself a sufficient condition of intentionality,” I think what he means is that the computer program will not be sufficient to create understanding within the computer in the same sense that cognizing beings understand.

    5. I definitely also have issue with what he means by 'understanding' though:

      "But first I want to block some common misunderstandings about "understanding": in many of these discussions
      one finds a lot of fancy footwork about the word "understanding." My critics point out that there are many
      different degrees of understanding; that "understanding" is not a simple two-place predicate; that there are even
      different kinds and levels of understanding, and often the law of excluded middle doesn-t even apply in a
      straightforward way to statements of the form "x understands y; that in many cases it is a matter for decision and
      not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of
      course. But they have nothing to do with the points at issue. There are clear cases in which "understanding'literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this
      argument 2 I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser
      degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand,
      understand nothing: they are not in that line of business. We often attribute "under standing" and other cognitive
      predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such
      attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine
      knows how) (understands how to, is able) to do addition and subtraction but not division," and "The thermostat
      perceives chances in the temperature." "

      I am not sure how his point about thermostats and adding machines has any relevance to the intended audience of the paper; I am not sure that anyone actually believes in their own warped use of language when they say such things about their thermostats... Making such a point of debunking this claim seems unnecessary and irrelevant to the discussion of strong AI, and it invites unnecessary confusion as well.

    6. Cait, you're right that Searle is playing double-entendre with the weasel-word "intentionality."

      I recommend sticking to the word "feeling." It feels like something to mean X (say, "apple") and it also feels like something to understand X. When people use "intentionality" they mean "the meaning I have in mind," my "intended meaning," the meaning I am conscious of, "felt meaning." (But there's a lot of useless voodoo associated with the word too. And the only-partial overlap with the ordinary meaning of "intentional" [i.e., deliberate] does not help either.)

      Meaning is more than that. (1) The word "apple" has a referent (those round red or green things). (2) "Apple" also has a "sense" (sometimes also used as a synonym for a "meaning"), namely, whatever it is that you need in order to know what its referent is, how to use it when you speak, and perhaps also how to define it. (3) And last, as I said, it feels like something to say, mean and understand "apple" (and it feels different from saying/meaning/understanding "banana").

      Searle's argument is trading on the fact that when T2 is passed by computation alone, it has (2) but not (3). (It also lacks the connection with (1) (apples), which you could only get via T3.)

      Yes, Searle thinks that his argument shows that cognition cannot be computation at all, and that the only way to understand cognition is via T4 (study the brain). In fact, all he really shows is that cognition cannot be all computation: T2, passed by computation alone cannot think, understand, etc. But he does not show anything about whether T3 can.

      Naima, I agree that most of us know that "my thermostat knows when to turn on the heat" is just metaphorical. But I'm not sure those who agree with the "System Reply" ("Searle doesn't understand Chinese, but 'the System' does") do...


  5. It seems to me that in Searle’s Chinese Room argument he’s glossing over something quite substantial, which is that in order for him to give answers to the questions, he needs to have been given the set of rules ahead of time. In other words, he needs to be pre-programmed to be able to answer all of the questions he needs to answer. He cannot extrapolate outside of the rules he’s been given because he doesn’t understand Chinese. The question is, has Schank’s computer been pre-programmed to answer all possible questions? In the restaurant example, the word “ate” wasn’t in the story, and so the answer to “did he eat the hamburger?” had to be implied from the sentences given. A rule set would therefore have to presuppose or predict all manner of possible abstract questions like these, relating to the story, whose main themes or words were not explicitly present. Thus, Schank’s computer must either be pre-programmed with rules for these many specific question sets or it really is able to understand implicit meaning and generate appropriate responses. Since we assume the former, it would seem that the computer would have a finite list of questions it had the rules to answer, would it not? (A toy sketch of what such a rule might look like follows at the end of this comment.)

    Ok, so we say that the computer cannot understand the story – is understanding really necessary for cognition? This brings me back to a previous skywriting where Dr. Harnad said that memorizing was not cognizing, but remembering was, since it “feels like something to remember”. The example I gave was if I regurgitated information on a test that I learned the night before by memory but with no actual understanding of it (the way a computer might simply give responses that it has been pre-programmed to give). In Searle’s example, he doesn’t understand Chinese but it feels like something to remember all those rules and Chinese symbols. Does this then mean that he is cognizing in this situation (where he is supposedly imitating the computer) and the computer is not cognizing simply because we say it cannot feel?
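    (Here, as promised above, is a minimal, purely hypothetical sketch of the kind of pre-programmed rule that could answer “did he eat the hamburger?” without anything in the program knowing what eating or hamburgers are. It is not Schank’s actual program; the story wording and the default rule are invented for the illustration.)

        # A toy stand-in for a Schank-style restaurant "script":
        # the answer is just a default slot value that gets overridden
        # when the story contains certain trigger phrases.
        RESTAURANT_DEFAULTS = {"ate_food": True, "paid_bill": True}

        def did_he_eat(story: str) -> str:
            facts = dict(RESTAURANT_DEFAULTS)
            # Purely formal triggers: string matching, no comprehension.
            if "burned to a crisp" in story or "stormed out" in story:
                facts["ate_food"] = False
                facts["paid_bill"] = False
            return "yes" if facts["ate_food"] else "no"

        story = ("A man went to a restaurant and ordered a hamburger. "
                 "It arrived burned to a crisp, and he stormed out angrily.")
        print(did_he_eat(story))  # -> "no"

    The rule set is finite and entirely shape-based, which is exactly the worry raised above: either the rules anticipate every such implicit question, or the program is doing something more than rule-following.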

  6. I’ve been wrestling with something that was said in Monday’s class but I wanted to wait until I finished the readings to comment. Is the statement that there’s “no such thing as an unfelt feeling” really true? There are surely feelings that I have never felt, and feelings that I have only heard about from other people. Perhaps there are also feelings that exist that no one has ever felt, and never could. When it’s put this way, feeling seems analogous to sight: there are unseen sights, sights that I haven’t personally seen, and things that no one in the world will ever see. Does that make them not real, even if we can’t comprehend them? I know this explanation is kind of throwing feeling into the abyss, and severing software from hardware (Cheshire cat smile?), but if we can’t know what other people are feeling (problem of other minds), how could we say that unfelt feelings don’t exist somewhere?

    “If you can exactly duplicate the causes, you could duplicate the effects”

    This is said after stating that if you create a brain identical to a homegrown one, with dendrites and axons etc., you could create intentionality. This seems to me to be getting ahead of oneself. Speaking mostly from what we discussed in class, even if I make an exact replica of someone’s brain down to the last molecule, how do we know that there isn’t something else that governs intentionality and cognition? How about some condition that humans don’t know about or can’t perceive?

    Overall I agreed with the majority of this article so I unfortunately don’t have much to critique. However, when Searle states:

    “My critics point out that there are many different degrees of understanding; that "understanding" is not a simple two-place predicate”

    Is it fair to argue that computers “understand” quantitative concepts (numbers) at a higher level than qualitative things (“the cat is on the mat”)? Computers receive input via numbers, so it’s their “language”; how much more of an “understanding” of numbers do we have compared to computers when to operationalize with a number (add, subtract, etc.) IS to understand what it is? Its numeric value is inherent in its usage, whereas if I tell a computer to add a cat to a mat, or pet a cat, or name a cat, it’s no closer to knowing what a cat is other than as a symbol. I wouldn’t argue that a computer understands math at the same level that humans do, but is it not an argument for multiple tiers of partial understanding within computers?

    Replies
    1. I think the phrase ''no such thing as an unfelt feeling'' refers to the fact that once you have felt or experienced something, there is no way of ''unfeeling'' it. There are definitely feelings that we haven't yet felt, or never will do, but I don't think that is the issue here.

      Once you have experienced something, there is no way of returning to the exact state you were in before. Everything in life is built on previous experiences, even if you lose the conscious memory of everything that has happened.

      I think the difference with sight is that you are only ''feeling'' what it is like to see. It is possible that everything is an illusion or a simulation (unlikely, but when you get deep into the justifications, difficult to prove that it is not). Perhaps everything has been formed by your imagination. As the professor was saying, all you know is that you are feeling a certain way at the present time.

      The other minds problem means that we cannot know what, if anything, other people are feeling. However, our own feelings govern the way we respond to people and we seem to attribute what we can feel to them.

      So, will we ever get to the stage that we respond to computers or robots in the same way as we do to humans? Technically, if we can get a robot to mimic identical human behaviour, I think we would have to start treating them in the same way. If we really believe that the only thing we can be sure about are our own feelings, and what we believe about the outside world is purely based on interpretations and inferences, then a robot exhibiting human behaviour seems to be worthy of the same treatment.

    2. Hi guys! While Rose is right that “once you have felt or experienced something, there is no way of 'unfeeling' it,” I think the phrase “no such thing as an unfelt feeling” is simply stating that, by definition, a feeling must be felt. Whatever it is that creates feeling, that must happen for feeling to occur.

      However, this is different than sight. While there are “things that no one in the world will ever see,” those things are still reflecting light. The process that creates sight is still occurring, even if no one actually sees it. Same thing with sound. If a tree falls in the forest and no one hears it, does it still make a sound? Of course it does. Air particles still vibrate due to energy displacement. The process that creates sound still occurs, even if nobody is around to perceive that sound.

      Things are different with feeling though. While I don't know how feeling is created, that process must occur for something to be a feeling. Things get complicated because we use the term “feeling” as both a verb (the process) and a noun (what the process creates). But there can't be “feelings that exist that no one has ever felt, and never could,” provided the term “no one” incorporates all beings that feel and not just humans. For something to actually be a feeling, the process that creates feeling needs to happen. In other words, a feeling must be felt, or “there is no such thing as an unfelt feeling.”

  7. Searle states that while he understands English (his native language), he does not understand Chinese for "whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle"".

    I argue that Chinese and English are fundamentally different languages, and thus the two are incomparable. Chinese is a logographic language, in which individual symbols represent whole words (e.g., a single character standing for 'cat'). On the other hand, English is an alphabetic language, in which words are made up of several individual letters and sounds.

    Searle states that although he is given all the symbols and rules and manipulates them to produce correct output, he still does not understand. I can't help but think that he is comparing his concept of 'understanding' to the standard operation of his only native language: English. A better analogy or comparison, in my opinion, would be between Chinese and another logographic or pictographic communication system. Instead of being shown stories in English, he should be shown stories composed of illustrative symbols (as would be shown in a children's picture story book, for instance). Vice versa, if he wants to use English as a comparison, instead of Chinese he should use another alphabetic language like French.

    Has anyone else thought about this?

    Replies
    1. Yes, I had a similar thought, wondering why Searle decided to choose Chinese out of all the languages. Originally, I considered the differences between the two writing systems and the comparison between logographic and alphabetic languages; however, it seems that the two writing systems are merely a vehicle for showing that, through programming, any formalized system can be manipulated without comprehension. I do agree that a comparison between Chinese (or another language) and a pictographic communication system would be an interesting approach, although I am unsure how it would strengthen the argument. I suspect Searle chose Chinese in contrast with English because of their extreme dissimilarity, so that it better highlights his claim that it is possible to instantiate the program, no matter how complex, and still not have the relevant intentionality (contesting strong AI’s claim to embody a cognitive state).
      What do you think?

    2. Hi Maya,

      I'm not sure if this is going to come out right but here is my best shot.

      As I read this section I was not explicitly thinking about the differences between Chinese and English as logographic vs. alphabetic systems. I was thinking about Searle's decision to use language in his argument. For me, however, the problem is his statement that the rules, being learned and written in English (as opposed to Chinese), allow him to manipulate symbols without speaking a word of Chinese. I assume that by saying he doesn't speak Chinese he also means he does not understand it.

      Take for example my language knowledge. I am a native English speaker; I began learning French at age 9 and stopped taking French at age 16. Because I don't speak enough French, I frequently have to translate words that are spoken to me in French (in my brain), and then I can derive the English meaning and "understand" them. But does this step of translation mean that I don't "understand" French? My answer is no. I don't think that my need to translate to my native (more competent) language means that I don't understand it. In fact, my ability to translate French words to English must mean that I understand French to some extent? But to Searle would this be symbol manipulation and not understanding?

      To come back to your point, I think the difference in these types of languages might be dismissed as inconsequential because of the argument that sign languages like ASL have comparable acquisition, syntactic structures, etc., despite the difference in form. As a result, the "form" of the language could be argued not to affect how a person is able to understand it.

    3. Hi Renuka, Grace and Maya,

      Renuka,

      I’m going to expand on your point about French and English which I thought was a great point.

      Being a moderately good speaker, but without full comprehension, I often find myself hearing French and then translating it in my head to English to understand. Do you truly understand French? Do I truly understand French? Is any second language acquired later in life (a non-native language) truly “understood”? Because if it is, and if I truly understand French, then I wouldn’t need to go through the translation process, right? So then maybe when I hear French, or in this case when I read French, my brain is going through the process of symbol manipulation. Then I have zero comprehension, according to Searle. HOWEVER, when people ask me if I understand French my answer is yes. There is a fine line here, and maybe it comes down to the definition of the word “understanding”. In the notes section of the paper, Searle defines “understanding” as implying both the possession of mental (intentional) states and the truth (validity, success) of these states. Well, I possess the mental states of French rules and syntax, so then I do in fact understand.

      Grace and Maya,

      I will touch on the point you both made about him choosing Chinese as the language to base his argument.

      Correct me if I am wrong, but I think the argument would be the same even if he chose Japanese, Hebrew, Arabic, or even gibberish. There is a common saying when someone does not understand something: “that sounds like Chinese to me”. This just means that when hearing an unknown language (it could be any language), it sounds like gibberish and makes no sense. Since these languages all have different alphabets, it would take symbol manipulation to even begin to learn the language (like Searle does in a room with Chinese symbols).

      Say you decide to learn Chinese or Hebrew or any symbol language. You sign up for a beginner course. The first thing you will be doing is learning the new alphabet of that language with respect to English, which you already know, i.e., learning that “squiggle” represents the letter “a”. With enough practice this symbol manipulation will turn into your knowing that that symbol means the letter “a”, which then leads to comprehension of the symbol.

    4. This comment has been removed by the author.

    5. Hey guys,

      I agree with Grace; I don’t think the choice of the language would change anything in the argument Searle makes. The main issue seems to be the understanding of the input received. In any case, when receiving Chinese symbols, he would not be able to understand any of them, nor is he asked to do so. The task he has to do has nothing to do with translating. He is not asked to translate, but only to produce an appropriate response. An analogy that I think would be correct would be receiving a red box (having a meaning unknown to you) and following instructions that tell you “if you receive a red box, send out a blue box”. To someone outside, that blue box you send has some meaning that is unknown to you.

      I think that is the difference between the task accomplished by the Chinese Room and a translation. At no moment must Searle assign a meaning to the input and the output, whereas when presented with an input he can understand (even if it’s the word “pomme” that he might feel compelled to translate as “apple” in his mind), he assigns some meaning to the symbols manipulated (“pomme” and “apple” are both arbitrary labels assigned to a type of fruit).

    6. I think Hernán brought up an important point. I agree that it is not necessarily language that Searle was trying to tackle; rather, it was any input/output. But it is important to note that when the brain receives input and sends output, it does not do so just by following a set of rules, as in the Chinese Room (or “if you receive a red box, send out a blue box”). Instead, the brain is constantly adapting and learning (rewiring). Unlike a machine, the brain does not simply follow rules; it uses information from past experiences to create new rules, constantly. And even these new rules are constantly changing and adapting.

      Also, I was wondering if it is possible that the actual feedback system itself ‘understands’ the information while the people who are processing the information do not.

  8. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. (Searle, page 8)

    The hypothesis that the dummy has a mind would now be unwarranted and unnecessary, for there is now no longer any reason to ascribe intentionality to the robot or to the system of which it is a part (Searle, page 9)

    Only something that has the same causal powers as brains can have intentionality (Searle, page 12)

    Throughout his article, Searle consistently relies on the idea that cognition requires intentionality, as demonstrated by the three examples quoted above. Even Dr. Harnad agrees that cognition is the idea Searle was conveying by using this term:

    The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!) (Harnad, pages 7-8)

    However, when Searle actually clarifies what he means by “intentionality,” he reveals the limitations of such an understanding of cognition. In Note 3:

    Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not. (Searle, page 15)

    To me, this clarification implies that there is more to cognition than just intentionality. Anxiety and depression are still manifestations of cognition, as they are clearly valid feelings and mental states, regardless of the intentions associated with them. However, to be fair to Searle, he does specify that they need to be undirected forms of anxiety and depression. But what exactly does that entail? Can the anxiety or the depression we are familiar with be considered “directed”? Is the anxiety or depression we feel directed at our situation (states of affairs in the world)? Are they directed at ourselves (objects in the world)? If so, what are these undirected forms of anxiety and depression he refers to? What makes them undirected? Can undirected feelings even exist? Wouldn’t their existence depend on the fact that they are directed (about objects and states of affairs in the world), as something must create them in the first place? Even if we do not know what (specifically) is causing the anxiety or depression, is it fair to assume that there actually is something that is causing it? Or can these feelings simply “pop up” on their own? And if they can, do we still consider them manifestations of cognition, or do we agree with Searle and classify them as something else?

  9. “Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything.”

    “The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality;3 our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them”

    “The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.”

    “If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental.”

    “I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.”

    I believe that a human does understand something: understanding how to follow formal principles; given symbols and shapes, a human understands how to follow rules to manipulate the symbols to produce an output. Searle argues that a machine has zero understanding of the meanings behind the symbols, but the fact that the machine is programmed to understand how to follow rules is, in itself, understanding. He states that this is due to an extension of our purposes, that we are making attributions of intentionality to them; but if that is true, then what he is saying is that the machine is programmed after our own ability to follow rules, and this idea is exactly what describes how the person would be able to understand English. The fact that we are able to make inferences from symbols because we understand them means that we have built-in rules within our English subsystem, with storage that encompasses meanings, so that we then know how to apply the rules given the symbols. The machine has none of that unless we build it in ourselves, and even then it is not a true model of how we process, because the machine is given symbols and rules to manipulate the symbols but is not trained to learn the connections between symbols and output responses, given input and rules. The machine is thus not building up a store of meaning like we have. Thus, Searle is right that the computer he uses in his analogy is not capable of understanding, but I believe that he did not choose the right machine for comparison, assuming that having intentionality and mental states is attributed to the idea of having memory (storage) and beliefs in the first place.

    Replies
      Sorry, by machines I mean strictly computers and computer programs.

  10. This comment has been removed by the author.

  11. Searle’s argument seems to focus mainly on the claim that understanding derives from intentionality in human beings, which is itself a product of the causal features of the brain, so that attempting to create intentionality artificially (strong AI) by designing programs alone would yield neither intentionality nor cognition.

    As an extension of the systems reply: if a man “internalizes all of the elements of the system” and embodies all of the formal rules and calculations in his head, does this not suggest that he has a form of understanding in manipulating the Chinese language? Evidently, this is not the same as understanding Chinese in the literal sense of the original argument, but rather an understanding at a higher, meta-level of the formalized manipulation rules. If, as Searle defines it, the word “understanding” is concerned with the possession of the states, then the man, as part of the system taken holistically, has possessed the state and can infer something from the received input. Can this concept of comprehension be factored into showing that the programmed computer embodies a cognitive state?

    Perhaps we can try to couple Schank’s story-telling program with Searle’s Chinese Room argument: if I were to present the hamburger story in Chinese to the man in the Chinese room, would he be capable of making that inference and reaching the common conclusion? If so, then wouldn’t this suggest, by Schank’s argument that to make an inference and answer the question correctly is correlated with thinking, that a programmed computer and/or the man in the room, by means of formal translation, can understand the context and infer from it to give a reasoned answer?

  12. “what we mean by "understanding a language," which surely means CONSCIOUS understanding”

    Additionally (and this may be personal), I have a problem with the use of a language as an example of something that the computer or person “understands”. The argument is constructed around a person who has learned a set of rules in their own language (English) about symbols in another language. This person has become so good at manipulating the symbols that their responses are indistinguishable from those of a person who understands Chinese.

    Every time Searle asks “…does it even provide a necessary condition or a significant contribution to understanding?” my intuitive response is no. But when followed by “[it is] more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't” I am tempted to say maybe. This is because I can’t answer the question of whether we “consciously understand” language. I know that the words we use have meaning, and that using words isn’t about their form (symbol) but their meaning, but I’m taunted by the possibility that we could just be really really good symbol-manipulation machines.

    “No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis”

    I also have a problem with the example above because lactation and photosynthesis are (correct me if I’m wrong) what we have been referring to as “vegetative” processes.

  13. "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false."

    While I wouldn't claim that toasters have intentional thought, I think this is a somewhat harsh statement, solely for the 'other minds' aspect. My personal opinion is that if you had a nearly perfect theory about consciousness that explained it all but had the corollary that more things than we thought were conscious are indeed conscious, I might buy it.

  14. Reading this article made me think about a movie that I watched over Christmas break, Ex Machina. This movie is about a robot that supposedly had artificial intelligence, which tricked the two men who were at the research center and, after getting rid of these men, left the research center to go explore the world. What was interesting in this movie is that this robot used seduction, which is something that humans do and that we would not think robots could do. What I found interesting in the analogy between this movie and this paper was that instead of focusing on how artificial machinery should react to certain stimuli (use computation), the machine was created to analyze how people thought and to think in a similar fashion. The software developer who created this machine was the founder of a prominent search engine (like Google). In order to make this machine work, the software developer gave the machine access to all the cameras in the world and to all the search engine entries. This allowed the machine to learn how to think like humans. I of course understand that this is all fiction, but I am wondering if designing a machine that thinks like humans would get artificial intelligence closer to human intelligence?

    Replies
    1. Hi Anastasia,

      I actually had a conversation with my friend about this last night. We were arguing over computers and cognition. She is in computer science, and I am in psychology/philosophy, so obviously we are at different ends of the spectrum in terms of arguing about AI.

      She tends to take the Strong AI view that Searle argues against, she maintains that when a computer is programmed, it is analogous to a person who is thinking. She cited examples of machine learning to show that for example a program (like Searle's Chinese Room) might be able to get better and better at a command (like learning Chinese) until it has really more or less "learned" the command and can do new things with it.

      This reminds me of a point I thought of while reading Searle that he never addresses: surely after a certain amount of exposure to the Chinese symbols, he would get some understanding, right? Just through basic learning? So I feel like her argument in that sense was decent.

      BUT! I still feel like Searle's argument really elegantly highlights something we brought up in class, that I brought up to my friend about the movie Ex Machina.

      The robot is simulating being lonely and sad when it wants to get out of the room. Since it is an anthropomorphized feminine robot, it also sexualizes itself. My friend the computer scientist used this movie to try to show me how an artificially intelligent being could use machine learning principles to become so intelligent as to have its own true mind, and feelings, she argued. I feel like it is a huge leap to go from saying that a machine can execute certain commands to saying that it is a thinking, feeling, human-like thing.

      So I am ambivalent, because I love Searle's argument in terms of showing why anthropomorphizing AI using the Turing test is never a good idea. But I can see the case for at least AI that learns mechanically in a way that is similar to humans, and while it is not cognition per se, it is a complex process with a lot of coordination involved, similar to a neural network in the brain.

  15. This comment has been removed by the author.

  16. “Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.”

    There has been one major question on my mind while I read the paper...
    When humans are young children learning a language for the first time, I believe we are constructing our formal program through our formative years: through listening, speaking, reading, and from the environment and from school. Once you get to a certain age and have learned all the rules and words in, let’s say, the English language, you now have a formal program. It just takes years to acquire. If we as humans have now acquired this formal program, why do we say that “we understand language” rather than saying that “we have acquired a formal program, and for the rest of our lives and through everyday situations our brain is manipulating symbols, whether visual or auditory, as input in order to give output (speech, writing, etc.)”? In comparison to a computer or machine, it is given the formal program all at once, and each time it is given input, it manipulates the symbols to give output.
    I am by no means certain that we are just like computers and that computers work just like human brains, but I need some clarification to believe that humans are not symbol manipulators, because there seems to be evidence (as stated above) that it is possible that they are.
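    (For concreteness, here is a minimal, purely hypothetical sketch of what one tiny piece of such a "formal program" for language could look like: an ELIZA-style pattern-substitution rule. The patterns and templates are invented for the illustration. It shows how output that sounds sensible can come from nothing but symbol manipulation; whether the years-long human acquisition process leaves us with only this sort of thing inside is exactly the question being asked.)

        import re

        # A hypothetical, ELIZA-style fragment of a "formal program":
        # each rule maps an input pattern to an output template, purely by form.
        RULES = [
            (re.compile(r"I feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
            (re.compile(r"I am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        ]

        def reply(sentence: str) -> str:
            for pattern, template in RULES:
                match = pattern.search(sentence)
                if match:
                    # Reuse the matched symbols verbatim; nothing is "understood".
                    return template.format(match.group(1).rstrip(".!?"))
            return "Tell me more."  # default when no pattern matches

        print(reply("I feel that nobody understands me."))
        # -> "Why do you feel that nobody understands me?"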

    Replies
    1. I completely agree with your point! I don't think that, just because we are not constantly thinking about exactly which grammatical or syntactic rule we are applying when speaking our native languages (or even languages we have managed to assimilate fairly well but that are not native to us), we are not acting in a computational manner nonetheless! When you ask a native speaker of a certain language why one form of a sentence is 'correct' and another is not, they simply know, based on the fact that "it sounds right". In essence, this seems computational even if it's not accessible to conscious know-how! When a cognizing being understands a language there seems to be both a formal program that has been sufficiently integrated combined with the "extra something" that allows a human to understand the broad meaning of what is being said.

    2. I agree with your point, and was actually thinking that when reading Searle’s argument, but I’d like to elaborate a bit on Naima’s response. Searle does say that the human brain is a type of machine, but he differentiates it from a machine like a computer or a robot by saying it has “causal power”. When he says this, I believe he is referring to the fact that our brain is biological. Rather than just having the hardware of neurons firing in particular patterns, we have neurotransmitters and hormones released in different quantities, relying partially on probability, affecting those patterns. This is where we are different from other machines. Searle even says afterwards that other physical or chemical processes could produce these effects, and if we could program another machine with biology that way, then it could potentially have intentionality too. I think that the “extra something” is biological. If we were to reproduce it with chemistry and physics in the future, we would be posing an argument similar to the Brain Simulator Reply, which defeats the purpose of AI as a way to understand mental processes without using neurophysiology.

      Delete
    3. Hey guys, I really appreciate this point because I've been grappling with it for a while, but I keep going back and forth about its answer. To simply say that our brains are just computing when we're conversing in our native language seems to leave out feeling and colloquial understanding. If we really were computing with some English-to-English dictionary, then we wouldn't be able to grasp sayings that are seemingly nonsensical without context, tone, etc. But we are able to understand them, easily and in nuanced ways. Does this not make it seem like there is something else to it? I think this might be what Naima means when she says "When a cognizing being understands a language there seems to be both a formal program that has been sufficiently integrated combined with the "extra something" that allows a human to understand the broad meaning of what is being said." That said, I think the sentence Jordana wrote, "In comparison to a computer or machine, it is given the formal program all at once, and each time it is given input, it manipulates the symbols to give output.", might be sidestepping the limitations that a computer has in truly understanding and using language with the intricacy and multiplicity that comes so naturally to humans.

      Delete
  17. This comment has been removed by the author.

    ReplyDelete
  18. This comment has been removed by the author.

    ReplyDelete
  19. Searle has managed to put into elegant writing many arguments I have myself been trying to express. In essence, what I take away from this is that strong AI turns out to be no more than a "powerful tool," just as weak AI is.

    Several sections of the paper stood out to me:

    1) “there are many different degrees of understanding; that “understanding” is not a simple two-place predicate; that there are even different kinds and levels of understanding…” I agree with Searle that no matter how desperately one tries to convince us that the machine is truly “understanding” what it is doing, the issue of cognition is being evaded. He points out household examples such as a thermostat that has an “understanding” of the room temperature, or automatic doors with photoelectric cells that “understand” movement. Sure, we can continue to define and redefine how inanimate objects have different kinds and levels of understanding, but it still boils down to an understanding of nothing – “objects without intentionality”. When critics make these kinds of claims, it only serves to set AI back.

    2) “Our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them”
    Can this be interpreted as E. T. Jaynes's mind projection fallacy? Going back to the inanimate objects in 1) (thermostat, etc.), we attribute some kind of “understanding” to them while avoiding real statements about how these things actually work (the thermostat has a bimetallic strip that reacts to the temperature and carries electricity, yada yada). There is no intentionality in the thermostat, as Searle says, yet some claim that it has understanding, just as some claim that licorice has a bad taste. It is we who project this attribute onto these things; the things do not possess these traits intrinsically. It is not the licorice that tastes bad but our taste buds carrying information to which we assign some valence (tasting bad).

    3) “If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.” While reading this article, I supported Searle’s response to the systems reply. However, I’ve come to wonder about whether or not the “understanding” could lie within the system. I’m not saying that the understanding could be in the “bits of paper” containing Chinese symbols but that the room/system as a whole contains the understanding. Likewise, in the human brain we have input coming in, some internal state and then an output. This internal state is where our own understanding lies, not within particular neural sets. Perhaps to reach real understanding however, there must be learned associations with the symbols – that squiggle squiggle refers to orange and orange is a fruit which is a type of food you can eat and it has a citrus taste, etc. If we add background knowledge to the squiggles and squaggles, then would all the symbol manipulation still be meaningless and be interpreted as an “understanding of nothing”?


    ReplyDelete
  20. I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."

    From what I understand, The Systems Reply is saying that the person does not understand, but the whole system does. If the whole system consists of the programmers (and the rules they create), the person, and the interpreters, and the person in the room does not understand, then the understanding and intentionality all lies within the programmers and the interpreters. If we are referring to the person in the room as a machine, then this is like saying that a machine can think because its programmers and the people using it can. The “thinking” here is all in the brains creating the programs and interpreting the results, not in the computer itself. Just like with a thermostat, it manipulates symbols, but the “system” would consist of the people who created it and the people using it. They are the ones doing the thinking, not the thermostat itself.

    ReplyDelete
    Replies
    1. I understand what you mean, but I think he was trying to say that the system, taken as a Chinese-understanding system, in fact doesn't have any understanding of Chinese - because in isolation (without the minds of the native Chinese speakers), there is absolutely no comprehension of Chinese.
      In the same way, a thermostat has no comprehension of what changes in temperature mean (like heat or cool or whatever, the way the person who programmed it does), it only has the ability to react to those environmental changes.

      Delete
    2. Hi Amanda & Julie,

      I was also a little puzzled with the “Systems Reply” and did a few additional readings (took me quite a while, like I was deciphering Chinese!). Here is what I came to understand.

      So according to the systems reply, a defender of “Strong AI” (as Searle puts it) would say: sure, Searle doesn’t understand Chinese. He’s simply going through the instruction manual written in English, following instructions; all he understands is English (not quite clear how), and he doesn’t understand Chinese. However, the whole system of which Searle is just a part does understand Chinese. The whole system includes not only Searle, but also the instruction manuals, and perhaps even the symbols coming in and out of the room. So this whole system includes Searle as one of its parts, but it also includes the book with all the rules as well as the symbols that are being manipulated – and it is this entire system that understands Chinese. In essence, defenders of “Strong AI” are saying: yes, Searle doesn’t understand Chinese, but Searle is not what is running the program. What is running the program is the entire system, and it is the whole system that understands Chinese.

      But then Searle comes back with a reply: OK, forget about the actual room and the instruction books, and instead imagine that Searle memorizes all the instructions in the manual, and instead of receiving the symbols through some slot under the door, the symbols are just shown to him. Now, when he receives a symbol, he looks at it, thinks through his memory of the instruction manual he has memorized, and, given a particular Chinese symbol (input), he knows which Chinese symbol to generate (output). So now, Searle has internalized the system! He’s no longer just a part of the entire system, he IS the system – but he still doesn’t understand Chinese. So Searle is once again saying that the program is being run without anything or anyone understanding Chinese. But this time, we can no longer say that Searle is only a part of the system, because he is the entire system. He is the one generating the right symbols depending on the symbols being shown to him, but he still doesn’t understand Chinese.
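
      If it helps to picture what "internalizing the system" amounts to, here is a minimal sketch (in Python, with a made-up two-entry rulebook; the symbol names are obviously placeholders, not anything from Searle's paper): the whole procedure is a lookup from the shape of the input symbol to the shape of the output symbol, with nothing anywhere that refers to what the symbols mean.

      # Hypothetical fragment of the memorized rulebook: input shape -> output shape.
      # Nothing in these keys or values encodes meaning; they are just forms.
      RULEBOOK = {
          "squiggle-1": "squoggle-7",
          "squiggle-2": "squoggle-3",
      }

      def chinese_room(symbol):
          # Match on the form of the symbol only, never on what it stands for.
          return RULEBOOK.get(symbol, "squoggle-0")  # default reply, equally meaningless

      print(chinese_room("squiggle-1"))  # "squoggle-7" -- a "correct" answer, zero understanding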

      Hope I am not way off!

      Delete
  21. "It is important to know that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving information from the robot's perceptual apparatus, and I am giving out instructions to its motor apparatus without knowing either of these facts. I am the robot's homunculus but unlike the traditional homunculus, I don't know what's going on (8). "

    To me, this example serves to illustrate the difficulty of understanding cognition in general. The homunculus is an infinite regress, and I feel that sometimes when I am trying to understand why human cognition is special, I end up falling prey to the type of logic that ends in this sort of regress (but then everything comes together to make feelings - but what are feelings and how do they come together - I don't know).

    ReplyDelete
  22. “Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything.”
    “And the point is not that it lacks some second-order information about the interpretation of its first- order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned. All the computer has is more symbols.”

    Searle seems to be hinting at the symbol grounding problem here, but before getting into that, I fail to see how humans taking a word as input and mapping it to experiences of what that word represents differs from a pattern of activation in a computer referencing another pattern of activation that represents some real-world input. I see both as patterns mapping to other patterns. Furthermore, to say that computers are just symbols all the way down seems to imply that there is a strictly dynamical system in place that allows humans to ground symbols, which would be a stronger statement about why cognition cannot be purely computational. It confuses me, then, why this point was never made or asserted by the author, since with proper explanation it could be a valid and fairly strong, concrete point about where cognition and computation differ.

    Also, as a separate note, and something that has been asked or questioned already in the posts above, is the author's definition of "understanding". It seems to me from the reading that the way he uses the word requires some sort of introspection or prerequisite of consciousness in order to "understand" something. I can't say that isn't what he means, but that was the impression I was left with. I also have a hard time myself "understanding" a hypothetical situation in which one could "understand" something without cognition or consciousness in place. If someone could give me an explanation or theoretical example, that would help!

    ReplyDelete
  23. “Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins”

    I have mixed thoughts about this claim. In a sense, I do believe in the multiple realizability argument: consciousness could be achieved by different kinds of physical instantiation. However, a physical, digital, metallic robot couldn’t reach the level of complexity biology provides. There is something fundamental about biology that enables living organisms to learn and to evolve in their environment the way they do. Maybe that is what gives us intentionality, as Searle states. I believe the causal link biochemistry has with the emergence of mental states could hardly be translated into a program.

    In the Chinese room argument, Searle argues that the man in the room is only doing symbol manipulation and thus does not understand Chinese. What if this is what is happening in the brain at the level of neurons, and out of the highly complex and parallel processing, consciousness emerges? We know the interconnectivity of the brain is tremendous. AI might reach the same level of interconnectivity, but it would still lack the biological foundation we have. So when Searle says “Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality”, the “whatever” here could be the emergence principle. I don’t think it explains why we are conscious (and it is still very far from explaining the hard problem too), but in terms of my personal intuition, it gives a sufficient explanation for why intentionality would depend on a biological understructure.

    ReplyDelete
  24. "Very briefly, and leaving out the various details, one can describe Schank's program as follows: the aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings' storyunderstanding
    capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story."

    Here Searle describes Schank's proposal for a 'watered-down' version of the Turing Test where the only capacity tested is the capacity to infer information that is not explicitly present in a story. This is what Searle uses as his main illustration of why a computer program (no matter how 'properly implemented') cannot in effect understand as we do. I don't see how one could even imagine such a watered-down candidate ever being considered to 'understand' in the same sense as we do (since our understanding is based on a myriad of both explicit and implicit learning experiences). I definitely see the importance of Searle's argument with regard to what should even be deemed worthy of Turing Testing, and it convinces me that the T3 level is at the very least necessary for something to be considered a TT candidate.
    I would definitely like to learn more about why those who feel (or are convinced) that T2 would be sufficient feel that way! My knowledge of computer science is certainly too limited for me to posit with much conviction what an immobile computer would or would not be capable of doing!

    ReplyDelete
  25. http://users.ecs.soton.ac.uk/harnad/CM302/Granny/sld012.htm (thanks to alex for posting this)

    This is not directly related to Searle's argument, though tangentially it is. I am a little confused about the 11th granny objection! It states that 'computers are isolated from the world; we are not', and the rebuttal to that objection is that 'computers can be as interactive with the world as their input/output devices make them'. I am confused about what this response entails. What comes to mind when reading this is a program similar to the CYRUS program (also developed in Schank's AI group), not because it models a specific person (we've already established that the peculiarities of one person were not the point, but rather the overall basic capacities of any human brain) but because it sources its information from the internet. Is this what is meant by 'interactive with the world'? If so, it certainly seems too limited to establish 'understanding' in the same sense as us.

    ReplyDelete
    Replies
    1. Hey Naima, glad you found the link helpful. I think “interacting with the world” simply means processing external input and using that information to make changes in the world. So you're right, the CYRUS program interacts with the world:

      because it sources its information from the internet.

      It takes input from the world and uses that information to determine the appropriate action. Although I think there are simpler examples that still illustrate this point. For example, let's say you have a coffee machine that starts making coffee as soon as you get to the office every morning. The way this coffee machine works is that it is placed in a windowless room, and it has a sensor that responds to light. At night, when it's dark, the machine is off (0). But when you get there in the morning and turn on the lights, the sensor activates and the machine turns on (1). Then it starts making your coffee based on whatever instructions/program it has been given. It takes input from the world (light), and uses that to make a change in the world (coffee). So machines can be as complicated as we want, with however many inputs, programs, and outputs we need, but the overall idea is that these machines, and the computers that enable them to function, are indeed interacting with the world.
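
      Just as a toy sketch of that loop (the function names are invented stand-ins, not a real device driver), the whole "interaction with the world" can be as little as:

      # Minimal sketch of the light-triggered coffee machine described above.
      def read_light_sensor():
          # Stand-in for the real input device: pretend the office lights just came on.
          return 1

      def coffee_machine_step():
          # Input from the world (light) determines the change made in the world (coffee).
          if read_light_sensor() == 1:
              return "start brewing"
          return "stay off"

      print(coffee_machine_step())  # "start brewing"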

      If so, it certainly seems too limited to establish 'understanding' in the same sense as us

      Exactly! I completely agree, and I think Dr. Harnad is with us on this one too. Interacting with the world is definitely not enough to establish understanding. While computers are not actually understanding, they can still interact with the world, and that's why this Granny Objection is a non-starter.

      Delete
  26. Searle’s paper revolves substantially around the controversial/ambiguous terms ‘understanding’ and ‘intentionality’ (neither of which I have fully resolved the correct definition of). Searle strongly believes that understanding requires more than computation and the mechanical processing of information; therefore, his T2-level Chinese room argument states that the computer cannot actually understand and is merely performing symbol manipulation. So, when the brain understands some phrase or question, it is doing much more than shuffle around symbols to produce an appropriate output. If the brain is using computation because it is such a powerful tool (i.e. in the sense of performing operations based on the form of symbols), then should neuroscience work towards finding out where and how the symbols are encoded and represented? Would this be a necessary part of showing we understand? According to Searle, only a system with causal power can fully understand, and the brain is an example of a system of this sort. Is it right to conclude that Searle does not demonstrate that the brain is the only system that can have the causal power to understand? (i.e. T3, T4, T5 can understand?)

    “My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.”

    When Searle states his counterargument against the systems reply, it seems as if he is describing a T3 robot that interacts with the world and “works outdoors.” However, if this is the case, I think it is a stronger (and not necessarily true) claim that this robot does not understand. If the robot is able to interact with the environment, give answers by binding heard phonemes to what it is seeing with its ‘visual system’, and use this information (as well as previous knowledge) to formulate an appropriate answer, then I would argue that this is understanding (regardless of whether it is done in English, Chinese, or another language). It could be that the feeling of understanding is attributable to what it feels like to interpret and combine all the sensory stimuli and symbols to convey the correct output. But then again, as Harnad mentioned, the symbol grounding that occurs in T3 is still an act of doing, not feeling. Once again, this feeling is hard to describe concretely; thus, it seems we are no closer to knowing what it feels like to understand, and this raises the question of whether robots will ever reach this capacity (or, put another way, whether we will ever be able to accurately describe how/if robots understand).

    ReplyDelete
  27. I wanted to discuss Searle’s response to the Robot Reply, which was the one point of his argument that I found inadequate. While I think that (at least in the manner it was phrased in the paper) the original Robot Reply was weakly argued itself, I was nonetheless unsatisfied with what I think is a pretty critical point, especially if we are to accept T3 as the true TT (Stevan says).

    … the answer to the robot reply is that the addition of such “perceptual” and “motor” capacities adds nothing by way of understanding, in particular, or intentionality, in general… Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot’s legs or arms.

    Searle continues here by concluding that he is the “homunculus” inside this robot, and that since he, as the homunculus, is still not “understanding” in any way, the Robot reply is debunked. I could be misunderstanding the concept of T3 we have accepted in class, but it seems to me that he is underselling what the perceptual abilities of the robot can really do – importantly, the attachment of real-world meaning to the formal symbols that will be used in future computations. Wouldn’t a robot at the true T3 level be immune to his argument, because it’s no longer implementation-independent? I would intuitively feel that a T3 system enables symbol grounding.

    I’ll try and make my thoughts more clear with a kid-sib-ish hypothetical example. Let’s say the robot (with Searle sitting tight inside with his trusty manuals) ‘sees’ with its visual system a cat. That system then transduces the real-life analog cat to a digital image of a cat, but also presumably has the capacity to store this image, and beyond that, to recognize its important features and assign them to the category “cat” in its storage (memory) system. Now imagine that the Chinese symbol for cat is presented to the television camera system Searle mentions, and the Chinese symbol is paired with the image of a cat. The dynamical robot system can now pair that symbol with a meaningful category it has in its memory. If we then imagine that this happens with every Chinese symbol included in the manuals that Searle has, it seems to me that it becomes trivial that Searle himself doesn’t understand any of the symbol manipulation he is doing – he is just syntactically stringing together sets of symbols without any knowledge of their semantics, but these semantics (if what I’m saying makes sense) are grounded in another part of the robot. In other words, the system understands, even if Searle doesn’t.
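
    A crude sketch of the pairing I have in mind (the names and the toy "categorizer" are invented stand-ins for whatever the robot's sensorimotor system actually does, not anything from Searle's paper):

    # The robot's sensory side assigns the analog input to a category it has learned.
    def categorize(camera_image):
        # Stand-in for real feature detection / category learning.
        return "cat" if camera_image == "furry-four-legged-thing" else "unknown"

    grounding = {}  # Chinese symbol -> category derived from the robot's own sensing

    def pair(symbol, camera_image):
        grounding[symbol] = categorize(camera_image)

    pair("squiggle-cat", "furry-four-legged-thing")
    print(grounding["squiggle-cat"])  # "cat" -- the symbol now points at something sensed,
                                      # even though Searle-in-the-room never sees this table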

    I guess through this thought experiment I’ve inadvertently tried to make a case for the systems reply, but what I really wanted to focus on was his misrepresentation of a robot, at least insofar as we have discussed them.

    ReplyDelete
  28. On the basis of his 'Chinese Room' (CR) thought experiment, Searle reasonably rejects the claim that computation is sufficient for cognition, but prematurely dismisses the idea that it may be necessary.

    As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding?

    Searle takes his CR as a counterexample to the thesis that cognition is computation; the CR describes a system that is capable of passing T2 by means of formal symbol manipulation, but which lacks the cognitive capacity of “understanding” or “intentionality,” which a complete model of cognition must account for. I agree with this assessment. Searle goes on to ask whether computation might still be necessary for understanding, or if it has any role in understanding at all. A more relevant question is whether computation has a role in cognition in general—i.e. is formal symbol manipulation going on when cognition goes on, regardless of how or whether those symbols get their meaning?

    One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example.

    This claim, that understanding is simply the result of further symbol manipulation, is not the relevant claim to consider. Having conceded that formal symbol manipulation is not in fact sufficient for cognition, since it cannot account for the meaning of symbols—i.e., for understanding—the question that remains is whether it has a role in cognition at all.

    Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested – though certainly not demonstrated -- by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements.

    Searle correctly points out the assumptions that motivate the idea that computation plays a crucial role in cognition, but by limiting his discussion to “understanding” he again loses sight of the issue at hand. Even Searle has no problem with the assumption that a purely computational system could in principle pass T2. However, he assigns minimal importance to its conceivability, since he has homed in on “understanding” or “intentionality” as the mark of cognition rather than a mark of cognition. Satisfying the input/output functionality of cognitive processes is a victory in itself; that computational models can conceivably do so within the scope of T2 reveals the power of computation to explain at least this aspect of cognition.

    ReplyDelete
  29. This comment has been removed by the author.

    ReplyDelete
  30. The comments in this paper regarding dualism and biological processes (as well as many of the other arguments) have officially convinced me that computationalism cannot possibly be the process by which cognition, and as such, understanding and intentionality, occurs. The quote that made this the most obvious was from the conclusion:

    "No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not."

    This quote reformed my understanding of what computationalism was. While I did not agree fully with computationalism, I couldn’t seem to separate it from my understanding that neuronal firing results in all that occurs in the brain, be it random firing, or full blown cognition and the mind. I had been thinking of neuronal firing as a correlate to ‘inputs and outputs’ from computationalism, but in reality, this is not the case. Neuronal firing is a biological process, within another biological system. Previously, I had a looser understanding of that which can be classified as an input: I had thought that the biological process itself could be the ‘input’, and then through a combination of these processes (inputs), you have the output that is the mind. But the definition of inputs and outputs in this context cannot be stretched in such a way, because in doing so, it suggests that the ‘hardware’ is a part of the input, which by definition is not computationalism (which also makes the dualist argument so much more obvious). I suppose this would be the dynamics that we referred to in class. Dynamics also cannot be seen as a computational input because they are part of the hardware, that is, a part of the biological systems of the brain. They have a role in the equation that results in the mind, but being in the equation is not the same thing as being a computational input (my biggest ‘ah ha!’ moment). I now realize the difference between simulation and duplication, and what it means in terms of the brain. The fact that the actual thing cannot be reproduced in a computational simulation of lactation is exactly why we cannot assume that the actual mind is reproduced in a computational simulation of the brain. All we have is more symbols further simulating the output, not the actual real-life output; no actual mind, so no actual understanding. Simulated understanding, not duplicated understanding. This is a eureka moment!

    I also feel it is necessary to mention my appreciation for Searle, who, in his acknowledgements, extended one of the sassiest thank-yous that I have seen in academic writing. Understandably so, since he is clearly completely boggled by the pursuit of computationalism as a theory for cognition.

    ReplyDelete
  31. This comment has been removed by the author.

    ReplyDelete
  32. Overall, I enjoyed Searle’s critique of computationalism – and truly appreciated his stance on intentionality as a biological phenomenon: “Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomenon.” The systems/theories provided to understand the human mind do not support dualism, and I am left to ponder what level of biological complexity might be necessary or sufficient to manifest intentionality, or, in our terms, consciousness.

    Furthermore, it feels as if Searle’s attempts to invalidate the Turing Test and Computationalism, only go so far as to refute T2 – would this be a correct interpretation?

    ReplyDelete
    Replies
    1. I’m not sure it’s a correct interpretation; the way I understood it, T2 does not involve interaction with the outside world, while T3 does. Searle’s Chinese Room argument thus targets T3 because there is input from the outside world to the inside of the room. That way, even though there is input, it’s unintelligible, so even with interaction with the world there is not necessarily consciousness, and T3 is refuted.

      Delete
  33. I had not fully internalized (although I did accept) the idea that cognition is not computation until I read Searle's point that we wouldn't expect to obtain milk and sugar by running a formalized simulation of lactation and photosynthesis. Of course this is true! This does, however, lead me to think about whether we can have “fake” feeling, in the same way that we have artificial sweeteners or replacement milk. We often call food grown in a laboratory “fake”, even though it has the same composition. Two things: 1. Can we have simulated, fake feeling in the same way that we can create an artificial heart? Can there be said to be a substitute? 2. If not, then AI must be a bust, but then whether or not the AI is really thinking is just a question of semantics.

    ReplyDelete
    Replies
    1. Hey Nirtiac,

      I think you've raised some interesting questions that I'd like to tackle.
      " 1. Can we have simulated, fake feeling in the same way that we can create an artificial heart? Can there be said to be a substitute?"

      If I understand your question correctly, I think the answer is yes. 'Simulated' feeling, that is, feeling which was induced by some extrinsic force (say transcranial magnetic stimulation which can modulate the excitability of cortical surfaces and induce the feeling of seeing spots in one's vision) is still feeling all the same. Just because the feeling is not generated endogenously (by endogenous neural substrates), doesn't mean that it's not still felt. In the same way, artificial hearts and milk are not endogenously generated, but still behave as a normal heart and milk would.

      "If not, then AI must be a bust, but then whether or not the AI is really thinking is just a question of semantics."

      Hmm. This makes me think you meant 'computationally simulate' feeling... If we're going by a strict definition of computational simulation, then feeling would certainly not be a possibility since a simulation is syntactic symbol manipulation on the basis of form not meaning. You can certainly simulate cognition (and I suppose feeling by extension) but as per Searle's CRA, there is no understanding, or feeling of understanding in computation, hence 'feeling' cannot be 'simulated'. I don't see why this should mean that AI is a bust. What's wrong with T3 robots that break free of this computation-only restriction? We don't know whether they'll be feeling, but we certainly can't rule it out!

      Delete
  34. Searle’s Chinese room argument is intuitively convincing, in the way that Greek myths were used to explain natural phenomena. Although his description of the man in the room seems to solve the problem of the mental in AI, Searle’s representation is an oversimplification of the complex mechanism of the brain.

    Searle asks “where is the understanding in this system?,” a system in which a man receives instructions and turns valves on or off accordingly and produces an output without understanding the formal instructions he was given. Drawing a parallel, he asks where the understanding would be in a brain simulator. While this seems like a logical question to ask, it doesn’t hold much weight as we can ask, “where is the understanding in the brain?” Granted, we can’t point to a specific interaction or even area in the brain simulation and say “the understanding is right there,” but we also can’t say anything about the understanding in our brain.

    “the man certainly doesn-t understand Chinese, and neither do the water pipes”
    It seems ridiculous to ask whether the formal program or even the physical hardware of the brain simulation has an understanding, but we cannot discredit the simulation just because its individual parts do not possess an abstract function that arises out of the system: individual neurons also don't have an inherent understanding of what they're doing, but there is an understanding that occurs when a certain sequence or pattern of neurons is activated.

    “The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain”
    It seems that Searle disagrees with the idea that mental states arise as a by-product of brain processes, or, in other words, that mental states just are brain processes. What is wrong with simulating the formal structure of the brain? Certainly, neither neurons nor neuronal networks have an “intentional state” that must be reproduced in a simulation. The intentional state is what arises from the sequence of neuronal firing; it is not that certain neurobiological molecules have the causal properties of the intentional state.

    It seems that Searle adopts a dualist point of view: he denies that the formal structure of the sequence of neuronal firings produces intentionality – there lies something else about the powers of the brain that produces understanding, perhaps a “soul.”

    ReplyDelete
  35. "One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same -- or perhaps more of the same -- as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program."

    This made me think of language acquisition and the ongoing discussion of whether or not second or third language acquisition can ever be native-like. I don't believe there is an agreed upon theory, but there are many different theories, some, for example, claiming that second language speakers can never be native-like due to constraints imposed by their first language. If this turned out to be true, this means that the human mind is incapable of "constructing a program" which will have the same outputs (and perhaps also the same inputs, if they have difficulty distinguishing between the unfamiliar phonemes of the second language) as native speakers. Of course, this does not discount the possibility of native languages being constructed as programs. Still, it does suggest that not all of cognition is computation.

    I was rather confused throughout the article about Searle's use of the word 'intentionality'. I assumed it had something to do with intention, but I had to look it up to be sure. Turns out it is a philosophical concept defined as "the power of minds to be about, to represent, or to stand for, things, properties and states of affairs. The term refers to the ability of the mind to form representations and should not be confused with intention." Is this what he meant by intentionality?? Or was he talking about intentions? Because the article reads differently depending on which meaning he intended. Perhaps I missed something and he did clarify that at some point?


    ReplyDelete
  36. After reading this I find myself going in circles. I completely agree with Searle when he says that no program can have understanding. I really liked his reply to the systems reply, when he says: let the man internalize the system and he still doesn't understand Chinese. But I also get stuck when we think about the brain: what else could it possibly be doing that isn't computation? Our view so far is that neurons interact in certain ways, firing or not firing depending on the inputs of other firing neurons. It's hard to imagine the brain is doing anything else that cannot be represented with computation. The robot reply is tempting to agree with, but if we allow the idea that letting the robot see and hear gives it cognition, or the ability to understand, then you are also somewhat saying that a human born blind and deaf will never have cognition or the ability to understand.

    I am unwilling to accept that, so I am back to the place where there must be something inherent to the brain itself that allows it to understand and to have cognition, but it can't be just computation. Yet so far that's the only thing we know the brain does for sure.

    ReplyDelete
  37. Response to “the Robot Reply”:
    The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.

    This excerpt is an answer to the counterargument to Searle’s Chinese room thought experiment. The counterargument posits that a robot governed by the Chinese room and acting in a way indistinguishable from other human beings can be said to have a mind.
    I am quite fond of the above counter-counterargument. Although Searle disagrees with the robot reply, he does not completely tear it down, but instead acknowledges its valid points, such as the fact that “cognition” is not only symbol manipulation but also encompasses relations with the outside world. It did not seem like he had taken that into account previously; however, here he is able to incorporate it into his argument in a cohesive and sensible manner. For in the end, be it with regard to perceptual or motor functions, all can be reduced to a Chinese room, as he clearly explains.
    However, the one point that I am slightly unsure of is this “television camera” that would feed Chinese symbols to the room. If the camera is connected to the outside world (simulating sight), how can we be sure that the only input would be Chinese symbols? Wouldn’t there be other images? Were that the case, wouldn’t these images somehow influence the person inside the room, or yield some kind of understanding? In which case the complete lack of understanding would no longer be present… I am slightly confused about this, because in that case it would no longer be pure manipulation of formal symbols.

    ReplyDelete
  38. Searle writes in The Combination Reply: “Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system” (pg. 9). Any such machine would have passed T1, T2, and T3, also known as the Total Turing Test. The Turing Test hierarchy explains that a machine that passed the Total Turing Test would be “indistinguishable in both text-based and total external sensorimotor (robotic) ability” (Cullen, 2009). Searle argues that such a robot would be considered to have intentionality, so long as we were unaware that the computer brain was guided by a formal program. Searle argues against the suggestion that if a computer has the right sort of program, this would be sufficient to create understanding in that machine. He states that formal symbol manipulations (which would create the “rules” by which the computer would “think”) by themselves don’t have any intentionality, and “such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output”. It seems like Searle’s conception of intentionality may be better described as consciousness, or self-consciousness of one’s own thoughts and behaviors. This consciousness may be sufficient to allow the machine to form its own beliefs and deviate from its programmed “thinking” to a process more similar to conscious human thought. What dimensions of consciousness/thought/intelligence do you think Searle meant to encompass in his use of the term “intentionality”?

    Cullen, J. (2009). Imitation versus communication: Testing for human-like intelligence. Minds & Machines, 19, 237-254. DOI 10.1007/s11023-009-9149-3

    ReplyDelete
  39. "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?

    [No] Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output."

    I think this question and answer nicely summarize the point that Searle is trying to make in his Chinese Room argument. Ultimately, his refutation of computationalism (which I find very convincing) seems to be founded in the symbol grounding problem. How could cognition = computation if there is no true understanding involved in symbol manipulation? In some sense I sympathize with Searle when he mentions that mental phenomena are biological byproducts (at least I find the idea of a conscious machine conceivable but the things which most people believe are conscious seem to only be biological). I am unlikely to doubt that another human or a biological organism with a similar nervous system is feeling pain when they step on something sharp and cry out, yet if a robot (with some sort of mechanosensory mechanism of transduction) steps on something sharp and cries out, I am not as inclined to believe that the robot truly felt pain - couldn't it just be that the robot has a rule that IF a certain mechanical input has the features of being sharp, THEN it will execute a shout?
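
    In other words, a rule as shallow as the following (a deliberately silly sketch, not anyone's actual robot design) would produce the "right" behaviour with nothing felt behind it:

    # A pure stimulus-response rule: sharp input -> shout output, no pain anywhere.
    def mechanosensor_reading(surface):
        return {"pressure": 9, "sharp": surface == "tack"}

    def robot_step(surface):
        reading = mechanosensor_reading(surface)
        if reading["sharp"]:
            return "OUCH!"  # the shout is just the rule's output, not evidence of feeling
        return ""

    print(robot_step("tack"))  # "OUCH!"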

    ReplyDelete
  40. After reading Searle’s paper, I believe I have a more thorough understanding of computationalism and cognition. However, there are some points I wanted to touch upon that I am still unclear about. Firstly, the paper refutes the concept of Strong AI and does not attempt to object to the concept of Weak AI. In describing Weak AI, the article states, "the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion”. Although the author believes there is little to object to when it comes to Weak AI, perhaps there is a bigger problem rooted in its existence. How are we confident that Weak AI is providing us with more answers, and not simply more questions? If there are so many doubts when it comes to Strong AI, why should we not go back and analyze the concept of Weak AI?

    Another point I thought was very interesting was in regards to the different degrees of understanding. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine knows how (understands how to, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature." This is further supported by Newell and Simon (1963), who write that the kind of cognition they claim for computers is exactly the same as for human beings. This is very interesting in the sense that it sheds light on our metaphoric attribution of understanding to objects. However, I don’t think it is reasonable to draw a parallel comparison between a door understanding when to open, a computer understanding, and a human understanding. These are all very different concepts of ‘understanding’ - the first being an extension of our own intentionality, and the second and third still being highly controversial. I think they all revolve around different definitions of the word ‘understanding’, which is why they cannot be directly compared in the first place.

    ReplyDelete
  41. "Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view that is, from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese...

    ...The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. "

    The fact that Searle (or anyone/anything) would not experience understanding is an assumption. Perhaps an intuitive one, certainly one that is hard to disagree with (I don’t necessarily), but an assumption nonetheless. Presumably, if Searle had wanted to defend computationalism, he could just as easily have constructed a thought experiment very much like this one, with the only difference that, to begin with, he assumes understanding is an obvious by-product of properly instantiating the right program. That is, that the "right” program, instantiated in the “right” way, would ‘magically’ yield understanding by virtue of being just such a “right” program. Given that he has not actually conducted the experiment of instantiating such a program (as of course he never could), there is no real reason he should claim that a program as he has imagined it (a simple set of instructions, a full set of Chinese symbols) could somehow produce the desired output, as he assumes it does, yet not inevitably create understanding. Perhaps he is not letting his imagination run as wild with the capabilities of the program as he is with the supposedly obvious results (i.e. perfect performance on the Turing test).

    ReplyDelete
  42. I am curious about the following: let's consider a machine that could only compute some situations and not others (just as many humans can understand some things but not others). Would Searle consider such a machine a suitable description of human cognition?

    ReplyDelete
  43. I thought it would be interesting to investigate what Searle means when he is talking about “intentionality.” I can’t help but see this as a borrowed philosophical term, and I do think the concept plays a central role in the main claim of his paper. His first claim (1) is that “certain brain processes are sufficient for intentionality.” The claim he targets is basically that instantiating the right program (for instance, one that passes for understanding a language such as Chinese) is sufficient for intentionality. He argues against this by creating the Chinese Room thought experiment: he wants to show that simply turning on the machine (instantiating the program) is not sufficient to establish intentionality. From outlining his argument a little bit, I think it’s clear that Searle uses this concept of intentionality quite a lot.
    So how does Searle explain what he means by the word intentionality? I think he really clarifies his understanding of the concept in light of how a person might “understand” the roles and purposes of the objects around them. I think Searle is employing a phenomenological understanding of intentionality. He says that we extend our intentionality into the objects around us. Tools become extensions of the purposes belonging to the perceiver of those tools. It seems “natural” to make metaphorical attributions to a world of objects which are ready-to-hand and within-the-world (to use terminology employed by Martin Heidegger). These attributions are “natural” in the sense that, at least according to Heidegger, they exist as primordial ontological assumptions. We carry an understanding of what it means to be, and we throw it at the objects we encounter – simultaneously forgetting the role of that understanding.
    Anyway, I don’t think Searle is the first to employ Heidegger’s phenomenological concept of intentionality for the purpose of his claims. Phenomenology has had a huge impact on existential psychoanalysis, and on humanistic psychology in general. What I’m saying is that I see a strong philosophical basis for Searle’s understanding of what computation isn’t. Computation is not sufficient for intentionality. That is to say, understanding instructions in the sense of a machine performing symbol manipulation is not the same as extending our own intentionality into the artefacts of the world. Moving around the shapes of squiggles according to given rules is very different from throwing purpose into the meaning of words.

    ReplyDelete
  44. “according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states”

    From the article, we learn that Searle is essentially manipulating symbols, based on squiggles and squaggles and not based on their meanings. This is what computation is. Hence when we look at the Chinese room experiment, we see that what is being done is computation and not cognition. There is no understanding of Chinese but the end result is Chinese.

    So, what we see is that the inputs and outputs are the same when it comes to a computer and a ‘thinker’ but the process of getting to the output from the input is different.

    This leads me to the example of how humans used to mimic birds to try to learn how to fly. They failed. But then the Wright brothers came along and created an airplane, which was not something that mimicked birds but something that was inspired by them. Similarly, I am of the view that computation is not going to be cognition, but that similar end results can be achieved despite the two not being the same.

    But when we take the Strong Church-Turing Thesis into account, we see that everything can be simulated by computation. Computationalists believe that ‘thinking’ is hardware-independent. But simulation is not doing. For a sense of doing, we need hardware to be linked to the software. Hence, can we say then that, with the right hardware, computation can help us achieve the end results/output of cognition (feelings aside)?

    ReplyDelete
  45. Searle states, "So there are really two subsystems in the man; one understands English, the other Chinese."

    By this phrase, it seems like Searle is saying that a person is able to tell what a word means in one language but not the other. For example, a person who speaks English but does not understand Chinese knows what “celery” means in English, but is unaware of what the corresponding Chinese symbol refers to. Then, is it safe to conclude that the computer has an understanding of the language it uses to manipulate the other language? Maybe the machine understands the language giving the instructions and not the one being manipulated? Then why couldn’t a machine have the capability to manipulate things that have content? Why can’t it do much more than computation?

    ReplyDelete