Saturday 2 January 2016

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

95 comments:

  1. (Somewhat unrelated to this article)

    Question re: the Hard Problem, Hofstadter, Gödel, Principia Mathematica, and computationalism

    I have a question regarding the hard problem and feeling. Gödel's theorem discrediting Principia Mathematica, by showing it is impossible to create a complete system that is in no way self-referencing and thus without contradiction, refers to how the strength of the system itself creates the inevitable byproduct of self-reference. This question is kind of brought up by Hofstadter in GEB: An Eternal Golden Braid, where he theorizes that it is the patterning of infinite loops resulting from inevitable self-reference within complex systems, as pointed out by Gödel, that creates meaning (he provides a layman's explanation by saying that this patterning is the connection between the inanimate and the animate).

    I am wondering why human cognition cannot be an example of such a system. If it is feeling that separates animals from the inanimate, as we said in class, then this must be the same concept that Hofstadter refers to, which he attributes to 'infinite loops'. 'Infinite regression' or 'eternal contradiction' (if I recall correctly, all three terms are used in the book) create the experience of feeling, because it can be argued that feeling is the meaning that arises from the experience of doing (Hofstadter argues true meaning only exists within a pattern, not as an individually interpreted symbol). I think this is what one of the students in class today may have been intending with their comment... they brought up something about "feelings being simultaneous", but they couldn't quite define what they were simultaneous with. If an action or experience has meaning because of the feeling, and the feeling is the product of a self-referencing system that creates an infinite loop with meaning, then it could be said that some infinite loop is triggered by each experience which gives rise to the internal feeling of that external experience, and thus the idea of a "soul". (Sorry, that was not kid-sib friendly...)

    Now I am not a compsci student, so I am not certain that something like Principia Mathematica is purely computational or if it is a hybrid of both dynamic and computational interaction. My impression had been the latter. So I am not stating this to give credit to computationalism, but rather, to suggest that feeling could be the result of a particular type of patterning that occurs within the dynamics and computations of the brain, one that is so distinct that it creates a result we would maybe call feeling, and the type of patterning Hofstadter calls infinite loops. (I don't mean to suggest that some circuit in the brain would go on forever, but more that there could be some neural correlate to this concept).
    [continued in next comment]

    Replies
    1. [continued]
      I know this explanation does not clearly define what an infinite loop would be in terms of neuroscience. It also still does not solve the other minds problem that seemingly makes the hard problem unsolvable, since no matter what we produce, we still cannot be certain we have produced feeling (like today's explanation for why T4 and T5 are inadequate). But if we were to create a robot or some AI system that, in its perfection, created self-reference, contradiction, and as such, infinite loops, could this be a viable theory on how to recreate feeling? I'm troubled by the arguments that you made today, that there is still no real way to know that we have succeeded without the discovery of a force that is in itself feeling. Not troubled in terms of understanding; more so that I don't feel we have yet reached a contradiction to imply such conclusive impossibility. I am also then even more uncertain, because by Hofstadter's logic, if a contradiction were to be found, this contradiction could actually be the beginning of the meaning itself, assuming it took some form of an infinite loop. So long as we assume feeling to be a byproduct, is there any way to test Hofstadter's theory? Or is the attempt to do so an infinite loop itself, like trying to solve the wave/particle dilemma in physics? To rephrase, I feel like until a contradiction with respect to feeling has been discovered, we cannot conclusively say that the hard problem is unsolvable (or solvable). The irony in that is that while in logic, contradiction is sufficient for disproving something, by Hofstadter's theory regarding infinite loops, complex contradiction is the creation of true meaning itself.

      I don't know if this makes sense... I am incorporating some ideas that I admittedly do not necessarily understand completely, which makes this a little more complicated. I was the one who brought up the Higgs boson today, but after thinking deeper into it, the possibility of finding another force, after the Higgs boson look-alike was shown to act in the necessary way to explain the interaction of all other forces (especially the irregularity of the weak forces), seems more and more implausible. But I am curious about your thoughts on the things I mention above. Thank you!

    2. Hofstadter Hermeneutics I

      Hi Esther: The liar's paradox ("This sentence is false") is based on self-reference (or rather on self-denial) and leads to a paradox (if it's true, it's false and if it's false, it's true). Russell (the author of Principia Mathematica, an attempt to reduce arithmetic to logic) turned the liar paradox into a set-theoretic paradox ("Does the set of all sets that do not contain themselves contain itself or not?"). Then Goedel did a much more sophisticated construction in which he Goedel-numbered all the terms and strings of terms in arithmetic and showed you could always generate a theorem with Goedel number G that says "The theorem with Goedel number G is not provable." This is not a paradox: The theorem is true, as we see (it doesn't say "I am false," it just says "I am unprovable.") Turing did another proof of the fact that there are always unprovable theorems in arithmetic which, as far as I know, proves the same thing, but without using self-reference.
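
      Schematically (a standard textbook rendering, not Goedel's own notation and not a quote from the article), the diagonal construction yields, for any sufficiently strong consistent formal system F, a sentence G_F such that

        % Goedel sentence (schematic): G_F asserts its own unprovability in F
        G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)

      If F is consistent, G_F is true but not provable in F: incompleteness, not paradox.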

      I think the one obsessed with self-reference is Doug Hofstadter! (But since he is a vegan, he is forgiven for these self-indulgences.)

      Self-reference and its paradoxes have next to nothing to do with consciousness, nor with the animate (living) vs the inanimate (there are living things, like plants, bacteria and microbes, that [probably] don't feel) and especially not with the "hard problem" of explaining how and why (feeling) organisms feel. The self-referential puzzles are a hermeneutic red-herring, mostly arising from an obsession with self-consciousness (an obsession I've never shared or understood). (Kid-sib: hermeneutic means "a matter of interpretation" rather than fact.)

      Feeling does not separate human animals from non-human animals. Both feel. And Doug Hofstadter's loops have nothing to do with it...

      "'Infinite regression' or 'eternal contradiction'... create the experience of feeling, because it can be argued that feeling is the meaning that arises from the experience of doing"

      That's what I mean by hermeneutics. Astrologists do hermeneutics too, and Freudian psychoanalysts, and religious believers, and Marxists etc. etc. Anyone who has a belief system in terms of which anything can be interpreted. That is exactly what causal explanation is not.

      "(Hofstadter argues true meaning only exists within a pattern, not as an individually interpreted symbol)."

      You'll have to try telling that to an annelid, a little worm whose only feeling in life may be "ouch".

    3. Hofstadter Hermeneutics II

      "If an action or experience has meaning because of the feeling, and the feeling is the product of a self-referencing system that creates an infinite loop with meaning, then it could be said that some infinite loop is triggered by each experience which gives rise to the internal feeling of that external experience, and thus the idea of a "soul"."

      And it's also too fanciful an explanation to fit an annelid, yet with annelids (assuming they are not Zombies, but feel pain) you already have the full-blown hard-problem.

      "by Hofstadter's logic, if a contradiction were to be found, this contradiction could actually be the beginning of the meaning itself, assuming it took some form of an infinite loop. So long as we assume feeling to be a biproduct, is there any way to test Hofstadter's theory?"

      No, because it's not a theory. It's just an interpretation.

      "I feel like until a contradiction with respect to feeling has been discovered, we cannot conclusively say that the hard problem is unsolvable (or solvable)."

      You are right. I certainly have not given a proof that the hard problem is unsolvable. (That's why I said it's just "Stevan Says.") I've just pointed out some things that make it unlikely to be solvable. Here's another one: In a Darwinian world, -- where the name of the game (of life) is doing whatever it takes to survive and reproduce -- once you have fully explained doing, not only are there no causal degrees of freedom left over to explain feeling with, but feeling looks kind of superfluous: The T3 mechanism itself would explain how and why organisms can do what they can do. It generates the capacity, indistinguishable from our own. And doing the right thing is all that's needed to survive and reproduce. So what is the causal role of feeling? (To put it another way, why should injury hurt? Isn't it enough to detect it, and be able to do what it takes to escape it, and learn to avoid it? And that's what T3 explains. So why on earth does any of that doing-capacity need to be felt?)

    4. I think there may be many possible adaptive reasons why things need to be felt. After all if we assume so many other species can feel in the sense we do, I have a hard time believing there isn't an adaptive purpose behind it. However, I admit that there is an inability to test any hypothesis on this since we cannot manipulate the "felt" experience.

    5. William, do you have an example of an adaptive reason for why beings feel? (I don't mean to put you on the spot, I'm just curious). I've been thinking about that question a lot and I don’t have a good answer.

      The closest idea I have is that feeling would give an additional incentive for living beings to ruminate on past or future events. If something was painful, and it felt painful, this could cause the being to think about the causes and conditions leading up to that event, in order to act differently in the future (so as not to be injured and increase the chances of passing on one’s genes).

      While this doesn't make feeling necessary, feeling could be a means to accomplish this end. Of course, it is possible the same could be accomplished without feeling, so I'm not sure how good my speculation is.

      I think the causal explanation for why we feel is exactly for learning itself. Sure, computers may not need to feel to learn but that doesn’t detract from the fact that it is integral to our learning capabilities. Perhaps the first life form to “feel” did so by a complete evolutionary accident and then for humans, feeling has been conserved by evolution since it is extremely adaptive. For example, those who do not feel pain due to congenital defects usually do not make it to middle age. Feeling is our way of detecting things, learning from them and remembering them. If you tell a child “don’t play near the bee hive, bees sting” they can understand that bees are dangerous without feeling anything but they’ll understand much, much better if they feel the pain of a bee sting and are forever cautious around bees. So I agree with Joseph that feeling might not be necessary but is one means to an end. I’m not sure that this consigns feeling specifically to humans or animals though – maybe a computer won’t “feel” in the way that we do (essentially serotonin and dopamine) but could do so in other ways.

    7. I would like to touch on an underlying confusion that I seem to be feeling more strongly as we progress in the course, about the difference between what is meant by 'how' and 'why' our brain does what it does.
      When one discusses finding the causal mechanisms underlying what the brain does, one is looking for what 'things' are responsible for producing the broader 'output' (I'm at a loss for a better term; 'behavior' somehow didn't seem appropriate) observable at the level of the cognizing being. Why would a neuroscientific account (once it has been sufficiently completed) not be sufficient to solve the 'how' part?
      Part of the explanation would be that correlation is not causation, and that being able to trace what kinds and what doses of neurotransmitters are being released, and in what areas of the brain at a certain time, could only count as a correlate of whichever experience is going on at the same time as that mechanism in the brain.
      I guess I am having difficulty wrapping my head around the fact that there could be an elusive 'third factor' causing the other two.

      In terms of 'why' our brain does what it does, I can't seem to envision how explanations for this would be anything more than speculation (though these could certainly be very strongly argued and convincing speculations), though I guess this comes back to the underdetermination issue.


    8. WB & JT: Sure, feeling must be adaptive. The problem is explaining how and why. Sure, we feel; Descartes' Cogito already tells us that. And sure it feels as if feeling is useful, and that we wouldn't take our hand out of the fire if we didn't feel it. But to take a hand out of the fire all that's needed is to detect the tissue-damage and act on it, not to feel it. Today's robots can already do that.

      AV: Sure it feels like something to learn, and we learn from feeling, for example, the bee sting. But isn't all that's really needed just to detect what needs to be detected and do what needs to be done? If that can all be done without feeling too, then what's the function of feeling? The introspection that "it feels necessary to feel" is not an explanation.

      NK-F: The test of a causal explanation in reverse-engineering is whether the device built using the mechanism you claim is generating the performance capacity is indeed generating the performance capacity, given the same inputs. If you reverse-engineer a heart with a pump-mechanism, your mechanical heart can really pump blood. You can observe that it can and does, and you can explain exactly how and why. With cognition, the internal causal mechanism (T3) has to be able to do everything we can do. Again (if you ever succeed in designing T3), you can observe that it can do anything thinkers can do, and you can explain, causally, exactly how and why it does it.

      But how do you reverse-engineer feeling? First of all, you can't observe anyone's feeling but your own. (That's the Cogito + the other-minds problem.) But you can take it on faith and probability that people do feel when they say and act as if they do. So suppose you find the brain correlates of feeling -- the parts of the brain, or the activity patterns, or the chemicals that, when they are there, and active, people say they feel, and when not, not: Have you explained how or why the brain causes feeling, the way you have explained how the heart pumps blood or the brain pumps behavior?

      Both the how and why matter: Let's simplify and say that whenever something is felt, the "red nucleus" pulses, and when not, not: Is that (or a more complicated version of that) a causal explanation of how the brain generates feeling? Isn't it still a complete mystery how the red nucleus generates feeling? In contrast, with (observable) sensory input and (observable) internal processing (whether dynamic or computational) and (observable) motor output, there is no mystery at all, any more than there is in a mechanical heart or a vacuum cleaner.

      And that's not all. There's also the unanswered "why" question: Ok, so suppose the red nucleus (somehow) causes feeling: Why? What does the feeling add that the performance capacity (the "easy" problem) has not already delivered? In fact, once all is said and done, what's left for feeling itself to "do"? What (if anything) is the value-added of feeling, over and above the doing capacity, which the solution to the easy problem has already generated and explained? Even if feeling started from a random mutation, what was felt doing's adaptive advantage over just plain done doing? (The "goal" of evolution, after all, is to get the job -- of surviving and reproducing -- done, and that's all done by just doing!)

      Evolutionary explanations of the "why" of doing capacity (the easy problem) we will be discussing in week 7, on "evolutionary psychology."

  2. “ The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false” (Harnad).

    I remember asking about how Searle thought something that passed T3 could still be incapable of thinking, but now this does appear to be unprovable. As Stevan says, Searle is basically the software, but the additions that Searle talks about in terms of sensorimotor capabilities in some sense do have to be dynamical. Because these components are dynamical, Searle cannot claim the system does not have understanding, because he can no longer speak for the dynamical components, since they are not part of him as the software.

    “If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being a mental state is just a computational property)” (Harnad).

    I have a hard time accepting this statement. Yes, if one physical implementation device implements a computational system, then all other physical implementation devices can implement it as well because they can be reprogrammed. However, if any of these devices lacks a computational property, then I don’t think you can generalize that all must lack the computational property. It might just mean that it needs to be reprogrammed. But this doesn’t affect the rest of the paper because Searle’s periscope shows that if mental states were computational states, then it would be possible to solve the other minds problem (which is largely held to be unsolvable) or they are unconscious states (which doesn’t make sense since understanding is necessarily conscious).

    Replies
    1. I was referring to computational properties. Two physical implementations (hardwares) running the same algorithm (software) can have wildly different physical properties, but if they're both running the same algorithm, they are doing the same computation. If they are doing different computations, that's the same as saying they are not running the same algorithm. Computations are just formal, not physical.
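
      To make implementation-independence concrete, here is a minimal sketch (in Python; purely illustrative, not from the article): one and the same formal rule table, executed by two structurally different "substrates," yields identical input/output behaviour, and that sameness of the formal mapping is all that "same computation" means.

        # Minimal illustrative sketch: one rule table ("software"), two executors ("hardware").
        # The executors differ in their internal workings, but both implement the same
        # formal symbol-to-symbol mapping, so they count as the same computation.

        RULES = {"squiggle": "squoggle", "squoggle": "squiggle"}  # toy, meaningless symbols

        def executor_a(symbols):
            # "Substrate" A: a list comprehension over the rule table.
            return [RULES[s] for s in symbols]

        def executor_b(symbols):
            # "Substrate" B: an explicit loop and a mutable buffer -- physically
            # different steps, same formal input/output mapping.
            out = []
            for s in symbols:
                out.append(RULES[s])
            return out

        tape = ["squiggle", "squoggle", "squiggle"]
        assert executor_a(tape) == executor_b(tape)  # same computation, different implementation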

    2. This comment has been removed by the author.

    3. This comment has been removed by the author.

    4. Could you elaborate a little more on “Searle’s periscope” and its implications? From what I understand, it is simply the notion that computational systems which are the same share the same computational properties regardless of their implementation i.e. if the software is the same, it will have the same properties across all hardware. And as William pointed out, (according to computationalism) this solves the other-minds problem since one could reach the same mental state as another since mental states would just be reproducible computational states.

      Maybe where my confusion is coming from is the fact that you dubbed the term Searle’s periscope, when in fact Searle would dispute this. After all, the CRA is arguing that mental states (or “intentionality”) are not at all computational states. Any clarification on this term would be great!

  3. “Searle thought that the CRA had invalidated the Turing Test as an indicator of mental states. But we always knew that the TT was fallible; like the CRA, it is not a proof. Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle be the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false.”

    With regards to the “non-computational T2-passing system,” what would make the system non-computational and what could the system look like? I am just confused as to how to visualize such a system. Would it be something governed by dynamical systems that take the form similar to physical systems described by differential equations? If so, how would we apply this to cognition and the CRA?

    On another note, I understand how T3 is not entirely vulnerable to the CRA but I am still interested in how a T3 passing system would be able to understand. All that is added is sensorimotor characteristics, but, correct me if I’m wrong, the robot would still operate on a program in order to interact with the world, thus not much would have changed regarding how it communicates with others. As a result, if it is just manipulating verbal symbols, or sensorimotor symbols, how would it understand? I guess one thing I may have wrong would be that the T3 passing system would necessitate it to be “non-computational” or a hybrid, which then reverts me back to my previous question above.

    Replies
    1. For a noncomputational T2-passing system, just imagine a robot or even a mechanical lung, rather than a computer. A computer (Turing Machine) just manipulates symbols (Searle too can do that) and its physical details are irrelevant. By doing the same thing the computer does, Searle can become "the (whole) system." But if it's a robot (or a mechanical lung), then it's not just manipulating symbols, and its physical details are not irrelevant, so Searle cannot become "the (whole) system" any more than he can become you or me (or a real -- as opposed to a computer-simulated) snowstorm.

  4. “Can we ever experience another entity's mental states directly? Not unless we have a way of actually becoming that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it.”

    I’m a little confused about how we could experience computational mental states. How could we get into the same computational state as the entity in question? Or is this what the CRA example is all about?

    “A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that unconscious understanding would be, in virtue of the computations. This last is not an arbitrary speculation, but a revised notion of understanding. Searle really has no defense against it, because, as we shall see (although he does not explicitly admit it), the force of his CRA depends completely on understanding's being a conscious mental state, one whose presence or absence one can consciously (and hence truthfully) ascertain and attest to (Searle's Periscope). But Searle also needs no defense against this revised notion of understanding, for it only makes sense to speak of unconscious mental states (if it makes sense at all) in an otherwise conscious entity.”

    I agree with this point intuitively but I would be interested in seeing some evidence or rational demonstration for it. Computationalism isn’t arguing that computers are conscious entities, only that they have a mind. As such the “mind” would operate “unconsciously”.

    Replies
    1. Weasel-Words

      1. The point of the CRA is that a mental state is not just a computational state, any more than a snowstorm is.

      2. That's the trouble with multiplying weasel-words (especially different weasel-words for puzzling things that are really the same thing):

      Felt states are states it feels like something to be in.

      All mental states are felt states.

      All felt states are mental states.

      All conscious states are felt states.

      All felt states are conscious states.

      Hence "conscious" = "mental" = "felt." We don't need different three terms, just one.

      An unfelt state is not a mental state -- obviously not if it occurs in a vacuum cleaner or a rock, but also if it occurs in a body that is in delta sleep, in profound coma, under general anesthesia, or dead.

      Unfelt states occurring inside the brain of a live but unconscious person are "cerebral states" or "neural states," but not mental states. There can also be some cerebral states going on in the brain of a wide-awake person that are not felt.

      Once you realize that "conscious" = "mental" = "felt," then the only term to use is "felt": It will keep your intuitions clear and honest: There is no such thing as an unfelt feeling. That's why unconscious states are not mental states but just cerebral states.

      And to have a "mind" simply means to be able to feel.

      Cognizers can feel, but that doesn't mean that all their internal states must be felt states. (In fact, most are not.)

      It's common but loose talk to refer to the unfelt internal states of cognizers as "mental states." But some do it because we still have no idea which of the internal states going on when we are awake and feeling are actually the generators of what we are feeling, and which are not.

    2. What wasn’t clear for me was whether or not Searle agrees that a part of cognition could be computational. As Harnad supposes, perhaps this “executing program might be part of what an understanding ‘system’ like ourselves really does”. So if we assume the possibility of the systems reply being correct, then the understanding is contained somewhere within the system. However, it still puzzles me that there is no way of knowing whether the brain is doing the same thing as what is happening in the CRA. Does our brain not have the same “subsystem” that is processing commands with little to no understanding? How can we really be sure we are recreating the right thing given that consciousness is a postulate that cannot even be verified (yet)?

      I find it interesting to think of consciousness as an emerging phenomenon in the brain. If we think of it this way (similar to systems reply) then we do not have to pin-point or try to isolate the exact location of consciousness. We know that consciousness is not contained in single neurons but that it is more probable that different brain networks (neural networks) work together to produce this phenomenon.

      One field in AI today that is truly fascinating is deep learning. It is when machine learning meets big data that we begin to see the complexity of deep neural networks. Thanks to big data and given outputs, these neural networks can be trained to interpret the underlying regularity in the dataset (e.g. image recognition or speech recognition). These models are then able to make predictions and learn associations to classify things and interpret them. Check out http://clarifai.com/#demo as an example. If supervised learning algorithms are already able to learn classification systems, could we not then pose the idea that “unsupervised” learning algorithms (ones that need minimal supervision/no given outputs) will lead to an emergence of consciousness? It is evident from Searle’s CRA that we cannot program consciousness, so why not try to let it flourish naturally within the system? Maybe a network of different functionality and deep learning could lead us in the right direction.
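
      For what it's worth, here is a minimal sketch (Python with scikit-learn; purely illustrative, with toy data I made up) of the supervised/unsupervised distinction I'm pointing to: the supervised model needs labelled outputs, while the unsupervised one only looks for structure in the unlabelled inputs. Neither, of course, says anything about feeling.

        # Minimal illustrative sketch: supervised vs. unsupervised learning on toy 2-D data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)),    # blob of points around (0, 0)
                       rng.normal(5, 1, (50, 2))])   # blob of points around (5, 5)
        y = np.array([0] * 50 + [1] * 50)            # labels: the "given outputs"

        # Supervised: learns a mapping from inputs to the given labels.
        clf = LogisticRegression().fit(X, y)
        print(clf.predict([[0.2, -0.1], [5.3, 4.8]]))  # expected: [0 1] for well-separated blobs

        # Unsupervised: no labels given; it finds the two clusters on its own.
        clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
        print(clusters[:5], clusters[-5:])             # two groups, discovered without labels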

    3. 1. No, Searle does not think cognition is part computation. He thinks his argument shows that it's not computation at all, and that only studying the brain (T4) can lead to a causal explanation.

      2. But if cognition is partly computation, that definitely would not mean that the "System Reply" to the Chinese Room Argument was correct after all.

      3. "Emergence" is never an explanation of anything. It just means we can't explain it.

      4. "Deep Learning" might be a good candidate mechanism for learning categories and grounding symbols (the "easy problem") but it does not offer a clue of a clue about the hard problem...

  5. “And even in conscious entities unconscious mental states had better be brief!”

    I’m a little confused as to what this means, although I did hear it touched upon briefly in class. Is this claiming that mental states that are unconscious cannot stay unconscious for long, as they will have to be “performed” and therefore become conscious? I don’t subscribe to the psychoanalytic tradition but is there no place for unconscious states that never become conscious?

    Moreover, I don’t think that unconscious understanding is occurring in Searle’s CRA. It confuses me to think of the notion of unconscious understanding because how useful is this concept? It cannot really be proven or disproven or even quantified in a meaningful way. How does one test for unconscious understanding of a language, and if it is testable, how do we check for validity in that testing? Is this another unsolvable dilemma akin to the problem of other minds?

    Replies
    1. It's weasel-words again. Here is how I should have said it: “And even in feeling entities unfelt states had better be brief!”

      It feels like something to understand Chinese. Searle says he is not feeling it. Trust him, because it's true.

  6. One of the replies, called the brain simulator reply, seems inherently wrong if we accept that consciousness is “feeling”. Simply put (as we have discussed), simulating something is not the same as actually creating the same thing. The feeling of a real waterfall is just that, and a simulation of a waterfall may feel the same but as soon as you take off the glove the feeling is gone.

    Searle offers that the reason this reply is invalid is because it “won't have simulated what matters about the brain, namely its causal properties”.

    If, as Harnad suggests, for “conscious” or “mental” Searle used “the weasel-word intentional in its place”, why did Searle not simply refute the brain simulator reply based on its inability to simulate the causal properties of intentionality (or thinking)?

    Replies
    1. EDIT to second sentence:

      Simply put (as we have discussed), experiencing a simulation of a feeling is not the same as actually feeling the same thing.

    2. What is needed to pass T3 is not a simulated robot in a computer but a real robot in the real world.

      But Searle does not always grasp the implications of his own (valid) argument. The only causal property relevant here is that really being able to think means not just being able to pass T2 (or, for that matter, T3) but being able to feel what it feels like to understand what is being said. And Searle doesn't. That's the missing "causal" power. (Searle instead thinks only T4 would have that causal power.)

    3. I'm not sure why this is confusing to me, but why would Searle posit that only T4 can have the causal power missing in T3 that allows the ability to feel what it feels like to understand the language? Why does it have to be the same indistinguishable human substance that something is made out of in order to have that "feeling"? How can we know this if there has never been anything that has properly tested this? How can you say that any artificially made indistinguishable-from-human-stuff robot automatically has the power to feel what it feels like to speak? Am I missing something, or am I just stuck with a philosophical problem that's irrelevant?

  7. Why has Searle's Chinese Room Argument historically fuelled so much controversy and objection? (the robot reply, the systems reply, the brain simulator reply to name a few) It seems to me like the crux of the controversy surrounds Searle's definition (or lack thereof) of "understanding". It seems as though Searle's definition of understanding as more of an intuitive feeling is inadequate, and thus opens the door to many objections.

    Reading all these objections, I can't help but stop and think - what is so bad about accepting Searle's CRA? Why is it so controversial to accept that the mind is not the same as a computer program, since it is capable of subjective, conscious, feeling experiences, unlike the program? When I pause and think about the implications of objecting to this premise, I realize I'd much rather accept a Searlian (?) world view than any other one. To me, Searle's position is the "human", natural one. Surely we don't want to live in a world where the mind is equivalent to a computer program. Above all our other concerns, we don't want to have to start worrying about the feelings of our computers and iPhones when we shut the lights off or neglect them for hours during the day!


    Replies
    1. Searle's is a rational argument. It's not about what kind of world we'd rather live in, but about whether computationalism ( = "Strong AI") is right or wrong. If computationalism is right, then Searle should be understanding Chinese when he passes T2. He doesn't. So computationalism is wrong. (Be careful not to accept the conclusion just because of the kind of world you'd rather live in! And remember that our grandmothers agree with Searle too -- but not because they understand his argument. And if you asked them why they think thinking is not just computation, they would give you all kinds of wrong answers...)

  8. “This decisive variant did not stop some Systematists from resorting to the even more ad hoc counterargument that even inside Searle there would be a system, consisting of a different configuration of parts of Searle, and that that system would indeed be understanding… as a result of memorizing and manipulating very many meaningless symbols, Chinese-understanding would be induced either consciously in Searle, or, multiple-personality-style, in another, conscious Chinese-understanding entity inside his head of which Searle was unaware.”
    “…(They show only that the CRA is not a proof; yet it remains the only plausible prediction based on what we know.) A more interesting gambit was to concede that no conscious understanding would be going on under these conditions, but that UNconscious understanding would be, in virtue of the computations.”

    I want to clarify if I got this right. The physical system is understanding because the software contains another software that has memorized all the meaningless symbols, such that unconsciously, the software has been able to give those meaningless symbols meaning, such that Searle, the physical system, is unconsciously understanding without being consciously aware? The dynamic system, Searle, magically is able to learn? Does that mean, as Searle suggested, if one goes along with this argument, that mind is everywhere, just that it is unconscious? Like the toaster example, doesn’t that mean there’s just no mental state there? What’s the point of that reasoning then?

    Also, I'm curious about the difference between consciousness and intentionality in relation to the mind; so intentionality only explains our conscious mental state, but does not include the unconscious? The mind is a computational state that is dependent on its intentionality, but then what about unconscious mental states? Do those just bypass intentionality?

    Replies
    1. Hey Oliver, I had similar thoughts about the unconscious understanding argument.

      Searle, the physical system, is unconsciously understanding without being consciously aware? The dynamic system, Searle

      The physical/dynamic terminology here complicates things a bit, so I found it best to break things down, as Dr. Harnad puts it, “multiple-personality-style.” The way I interpret their argument is that Searle, as a system, has multiple components that are interconnected, yet still functionally distinct. Let's call this version of Searle, the entire system, Searle the System. Now we can break Searle the System down further into Searle the Body (neurons, tissues, limbs, etc.) and Searle the Mind (consciousness, feeling, “the self,” etc.). While Searle the Mind does not think he understands Chinese, does not feel like he understands Chinese, and does not understand Chinese, Searle the Body can still produce correct Chinese responses, and because of that, Searle the System does understand Chinese.

      The physical system is understanding because the software contains another software that has memorized all the meaningless symbols, such that unconsciously, the software has been able to give those meaningless symbols meaning.

      I don't think these “Systematists” are saying that the meaningless symbols now have meaning. It's just that because Searle the Body can manipulate the symbols to produce output that is TT-indistinguishable from a Chinese speaker, that Searle the System must be understanding, even if Searle the Mind is not. In other words, Chinese speakers understand the language, and they produce the same responses as Searle the System, therefore Searle the System must understand the language as well.

      Searle, magically is able to learn?

      Not magic. Understanding. That something that enables Searle the System to produce TT-passing responses is, in fact, understanding. Even if Searle the Mind is not consciously understanding.

      Does that mean, as Searle suggested, if one goes along with this argument, that mind is everywhere, just that it is unconscious? Like the toaster example, doesn’t that mean there’s just no mental state there?

      Bingo. I think so too.

      What’s the point of that reasoning then?

      ¯\_(ツ)_/¯ That's where the logic of this argument begins to break down.

    2. If you're suggesting that Searle the System does understand Chinese, then it would essentially be equivalent to passing T3 already, which is a bold statement considering it's just simulating understanding. Thus, I wouldn't consider the system to be understanding Chinese if it cannot resolve the symbol-grounding problem (which, because of the other-minds problem, I cannot know either way). Saying it does understand would also do away with the need for a hybrid symbolic/subsymbolic system to pass T3, because then we would just be granting that computation alone is enough. Therefore, using the term "understanding" is very misleading in this case, from my point of view.

    3. If you're suggesting that Searle the System does understand Chinese, then it would essentially be equivalent to passing T3 already, which is a bold statement considering it's just simulating understanding.

      I'm with you on this one; I'm not convinced Searle the System understands Chinese either. That's just what the “Systematists” are arguing. The way I see it, Searle the System is not even “simulating understanding,” he's just formally manipulating symbols. Meaning is never actually assigned to the symbols, so that can't be true understanding. As you pointed out, the biggest problem here is that:

      using the term "understanding" is very misleading in this case

      Exactly! While Searle's rebuttal to this argument is reasonable, he could have been a bit more clear with his word choice. Just like his commitment to the term “intentionality.” The overall ideas make sense, but his terminology complicates things a bit.

    4. OH & AH: I think you're both getting a bit carried away. There is no "panpsychism" (mind is everywhere in the universe, every "thing" has a mind...). Until further notice, only (some) living organisms on earth have minds, and having a mind means nothing more nor less than that they can feel.

      Searle is, among other things, one of these earthly organisms that feel.

      Now computationalists think that running the right computer program (on any hardware that will execute it) is enough to generate two things: (1) the capacity to pass T2 in Chinese and (2) the feeling of understanding Chinese. (Yes, it feels like something to understand Chinese -- or English.)

      Searle just points out that even if there were a computer program that could do (1), it would not be able to do (2), because he would not understand Chinese if he executed the same computer program. And there is no one else in there!

      The "System" reply is wrong no matter how complicated an interpretation you might want to make of the code (computations) that are being executed by the T2-passer (whether it's the computer or Searle). Searle would be doing (1) (if there were such a program) but not (2).

      Conclusion: Computationalism ("Strong AI") is wrong. Cognition ≠ Computation.

      "Stevan Says": Just computer program couldn't even do (1): In other words it couldn't even pass T2. Why? Because you need the capacity to pass T3 in order to pass T2, and computation alone obviously cannot pass T3.

  9. So let us call the pen-pal version of the Turing Test T2. To pass T2, a reverse-engineered candidate must be Turing-indistinguishable from a real pen-pal. Searle's tenet (3) for computationalism is again a bit equivocal here, for it states that TT is the decisive test, but does that mean T2? (Harnad, page 5)

    I was wondering the same thing as I was reading Searle's article. To me, it seems like he uses his Chinese Room Argument to demonstrate the shortcomings of T2:

    The only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. (Searle, page 6)

    However, he seems torn between whether T3 or T4 is the version of the test we should be using. He concedes that because a T3-robot behaves exactly the same way we do, we would attribute its behavior to the same cognition that facilitates our behavior:

    If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. (Searle, page 9)

    So a T3-robot can successfully demonstrate cognition, until we have some reason to believe it is not cognizant. And to Searle, the neurophysiological differences between us and the robot are that reason:

    Given the coherence of the animal's behavior and the assumption of the same causal stuff underlying it, we assume both that the animal must have mental states underlying its behavior, and that the mental states must be produced by mechanisms made out of the stuff that is like our stuff. We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality. (Searle, page 9)

    According to Searle, the only way a robot can be completely indistinguishable from humans for its entire existence is if it is a T4-robot. The T3-robot would only fool us until we realize it is made from circuit boards rather than neurons. So Searle would attribute cognition to T3-robots until they dropped a bolt or needed to plug themselves in to recharge, at which point he would start kicking them because they clearly don't feel anything. At least that's the way I see it. Any thoughts?

    Replies
    1. Yup, Searle thinks T4 would be the only way to generate cognition. Now go tell that to Riona and Renuka...

  10. ''And even in conscious entities unconscious mental states had better be brief! We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.''

    I would argue that 'sleepwalking' and 'speaking in tongues' are very different activities from participating in the Turing Test. The only reason people seem to 'respond' to instructions when sleepwalking is that they already have an underlying knowledge of the language, which can be activated even when in this 'unconscious' state of mind. If you instructed a sleepwalker in a language they had no understanding of, they would not be able to perform the correct action. Understanding at a conscious level is a prerequisite for understanding at a subconscious level.

    Likewise, speaking in tongues would only make coherent sense if the person understood what they were saying. I don't doubt they could produce phrases by themselves, but I do doubt that they would be able to maintain a coherent conversation and pick up on nuances of language if they had no understanding of what was being said.

    Take, for example, ambiguous sentences which can have more than one meaning, despite using an identical 'input' of words. Without an understanding of the context the sentence is being used in and inferring the meaning that the other person is trying to express, there would be no way to disambiguate between the available options of how to interpret the input. For example, ''the chicken is ready to eat''. If preceded by the phrase ''I opened the oven'' or ''it was feeding time at the farm'', the sentence takes on a very different meaning. If the computer were asked to parse the sentence in the correct way, in response to these two different conditions, I would propose it would be unable to produce the correct interpretation without knowledge and understanding of what the phrase related to in the real world. Only if the computer achieved conscious understanding, and bridged the gap between symbols and their referents, would it be able to pass tests such as these. Therefore, I would argue that it is impossible for computers to mimic human behaviour at any stage of the Turing Test hierarchy, without having understanding of the outside world.
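
    To make this concrete, here is a toy Python sketch (my own illustration, not anything from the readings): a purely symbolic "parser" can only disambiguate the sentence by consulting further symbols hand-coded by the programmer; nothing in the program ever connects "chicken" to actual chickens or ovens.

      # Toy sketch: "disambiguating" the chicken sentence by pure symbol lookup.
      # The context rules are just more ungrounded symbols supplied by the programmer;
      # the program never relates any of these strings to anything in the world.

      CONTEXT_RULES = {
          "I opened the oven": "the chicken is the thing to be eaten",
          "it was feeding time at the farm": "the chicken is the one doing the eating",
      }

      def interpret(context, sentence="the chicken is ready to eat"):
          # Returns an "interpretation" only because a rule was hand-coded for this
          # exact context string; any unanticipated context gets no answer at all.
          return CONTEXT_RULES.get(context, "no rule: cannot disambiguate")

      print(interpret("I opened the oven"))
      print(interpret("it was feeding time at the farm"))
      print(interpret("the farmer walked in"))  # unanticipated context: lookup fails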

    Replies
    1. I think you're focusing too much on the nature of sleep-walking and speaking-in-tongues: The point was that that's not what Searle is doing when he memorizes and executes the T2 program.

  11. 3. b) What’s Wrong and Right About Searle's Chinese Room Argument
    Firstly, I need some clarification on what Stevan defines as a "Granny Objection", because this seems to come up in his papers and in class discussions. I think he just means what someone would say many years ago without all the acquired knowledge that we have now. Can someone clarify?

    Second, everyone is criticizing Searle for his Chinese Room Argument without looking at the positives of him writing his paper. It has given scholars on the topic of the mind, the computer, computation and cognition a lot to talk about. This issue and conversation about computation and cognition has no concrete facts. It seems to me that it is a very philosophical topic. Therefore, how can one even know for sure who is right and who is wrong in these back and forth criticisms? There are going to be people who believe in computation and those who do not.

    Replies
    1. Hey Jordana,
      You are right, I feel like a lot of us are looking at the Searle paper and are upset by how unempirical it seems. I first read this paper for a class in philosophy, and in that context I was totally enamored by it (because it explained really well how semantics are often just as important as syntax, if not more so).
      But, I also think that the argument over computation/cognition does have some concrete facts, as all philosophical arguments do. I don't think it is so much about being right or wrong, but about being able to defend your position and not relying only on intuitions. Like, for me, intuitively I want to say computers are very far from humans because I feel there is something special about being human, but for me to argue this, I need to use logic and the few facts that might support my argument (things like the other minds problem, the Turing hierarchy etc).
      Just some thoughts as well!

    2. Hey Jordana, as far as “Granny Objections” go, you're pretty close with:

      what someone would say many years ago without all the acquired knowledge that we have now.

      But I think “all the acquired knowledge” is only in the context of using computers. So they're “Granny Objections” because they're arguments your grandmother might make, as she is still adjusting to today's technology. For example,

      (6) Lady Lovelace's Objection:
      "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform"... a machine can "never do anything really new."
      (Harnad, reading 2b, page 14)

      With her limited computing experience, it is reasonable that a Granny would firmly believe that computers only do what we tell them to do, exactly how we tell them to do it, whereas those who have been raised with computers know that they often do things we don't want them to, for reasons we can't explain.

      Dr. Harnad has a slide show on his University of Southampton userpage (http://users.ecs.soton.ac.uk/harnad/CM302/Granny/sld001.htm) that cites 11 common Granny Objections and provides refutations to each of them. It helped me understand exactly what he meant by "Granny Objections" and why those objections were incorrect. Hope it helps you as well!

    3. Good replies here! Nothing for me to add...

  12. “No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?”

    I believe I may have had a fever when thinking about this but humour these rather out there thoughts…

    Perhaps computers can understand, but only in a virtual plane; consider the term "virtual" meaningless when asking a computer to understand our reality. A computer simulation, existing within the system that is computing it, would then just exist on another plane, a virtual plane, just as our “real” world would just be a plane in another sort of simulation. Each plane is a closed system, and there could theoretically be closed systems within closed systems. Just as we could not escape our reality and understand a theoretical outer system that has created us (our minds are unable to grasp the infinities that lie beyond us in both time and space), so too could computer programs not escape their computer and understand our reality. This might mean that although human cognition might be some sort of computation, it would not be able to be recreated with another machine in our reality, due to these closed systems I described above.

    I’m not saying that I would necessarily believe any of what I just described but I think it could be an interesting discussion around how cognition arises in a system and another way to think about a potential divide between humans and machines. I believe this sort of theoretical evolved or higher order computation is described briefly in Searle’s article but never fully discussed. This was my sort of interpretation of how we may still be doing computation on a primitive level and there exists some much more advanced method that could have theoretically created our reality. Again, I don’t necessarily believe in this, but it makes for a fun time to think about.

    Replies
    1. Yep, those thoughts about the "virtual plane" do sound a bit feverish.

      I suggest we stick to the logic of Searle's argument and its critics. There's enough sci-fi in it with the premise (which I don't personally believe), namely, that a computer program alone could pass T2. ("Stevan Says" only a grounded T3 robot could.)

  13. In this article, I find it interesting how artificial intelligence is discussed. I find particularly intriguing what is mentioned about understanding. The author claims that having a robot respond appropriately to certain stimuli and to certain questions is not enough to claim that the robot has understanding, because the robot is simply running pre-determined scripts. The author also mentions that we attribute understanding to animals and other human beings because we assume that they function very similarly to us. If we imagine that robots that can pass the T4 test have been invented, why can we not assume, in the same fashion as we do for other humans and animals, that that robot has understanding? Why can we not overcome the other-minds problem with robots and other human-made machines?

    Replies
    1. A computer passing T2 is not a robot. A system passing T3 or T4 cannot be just a computer.

  14. “Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.” (11)

    The preceding part of this quote gave this as the reason we can say no to the question of whether instantiating “the right program” is a sufficient condition for understanding. I agree with Harnad, in that I think that maybe symbol manipulation is a necessary but not sufficient property for understanding. I am not sure I can think of any understanding that occurs without some sort of symbol manipulation, but I may be missing something.

    It confuses me a bit, however, as the quote goes on to say that “Such intentionality as computers appear to have is solely in the minds of those who program them… (11)”
    This might be nitpicky, but in the Chinese Room argument, as Harnad goes on to say, we are supposed to be convinced that a system that appears to have intentionality because of a human can be programmed in a way that shows it has no additional intentionality. Thus adding an intentional being to a system is not sufficient for that system to have understanding or intentionality. If my understanding is correct and this is indeed true, how is any understanding of anything possible?

    Replies
    1. First suggestion: Replace the weasel-word "having intentionality" with "felt" -- which is also synonymous with "conscious," "mental," "subjective" etc. etc.

      Searle feels. No doubt about that. And he understands English. No doubt about that either. And, yes, it feels like something to understand English. So the question is, if Searle executes the Chinese T2 program in place of the computer executing it, will he understand Chinese?

      Answer: No.

      How does he know? Because it feels like something to understand Chinese, and he can already say with confidence (does anyone doubt him?) that memorizing and executing a bunch of rules for manipulating meaningless squiggles and squoggles is not going to generate an understanding of Chinese (even if -- as I doubt ["Stevan Says"] -- it were able to generate the capacity to pass T2 in Chinese).
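
      To make "manipulating meaningless squiggles and squoggles" a bit more concrete, here is a minimal, purely illustrative sketch in Python (the rule table is invented, and no real T2-passing rulebook could be anywhere near this simple): replies are produced by matching the shapes of incoming symbols, and nothing in the system ever connects a symbol to what it means.

      RULEBOOK = {
          "你好吗": "我很好，谢谢",   # the executor never knows these are greetings
          "今天天气好": "是的，很好",
      }

      def chinese_room(incoming: str) -> str:
          """Return a reply by pure shape-matching; no symbol is ever interpreted."""
          return RULEBOOK.get(incoming, "请再说一遍")  # default squiggle if nothing matches

      print(chinese_room("你好吗"))  # interpretable by a Chinese reader; understood by no one inside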

      Delete
  15. I had a question about “Searle’s Periscope” (p. 4).

    If I understood this correctly, we can use this to show that Searle’s CRA works for T2, but he cannot show that it applies to T3? As we’ve seen, computation is implementation-independent: if a given piece of software is capable of producing certain capacities, we can implement that software in any physical device and that device will be capable of generating those capacities. In the CRA, Searle makes himself the physical device that runs software capable of passing the Turing Test at the T2 level, but he shows that he passes the test even though he doesn’t understand anything. Since T2 doesn’t include sensorimotor capacities, Searle doesn’t run into the other-minds problem and can conclude that the system is generating all of its capacities even though it doesn’t understand Chinese. However, since T3 includes sensorimotor capacities, Searle cannot “become the other mind”. So in essence, Searle showed that the computational part of passing T2 is implementation-independent, but because of the other-minds problem, we cannot say that about the entire system (namely, T3). Computation itself is implementation-independent, but things become fuzzy once we incorporate sensorimotor capacities.
    But without “being the other mind” there is no way of knowing whether mental states are computation rather than just physical states? For all we know, they could be a function of both sensorimotor capacities and computation. So Searle is wrong in concluding that cognition is not computation at all -- cognition could be partly computation. We can conclude with certainty that T2 doesn’t understand, but because of the other-minds problem we cannot conclude the same about T3. So should we ultimately try to understand the workings of T3 as the ultimate TT?

    ReplyDelete
  16. As said above, from both Searle’s CRA and Harnad’s critique of the CRA, it has become increasingly apparent that computationalism alone does not hold the answer to whether machines can think, or rather whether they understand, and therefore T2 is not a good enough test for a cognizing being. Searle points to “very special machines”, and to the only machine with causal powers that is known, namely the brain, as holding the key, saying that meaning (he used ‘intentionality’) is “causally dependent on the specific biochemistry of its origins.” Harnad, on the other hand, supports a more hybrid view, in which a combination of dynamics and computation is enough to answer this (i.e. T3 looks promising). So we have come to the conclusion that other features besides computation are necessary to reverse-engineer our cognizing robot. Following from Searle’s argument, these other features could include the feeling of understanding (or, equivalently, consciousness). That being the case, could it mean that a specific set of sub-properties ascribed to the dynamics of the brain is needed to figure this out (by this I mean at least a certain extent of T4), and not just any hardware/dynamic system? I know that computation is hardware/implementation-independent; however, does this mean that the dynamics of the T3 system would have to be hardware-independent as well?

    ReplyDelete
    Replies
    1. I have the same issues with Searle’s argument.

      Why would meaning be “causally dependent on the specific biochemistry of its origins”, like lactation, photosynthesis, or any other biological phenomenon? I think it is entirely conceivable that a T3 system could work in similar ways to how a brain works without being implemented in exactly the same way (for example, using some physical mechanism other than biochemistry to reproduce the firing of neurons). I find it strange that Searle’s argument denies that it could be possible to reverse-engineer something having the same properties as a brain, without it being a brain, if we understood the dynamics supporting all the processes executed by the brain -- in the same way that it is conceivable that we could reverse-engineer photosynthesis by replicating in some other way the use that plants make of chlorophyll.

      Delete
    2. Feeling is not a feature of the brain with which we causally explain something: it is a feature of the brain that itself needs a causal explanation (the "hard problem"). If T4 properties are needed, it is only T3 that can decide which ones. (No, dynamics is not hardware-independent: it is hardware. But not hardware for running computations!)

      Delete
    3. Responding to Hernán's post: "I find it strange that Searle’s argument denies that it could be possible to reverse engineer something having the same properties as a brain without it being a brain if we were to understand the dynamics supporting all the processes executed by the brain"

      I think when Searle made this argument he was talking purely about the formal properties of the brain. He says, in response to the Brain Simulator Reply, "The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the neurons firing at the synapses it won't have simulated what matters about the brain, namely its causal properties." He goes on to say later, in the Many Mansions reply, "I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese," but he does add, "I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements."

      So it seems his main argument is that when you replicate the connections of neurons in the brain in a computer program, you have not copied what gives "understanding" to the brain. That is to say, reducing the brain to the connections of neurons is an oversimplification of what the brain does to give "understanding"; there is something else the brain does that gives rise to it. That is why he agrees that "perhaps other physical and chemical processes could produce exactly these effects [intentionality]".

      Delete
    4. It's not about neural connections. It's about computations: symbol manipulation.

      Delete
  17. Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function: there are still plenty of degrees of freedom in both hybrid and non-computational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either.

    I have a clarification question about this passage with respect to implementation-independence, and if/how it would apply beyond the T2 level. I know that ‘implementation-independence’ itself only really applies to pure computation, but is there a comparable concept that could be applied at the level of a dynamical system? Searle appears to conclude strongly that T4 is the only legitimate way to reverse-engineer cognition, but is equivalence at the dynamical-system level, at least in part, a reason you (Dr. Harnad) conclude that Searle over-reached?

    I’ll try and break down my thoughts a little more clearly. So we accept that some features of T3 are necessary to even pass T2. If we then accept (Stevan says) that T3 is the appropriate level of TT for a reverse-engineered system to pass, then some features of T4 would seem to be necessary for passing T3. What I’m asking is, could we allow the outputs of these features to be ‘implementation-independent’ (I feel like “equivalent” is a better word here)? Take for example the mammalian vs. invertebrate visual systems – both transduce light, but through very different physiological/biochemical means. Are these types of examples the “degrees of freedom” you are referring to in the passage above?

    ReplyDelete
    Replies
    1. It is trying to pass T3 (verbal + robotic capacity) that will reveal what features of T4 may be needed -- to pass T3!

      Sometimes the same dynamical functions can be performed by different dynamical systems, but this is not the same as the implementation-independence of computation.

      Delete
  18. Can we ever experience another entity's mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it.

    I have some trouble with the idea of “getting into the same computational state” as another entity. It seems to generate a problem at least for Turing Machines (TM). As I understand it, the individual mental states of TMs are defined in relation to the total functional architecture of the TM, as defined by the machine table specific to the program it implements and the programming of the 'head'. So in order to share a single functional or computational state, two given TM implementations must have the exact same total program. Does this not cause problems for the idea that individual mental states are 'multiply realizable' across different entities which have a different total functional architecture? Or must we conclude, for instance, that the pain that I feel and that an octopus feels are not the same mental state since we are not functionally equivalent in the grand scheme of things, and hence do not implement the same TM or share any of the same computational states?

    ReplyDelete
    Replies
    1. Two TMs are equivalent if they are executing the same algorithm (programme). That's called "strong equivalence." "Weak equivalence" is if they give the same output for the same input. The T2 only calls for weak equivalence. T3 is no longer just computation, so algorithm equivalence is not relevant any more. T4 is a "kind of" strong equivalence -- not computational, but dynamic. (Until further notice, none of this covers feelings: just doings, the "easy" problem.)
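
      A toy way to see the weak/strong distinction (the two sorting routines below are illustrative stand-ins, not anything from the reading): both compute exactly the same input/output mapping, so they are weakly equivalent, but they execute different algorithms, so they are not strongly equivalent -- and T2 only ever tests the input/output side.

      def sort_by_insertion(xs):
          """Insertion sort: place each element into an already-sorted prefix."""
          out = []
          for x in xs:
              i = 0
              while i < len(out) and out[i] < x:
                  i += 1
              out.insert(i, x)
          return out

      def sort_by_merging(xs):
          """Merge sort: split recursively, then merge the sorted halves."""
          if len(xs) <= 1:
              return list(xs)
          mid = len(xs) // 2
          left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
          merged = []
          while left and right:
              merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
          return merged + left + right

      data = [3, 1, 2]
      assert sort_by_insertion(data) == sort_by_merging(data)  # same I/O: weak equivalence
      # Different internal steps: not strong equivalence.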

      Delete
  19. I am somewhat confused by the idea that we could have a computational and non-computational division of labour within our mental processes (“Searle is wrong that an executing program cannot be PART of an understanding system”; “the CRA certainly does not show that it cannot be computational at all”). While I have no argument against this, I have difficulty conceptualizing such a thing. I see a few options:

    1. We have a central feeling module that packages off tasks for the computational module to complete. This has been soundly refuted by Searle and by any anti-homuncular argument, so I see little point in entertaining it, although it could be implied by the division into parts in the Harnad paper.

    2. Feelings arise as an epiphenomenon of processes in the brain (the speed-and-complexity argument about consciousness). This offers support for the idea that consciousness (which I am here equating with feeling) must be instantiated in our biological systems; perhaps our material makeup is the only one capable of supporting processes so complex that the byproduct cannot be interpreted as mechanical interactions (i.e., we can only feel the end result, not the components that make it up, because our brains are complex, but not complex enough to interpret themselves).

    3. We are all collectively deluded into interacting as if we had something like feelings, which does require some sort of module like that mentioned in 1) or 2), but removes the necessity for a complex module (i.e., we just need something that provides “what it feels like”, but nothing more, because what it feels like to be in pain etc. is entirely socially provided). I personally like this one, because I believe that we do not have things like selves or identity without social interaction (in fact, that these things are provided entirely by others), but I do recognize its lack of productivity.

    ReplyDelete
    Replies
    1. 1. You can't have a "central feeling module" unless you can first explain feeling. Better stick to the easy problem of just explaining doing. (This has nothing to do with homuncularity, which is just a way of deferring explaining anything at all.)

      2. Calling feeling an "epiphenomenon" (just like calling it "emergent") explains nothing. It's soothing to say it, but it does not answer the question how? or why? -- I suggest sticking to the easy problem...

      3. Feelings are not delusions: delusions are feelings. (If you think feelings are entirely "socially provided," tell that to an annelid worm, who has "ouch" but no social life -- or to a newborn baby, crying -- or to a hermit in the woods who falls and stubs his toe...!)

      Delete
  20. “Can we ever experience another entity's mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it.”

    I understand why this disproves computationalism, but where I start to get confused is when I try to think of proving cognition and understanding with any other conceivable theory without the other-minds problem undermining that as well. No matter what, we always run into the same problem: there is no way to know whether whatever system we have created also understands. We assume other humans do, by virtue of the fact that we are of the same species, so what I do, others must do too. But even this seems like it could be a flawed argument, especially if it is the argument that will be made for why something that can pass T4 must also likely be cognizing. If there can be genetic variance that leads to some people being exceptional in some capacities and others lacking those capacities, or even genetic mutations that result in missing a human characteristic altogether, then who is to say that simply because we are of the same species, we all have minds (or cognitive understanding)? We still only base this on the behavioural output that we can see. In the same way that a person can stand up straighter to appear taller, humans can appear to be understanding and cognizing, even if they are not, through learned behaviours (although most would say learning is part of cognition, it can still arguably be done without it, as we have seen with machine replication). The point I am trying to make is that it is a flawed proposal that we must simply assume every human has a mind and then extend that argument to say that something indistinguishable from a human must therefore also have a mind. While obviously there can be no proof of another mind, there should be a better line of reasoning than simply assuming its existence because of similarities in species.

    ReplyDelete
    Replies
    1. Computationalism ("Strong AI") fails because it over-reaches: It claims that cognition is just computation. Searle used that to show it's false.

      Of course, because of the other-minds problem, we can never know whether T3 is enough either (but that's why we have Renuka and Riona, to remind us that T3 is the only way we "solve" the other-minds problem with one another). Even T4 can't do it.

      The capacity to learn is essential to all three levels: T2, T3, T4.

      Delete
  21. "Again, it is unfortunate that in the original formulation of the CRA Searle described implementing the T2-passing program in a room with the help of symbols and symbol-manipulation rules written all over the walls, for that opened the door to the System Reply. He did offer a pre-emptive rebuttal, in which he suggested to the Systematists that if they were really ready to believe that whereas he alone would not be understanding under those conditions, the "room" as a whole, consisting of him and the symbol-strewn walls, WOULD be understanding, then they should just assume that he had memorized all the symbols on the walls; then Searle himself would be all there was to the system."

    If Searle memorized the contents of the room, making him the entire system, and the Systems Reply were true, then we would be saying that there is a part of the system (Searle) that learns, but we wouldn’t be specifying anything. It would be like the homunculus problem. We could use Searle’s brain in this metaphor as a metaphor for another Chinese Room, and we still would not know where the learning was coming from. It is like saying that there is a brain inside the brain. We cannot make this argument for cognition, so we should not be able to make it for computation.

    ReplyDelete
    Replies
    1. No reason to believe that memorizing and executing a bunch of squiggle-squoggling would generate another mind in Searle... (I don't understand your further "metaphors." The argument is a pretty simple, straightforward one, not needing a lot of complex premises, conjectures or interpretations, even though it all sounds like sci-fi.)

      Delete
  22. “Moreover, it is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate. The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false.”

    I think this article is helping me to better understand Searle’s argument, but I don’t understand how the CRA could not be reframed for T3. Couldn’t you have a similar setup, but with Searle controlling a robot based on Chinese commands? Searle would be inside the robot and be given commands in Chinese, but could use the rulebook to know which levers to pull for the robot to perform an action. Of course, he would have to be unable to see the results of the levers being pulled, so he could make no connection between the levers and the robotic action. If this were the case, you could argue that Searle is a machine being given input and producing output, without understanding why he is doing it, or what he is doing.
    The two biggest problems with this example would be that the situation is more difficult to imagine in real life than a man in a room is, and it would probably cause more confusion than Searle’s CRA did.

    ReplyDelete
    Replies
    1. With T2, passed by computation alone, Searle is himself being the whole system. If he controls a robot, he's not being the whole system, so all bets are off. The only dynamics he could be would be his own.

      Delete
  23. The idea of Searle’s Periscope was very interesting: assuming that mental states are computational by nature, a person’s mental state at a given time corresponds to a certain “computational state.” This implies that we could experience the same mental state as that person if we were in the same computational state. Harnad offers two defenses against this view: to separate the mental state from the computational state, or to chalk computational states up to a background, subconscious function, neither of which benefits computationalism.

    I found this interesting because it offered an alternative way to look at the other-minds problem. If computationalism were wholly true, the other-minds problem would be solved. Although it isn’t wholly true, perhaps it’s not all wrong. According to Searle’s Periscope, if a person and I were in the same computational state, we would experience the same mental state. In real life this is rather impossible, because every brain is unique; however, if we looked at basic functions of the brain, as Dr. Penfield did, this view becomes less absurd. Before operations, Dr. Penfield stimulated specific areas of the patient’s brain with a small electrical current and observed the response. He found that stimulation of a certain area of the brain (the somatosensory cortex) elicited predictable responses. For example, if he applied the electrical probes to a certain region in the somatosensory cortex, the patient would report a feeling of pressure on his right index finger. Continuing with this example, if a surgeon stimulated the region corresponding to the same feeling of pressure on the same spot on the same finger in two people, each of them would report having felt that exact feeling.

    Perhaps this provides some evidence for computationalism. While Searle’s Periscope does put computationalists in an awkward position, it also offers an alternative way to think about certain aspects of computationalism.

    ReplyDelete
    Replies
    1. You are not talking about computation any more, but about T4.

      Delete
  24. Firstly, I realized that I too had viewed the Turing Test as the all-powerful decider of the computation vs. cognition debate. But since the TT only demands functional equivalence, it does not say anything about whether a machine that passes the test possesses a mind or not.

    "We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding."

    I have to wonder whether this would really be possible. If someone were to exchange letters for a lifetime with a pen-pal, would it really become like sleep-walking: non-conscious and without understanding? I would think you would eventually actually learn the language to the point where you would understand it. Isn't that how people sometimes learn languages (other than by conversing or being directly taught)?

    Towards the end of the article, it is mentioned that only T2 (and only a T2 candidate which is purely computational) is vulnerable to the CRA, but not T3 or T4. The way to go is proposed as a hybrid computational/non-computational system.

    "The CRA would not work against a non-computational T2-passing system; nor would it work against a hybrid, computational/noncomputational one (REFS), for the simple reason that in neither case could Searle BE the entire system; Searle's Periscope would fail. Not that Systematists should take heart from this, for if cognition is hybrid, computationalism is still false."

    I was a little confused about what was meant by saying that Searle could not be the entire system in the case of a hybrid model. Does this mean that Searle would not be entirely computation but would also be something else (cognition)? Or that there would have to be something external, other than Searle himself, contributing to the system?

    ReplyDelete
    Replies
    1. 1. The TT requires input/output equivalence (indistinguishable doing-capacity), which would be "weak equivalence" (if computationalism were true).

      2. I doubt that someone executing a Chinese T2-passing program for a lifetime would ever learn Chinese from it. (Remember, there's no translation, just squiggles and squoggles.) But even if they could, that would be irrelevant. The computer, when it implements the T2 program, doesn't "learn" Chinese from doing it over and over. It is already speaking and understanding Chinese, simply by executing the right symbol-manipulations (if computationalism is true).

      3. Because computation is implementation-independent, any hardware implementing the programme is doing the computation -- it is "being" the whole system (if the system is purely computational). When Searle does it, he too becomes the whole system. But how could Searle become Riona? She is a robot. She is not the hardware for a computation. She is not implementation-independent.

      Delete
  25. First off, I’d like to say that I’m a bit confused by this paper, especially about what exactly Harnad is countering with regard to what Searle said. What I think I have understood is that Searle argues that cognition isn’t purely computation. It seems like Harnad agrees with this but further argues that any computation involved in cognition must necessarily be part of a greater system incorporating dynamics and symbol grounding. But what exactly is the greater system and all its parts? And what does the dynamic system consist of? (I know we’ll be reading about symbol grounding soon, so I won’t ask about it here.) Basically I’m confused about dynamic vs. computational, because wouldn’t the hardware be considered dynamic? Or, vice versa, the dynamic as an instance of the computational?

    ReplyDelete
    Replies
    1. A computer is just hardware for doing computation (symbol-manipulation). A robot -- like Renuka or Riona -- is not. R & R can move, they have sensors, and they probably have a lot of other dynamical components. Dynamical just means physical. And dynamics is not hardware-independent. It is hardware -- but not hardware for doing computation. It is hardware the same way a heart, a waterfall, a vacuum-cleaner or a neuron is hardware.

      Delete
    2. Basically what Searle concluded from the Chinese Room Argument was that cognition cannot be computation AT ALL. He makes the claim instead that cognition (or I suppose consciousness) is a strictly biological phenomenon, which would mean it depends solely on the brain and thus solely as a dynamic system. Computationalism holds that mental states are computational states and that these computational states are independent of their hardware, so even though hardware is dynamic, it isn't relevant AT ALL to cognition from this viewpoint. He argues that as a result of believing cognition = computation we would have to accept a sort of dualism which Professor Harnad calls Searle's Periscope:

      "Can we ever experience another entity's mental states directly? Not unless we have a way of actually BECOMING that other entity, and that appears to be impossible -- with one very special exception, namely, that soft underbelly of computationalism: For although we can never become any other physical entity than ourselves, if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it. This is Searle's Periscope..."

      What Professor Harnad is saying in this paper is that Searle's conclusion from the Chinese Room Argument is too strong and that he believes the best approach is through a hybrid model including dynamics AND computation, rather than Searle's dynamics ONLY attitude:

      "Searle was also over-reaching in concluding that the CRA redirects our line of inquiry from computation to brain function: There are still plenty of degrees of freedom in both hybrid and noncomputational approaches to reverse-engineering cognition without constraining us to reverse-engineering the brain (T4). So cognitive neuroscience cannot take heart from the CRA either. It is only one very narrow approach that has been discredited: pure computationalism."

      Hope this helps :)

      Delete
    3. Searle's not being a dualist. He just shows that the software/hardware distinction is not a solution to the mind/body problem (otherwise known as the "hard problem").

      Delete
  26. "Unconscious states in nonconscious entities (like toasters) are no kind of mental state at all. And even in conscious entities unconscious mental states had better be brief! We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking. But finding oneself able to exchange inscrutable letters for a lifetime with a pen-pal in this way would be rather more like sleep-walking, or speaking in tongues (even the neurological syndrome of "automatic writing" is nothing like this; Luria 1972). It's definitely not what we mean by "understanding a language," which surely means conscious understanding."

    I found this to pair nicely with the example of the D3 and D4 ducks. After some thought, I still wrestle with (and have yet to decide) whether or not a D4 duck would actually be a duck. If D4 were truly indistinguishable from a real duck, then who is to say that it is not a real duck? Using the other example in this paper -- "Can we ever experience another entity's mental states directly? Not unless we have a way of actually becoming that other entity" -- it would only make sense to say that if we see D4 beside a real duck and are unable to tell the two apart, then D4 would, in this case, be a real duck. It's rather confusing, as it's tempting to argue that D4 isn't a real duck because "it just is not", or because it's engineered, but then you could look at humans as being engineered as well and say that a robot is the same as a real human so long as it is indistinguishable from a human. To disagree with that argument would be to say that something isn't a "real" version of itself if it is engineered by a conscious being, and I find that reasoning circular.

    The idea of not being able to experience another entity's mental states unless we actually are that entity is something else I find fascinating, as it relates heavily both to the idea of dualism and to the idea of metaphysical solipsism. The duck thing comes into play here, as who is to say that D4 isn't the exact same entity as a "real" duck, seeing as one could never truly be both?

    ReplyDelete
    Replies
    1. This comment was written by Hillary Muller

      Delete
    2. The assumption is that D4 ducks behave exactly like ducks, and on the inside they are exactly like ducks, except they are synthetic. (The difference is not because they are man-made, but because of the physical material.) (Of course that means that if you look really closely, you'll find physical differences between synthetic neurons and real neurons, so the analogy between D4 and T4 can only go so far.) But with T2, T3, and T4 -- and probably with D2, D3 and D4 -- there is something else that could be missing, and that is feeling. But, because of the other-minds problem, there's no way we can tell.

      Delete
  27. There are a few points that I would like to clarify further with regard to the article. I am still unclear about the concept of “Granny Objections”. Although it is mentioned in several instances throughout the article, it is never properly defined, so I am having a hard time understanding it clearly.

    Secondly, regarding the point that “there is no stronger empirical test for the presence of mental states than Turing-Indistinguishability; hence the Turing Test is the decisive test for a computationalist theory of mental states”, I agree with the author's view that "This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking”. The TT is currently the most promising empirical explanation of our cognitive mental states. However, just because we do not know of a better explanation out there, does that mean we have to settle on this as THE explanation?

    Thirdly, I am also wrestling with the example of the D3 and D4 ducks. D4 ducks were described as indistinguishable from real ducks both structurally and functionally, and D3 ducks only functionally. However, many of a D3 duck's functions depend on structure. For example, something that needs to swim needs certain structural features to allow it to, such as webbed feet. Therefore, I am a bit unclear about the concept of D3: how can the duck be indistinguishable only functionally? Are we saying that it IS distinguishable structurally? But then wouldn't its functions be based on certain structural features, homogeneous among all ducks, that allow for those functions?

    ReplyDelete
    Replies
    1. 1. Granny objections are just the naive objections someone would have to the idea that a computer or a robot could have a mind -- not because of something they know about computers and robots and what they can or can't do or be, but because it's just obvious to them that people aren't computers or robots.

      2. The TT is not an explanation. It is a test. The explanation would be the mechanism that passed the test by generating the capacity. Yes, there is underdetermination: maybe another mechanism could do it all too. And maybe Renuka and Riona are both the wrong mechanisms. But we will never be able to know whether they are the wrong mechanism, nor, if they are wrong, why they are wrong. TT-indistinguishability is all we have to go by. And that applies to T4 too.

      3. You are right that some of the things a D3 duck has to be able to do in order to be able to do everything a real duck can do will constrain its structure. Structure is dynamic, not computational. Like being able to move and sense, it is what makes a robot different from a computer, computing. There may even be some T4 features of our brain that are essential in order to pass T3; but there too, it's T3 that is the test of whether or not a T4 property is needed -- to pass T3.

      Delete
  28. I’m not quite sure if this is more of a comment than a question, but here are a few thoughts that came to mind during class.

    When we were speaking about the sentence “the cat is on the mat”, one way to potentially make a “machine” run into problems would be to ask a question that could only be answered properly by someone who knows the meaning of a given word.
    This made me think of analytic inferences.

    Analytic inferences are said to be a property of our cognitive architecture. Roughly, knowing that something has one property automatically entails knowing other properties of that object, without inspecting anything in the world.
    For example; knowing that something is a circle, automatically entails that the thing is a shape – and we know this without inspecting the world.
    These implications are said to come from the meaning itself, from the representation of the concept, and they account for our intuitions of truth and falsity. These may be silly questions, but this had me thinking: could machines ever grasp our capacity for analytic entailments? Could machines ever have intuitions of truth and falsity? We could program a syntax saying: if sentence A is true and sentence B is false, yield an intuition of False -- but would the system understand anything? I don’t think it would (just as in Searle’s CRA). It could be that cognition is partly computation, but computation cannot account for some facts of our cognitive architecture, like semantics.
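
    To make that worry concrete, here is a minimal, purely illustrative Python sketch (the mini-logic below is invented for the example): a formal evaluator can hand back "intuitions" of True and False by rule alone, while having no idea what A or B are about.

    def evaluate(expr, assignment):
        """Evaluate a tiny propositional form like ("and", "A", ("not", "B"))."""
        if isinstance(expr, str):                      # an atomic symbol
            return assignment[expr]
        op, *args = expr
        if op == "not":
            return not evaluate(args[0], assignment)
        if op == "and":
            return all(evaluate(a, assignment) for a in args)
        if op == "or":
            return any(evaluate(a, assignment) for a in args)
        raise ValueError(f"unknown operator: {op}")

    # "A and not B" comes out False when A and B are both True: a formal verdict,
    # not an intuition about anything in the world.
    print(evaluate(("and", "A", ("not", "B")), {"A": True, "B": True}))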

    This also got me thinking about pragmatics – where the intended message cannot be obtained by simply resorting to what has been explicitly said.
    Let’s say we’re at a party and I say, “Oh, it’s getting late” as a hint meaning “it’s time to leave” -- would a machine have the capacity to understand this pragmatic inference? It’s not enough to understand only the syntactic string; we also need to understand the meaning of the symbols. Of course there are properties of the sentence that lead one to go beyond the literal meaning and come to understand the intended message, but I think pragmatics is not a purely linguistic phenomenon. I think it’s more about thinking, and I wonder if machines could ever grasp pragmatics.

    ReplyDelete
    Replies
    1. ANALYSIS & AUTISM

      Meaning = (1) Reference + (2) Sense + (3) Feeling ("Intention")

      (1) Reference = the thing ("X") that a word (or phrase or sentence) refers to.

      (2) Sense = (sort of) the definition or description of the thing (X) that the word refers to. (It can also be considered "the means by which you recognize X, or that the word refers to X.")

      (3) Feeling = what it feels like to have the referent X of a word in mind; what it feels like to understand the word as referring to X, or to say the word and mean X.

      (4) The "use" of a word is how it is used in sentences that make sense (i.e., people can understand them).

      Use is not independent of (1) - (3) except when the word is used only syntactically (i.e., manipulated on the basis of its shape, not its meaning), in which case meaning (reference, sense, feeling) have nothing to do with it: It's just squiggles and squoggles, as in formal computation (and Searle's Chinese Room Argument applies).

      Let's take a simple example: "apple." The referent of "apple" is those round red things. The sense of "apple" (to a first approximation) is "round, red fruit" and whatever it takes to recognize apples when you see them or when someone talks about them. The use of "apple" is to refer to apples and to describe and talk about apples in a way that makes sense. And it feels like something to know what "apple" refers to and means.

      An "analytic inference" would be to infer from the fact that this is an apple, that it is round (since being round is part of the meaning of apple).

      A computationalist would only have words (symbols). So "apple" = "round, red fruit." But there would be no reference, sense or feeling. Just squiggle = squaggle squoggle squoogle. So computation could "analytically infer" that if "apple" then "red," but all it is really doing is "if squiggle then squoggle."
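
      A minimal sketch of that point (the definition table is invented for illustration): an ungrounded symbol system can "analytically infer" that apples are round only by checking whether one symbol appears in the stored definition of another -- string membership, nothing more.

      DEFINITIONS = {
          "apple": {"round", "red", "fruit"},
          "circle": {"shape", "round"},
      }

      def analytically_entails(word: str, feature: str) -> bool:
          """True if `feature` appears in the stored definition of `word`."""
          return feature in DEFINITIONS.get(word, set())

      print(analytically_entails("apple", "round"))   # True -- but no referent, no sense,
      print(analytically_entails("apple", "sweet"))   # False -- and nothing felt, either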

      A grounded robot like Renuka or Riona, however, could do all the above: point to real apples and perhaps (approximately) define them. And, if they feel, R & R also know what it feels like to see or think of or say and mean "apple." If you ask them whether apples are round, they will not do it "analytically," by consulting their formal definition, but by thinking of an apple and "reading it off" the 'image' (if it's an image) that it happens to be round. (No, this is not homuncular -- once we have designed and tested -- and hence explained -- a causal mechanism that can actually do that, as R & R can!)

      I'd say that once you had designed a robot that could understand the literal meaning of "Would you mind getting off my foot?" (and all other literal meanings), as R & R can, you had already solved the "easy problem" of explaining how and why thinkers can do what they can do. If R&R could not pick up on the pragmatic implication (by replying "No, I wouldn't mind" but not getting off your foot, because you hadn't asked them to get off, you had just asked if they would mind getting off) that would not mean they had not passed T3; it would just mean they had a touch of Asperger's syndrome...

      And of course because of the other-minds problem we cannot know whether they have (3) feeling, along with (2) sense and (1) reference. Only R & R can know that.

      Delete
  29. I think this paper was really helpful to me for sorting out some of the confusion in my understanding of the hybrid dynamic/computational view of cognition that we have been discussing in class. Though I have for the most part been rooting for T4 as necessary, I am beginning to warm up to the idea of T3 being all we need. Computationalism would allow us literally to put ourselves into one another's mental states, which seems quite absurd ("if there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it"), and Searle's solely biology-based view seems a bit frightening, because to me it seems to create a huge barrier between all conscious beings: since we are not all neurologically identical, we could be experiencing wildly different realities. At first that didn't bother me much, since we all have different cognitive capacities, abilities, personalities, etc., and these things are sometimes correlated with differences in the details of our neural structures, but I found the following quote very helpful:

    "Normally, if someone claims that an entity -- any entity -- is in a mental state (has a mind), there is no way I can confirm or disconfirm it. This is the "other minds" problem. We "solve" it with one another and with animal species that are sufficiently like us through what has come to be called "mind-reading" (Heyes 1998) in the literature since it was first introduced in BBS two years before Searle's article (Premack & Woodruff 1978). But of course mind-reading is not really telepathy at all, but Turing-Testing -- biologically prepared inferences and empathy based on similarities to our own appearance, performance, and experiences. "

    Though the 'other minds problem' is certainly frustrating, in my day-to-day life I don't usually dwell on such thoughts, and I assume that the person I am talking to is in fact conscious and cognitively capable of doing the same things as me. Our ability to 'empathize' with others (or at least to recognize that someone cognizes just as you do, through inadvertent Turing-Testing), despite various differences in age, habits, etc. that may affect neural structure, seems to point to there being some computational nature to cognition.

    ReplyDelete
    Replies
    1. Yep, the other-minds "barrier" is there for any TT, whether T2, T3 or T4. But it's nothing to lose sleep about, since it's almost 100% sure that other people have minds (i.e., feel) -- just about as sure as that apples will keep falling down rather than up (that's not sure either, as Descartes points out, but we don't worry about it!) And you can already tell from the way other people look and act that their feelings are roughly (though not exactly) the same kinds of feelings as your own.

      Besides, Turing is there to remind you that even if it were not so -- even if, despite all behavior and appearances, all other people really were just unfeeling Zombies, and you were the only one who wasn't, it would make no difference at all to you. They still behave exactly as if they did feel, just as you do.

      The only condition under which it's even worth a second thought (apart from Philosophy 101) is if you are trying to do cognitive science and design a system that can pass T3 -- i.e., with Renuka and Riona. Yet we realize in class that even there it does not make a difference -- and I doubt it would make a difference if R & R looked more metallic either, as long as they were T3-indistinguishable from the rest of us.

      With T2, though, Searle's periscope could show that there was no thinking going on, just squiggling. That would create a conflict, if you had a lifelong pen-pal who you had thought was a person. Yet even there, if it were really T2, you could go on emailing it, and discuss the matter, and you would no doubt find it interesting, and the friendship would survive.

      Except that it's impossible for a computer program to pass T2, because of the symbol grounding problem. So we'd be back to R & R even if they were just our pen-pals.

      Delete
  30. Searle came up a lot in class again today, and the conversation reminded me of an issue/question I had forgotten to resolve during Searle's week in class.

    When I first read the CRA, I felt so intuitively, completely convinced by it that, in response, I searched really hard online for anything that seemed like a valid objection. I eventually found an article by Hector Levesque from U of T ( http://ijcai.org/papers09/Papers/IJCAI09-241.pdf ) that raised some interesting points.

    Levesque uses some simple math to show that there is no fathomable/physically possible book that would ever enable Searle to manipulate Chinese at a level indistinguishable from a native speaker, without the rules being written in such a way that would necessarily lead to understanding. Levesque concludes that this impossibility makes the thought experiment ‘vacuous'.

    In reading the article (and maybe I'm missing something that makes his calculations wrong), I initially found the line of thinking and the Summation Room example quite compelling. It feels clear by the end of the paper that there really couldn't exist a physical book that would enable Searle to pass T2 without any understanding. But the question I couldn't shake (and that I'm asking for help with here) is: does that even matter? I'll try to explain why I am leaning towards no.

    The entire thought experiment is founded on first accepting all the tenets of computationalism (the decisiveness of the TT, the possibility that any computer program could ever pass T2, implementation-independence etc.). Only under these conditions can the CRA ever apply and make sense. So, given that we are accepting that there is a conceivable computer program that could pass T2 in Chinese, doesn't that mean we must also accept there is a conceivable 'book', which Searle could use, that is simply a written-out, identical version of the program? To phrase it in the opposite way, if Levesque argues there simply aren’t enough atoms in the universe to hold the number of rules the non-understanding-inducing book would require, does it not become equally unfathomable that the computer program would be possible, as it would be of an equally unrealistic scale?
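
    This is not Levesque's actual calculation, just a back-of-the-envelope sketch of the scale worry for a literal rule-per-case book: even with a modest vocabulary and a single short exchange, the number of possible inputs already dwarfs the roughly 10^80 atoms in the observable universe. The figures below are assumptions chosen only to show the order of magnitude; a real algorithm, of course, need not be a lookup table.

    VOCABULARY = 3_000       # assumed number of distinct Chinese characters in use
    EXCHANGE_LENGTH = 20     # assumed characters per exchange (deliberately small)

    possible_exchanges = VOCABULARY ** EXCHANGE_LENGTH
    print(f"possible single exchanges: about 10^{len(str(possible_exchanges)) - 1}")
    # ~10^69 for one exchange; indexing replies on whole conversation histories
    # multiplies the exponent again, which is the intuition behind calling a
    # literal case-by-case book physically impossible.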

    Or, if it’s the paper and print book he has such trouble with, couldn’t we just go rule-for-rule on a chalkboard, with Searle memorizing each rule and erasing each one as he learns?

    Maybe this seems like a silly point to get hung up on, but the feasibility of such a set of rules does seem questionable at best to me. What I really want to question, though, regarding my comprehension of the CRA’s rule book, is less the issue of possible vs. not, and more whether or not I am wrong in feeling like it just doesn’t matter.
    If it isn’t possible, wouldn’t it imply that the T2-passing computer program is impossible too, thereby resulting in the same conclusions? I.e. Computationalism is still proven false, either way you spin it?

    ReplyDelete
    Replies
    1. Adrienne, you are completely right on every point you make. (And this is not "Stevan Says." It's pure logic.)

      Searle's is a thought experiment based on the premise -- not his premise but "Strong AI's" (i.e., Computationalism's) -- that a computer program (algorithm) could pass T2.

      If a computer can execute the algorithm, Searle can execute it. (How long it would take for Searle to memorize the algorithm is trivial and irrelevant, as the hexadecimal tic-tac-toe example shows.) An algorithm is an algorithm. And either it works or it doesn't.
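
      For a feel of why memorizing an algorithm doesn't require knowing what it is about, here is a hedged sketch in the spirit of the hexadecimal tic-tac-toe example (the encoding below is invented): the same winning-move rule can be run over opaque hex codes instead of a drawn board, so whoever executes it need not know that any game is being played.

      WINNING_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                       (0, 3, 6), (1, 4, 7), (2, 5, 8),
                       (0, 4, 8), (2, 4, 6)]

      def winning_move(board, me):
          """Return the square index that completes a line for `me`, else None."""
          for a, b, c in WINNING_LINES:
              line = [board[a], board[b], board[c]]
              if line.count(me) == 2 and line.count(" ") == 1:
                  return (a, b, c)[line.index(" ")]
          return None

      # The same algorithm, with the squares renamed as arbitrary hex symbols:
      ENCODE = ["0xA1", "0xB2", "0xC3", "0xD4", "0xE5", "0xF6", "0x17", "0x28", "0x39"]

      board = ["X", "X", " ", "O", "O", " ", " ", " ", " "]
      print(ENCODE[winning_move(board, "X")])  # "0xC3": a meaningless squiggle to the
                                               # executor, a winning move to an interpreter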

      The premise is that there is a T2 algorithm, and it works.

      Searle then shows that if the premise were indeed true, and the T2 algorithm indeed existed and worked (i.e., passed T2), it still wouldn't be understanding Chinese, because if he (Searle) executed that very same algorithm, he would not be understanding Chinese.

      So Levesque's calculations in the IJCAI paper miss the point and beg the question. I doubt that they even do the lesser thing, which is to show that an algorithm could not pass T2. (I have no idea what he thinks this has to do with the "System Reply," which is that Searle doesn't understand but "The System" does: having memorized the algorithm, Searle would be "The System.")

      However, "Stevan Says" an algorithm could not pass T2 either -- but not because of complicated hypotheses and calculations, but because of the symbol grounding problem: Words cannot be understood without knowing what their referents are. And in T2 there is no way to connect T2 symbols to their referents -- just to other symbols. So it's all just meaningless squiggles and squoggles even if the symbols are interpretable as meaning what they mean -- by Searle's real Chinese T2 pen-pals.

      So the premise of Computationalism is wrong, and hence the Chinese Room is just counterfactual sci-fi, because there could not be a T2-passing algorithm.

      Delete
  31. Suppose a machine were created that was able to learn language the way a child does (through trial and error, as well as by looking for recurring patterns, e.g., tense endings). The machine would constantly be updating its vocabulary; in other words, the machine would be learning. Eventually it would be able to recognize Chinese words. Would this recognition be a form of intentionality?

    ReplyDelete
    Replies
    1. "Suppose a machine were created that was able to learn language the way a child does (through trial and error, as well as by looking for recurring patterns, e.g., tense endings). The machine would constantly be updating its vocabulary; in other words, the machine would be learning. Eventually it would be able to recognize Chinese words. Would this recognition be a form of intentionality?"

      Hi Lucy!

      I'd like to take a crack at your question, now that we've talked more about language and language acquisition. For the purposes of my reply I've replaced the word intentionality with feeling.

      The machine you described seems to employ both supervised and unsupervised learning for language acquisition. The first question that comes to mind is whether this machine's symbol system is grounded via dynamic sensory systems.

      From the way you phrased it, I'll assume that the answer is no. This machine is capable of manipulating symbols and putatively categorizing them on the basis of their shape alone plus corrective feedback (assuming such a thing is even possible). You go on to assert that the machine would be able to recognize words, by which I assume you mean correctly categorize Chinese words. I would argue, however, that the machine has no felt understanding, for the same reason that Searle in the CRA had no felt understanding. The manipulation of symbols on the basis of their form is insufficient for a feeling of understanding.
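
      As a purely illustrative sketch of what "categorizing on the basis of shape alone plus corrective feedback" could look like (the toy labelled examples below are invented): new strings get the label of whichever stored example they most resemble by edit distance; the labels are just more symbols, so nothing is grounded and nothing is felt.

      def edit_distance(a: str, b: str) -> int:
          """Standard Levenshtein distance between two strings."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      LABELLED = {"猫咪": "cat-word", "小狗": "dog-word"}   # toy labelled examples

      def categorize(symbol: str) -> str:
          """Assign the label of the nearest-by-shape labelled example."""
          nearest = min(LABELLED, key=lambda ex: edit_distance(symbol, ex))
          return LABELLED[nearest]

      print(categorize("猫"))  # "cat-word": a correct sorting by shape, with no
                               # understanding of cats, words, or anything else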

      If I assume you were describing a grounded T3 robot capable of learning language like a child (using supervised, unsupervised learning, and instruction to acquire new categories) then it is altogether possible that this Chinese-recognizing robot is capable of feeling, but such a question is impenetrable to us because of the other minds problem.

      Delete
    2. The whole point is that it’s impossible to create a machine that would learn language the way a child does. A child acquires language thanks to built-in mechanisms in the brain/mind, while a machine would only be able to learn the language. And, because of the symbol grounding problem, the machine wouldn’t be able to attach the meanings of words to their referents in the outside world, so the machine’s language (even if grammatically correct) would be void of meaning, which is not the case for a child acquiring language. And because the machine would be so distinct from and inferior to what children can do, I don’t think there’s any way to infer intentionality from a machine recognizing words (after all, the machine would only be recognizing patterns).

      Delete
  32. “the proponents of Strong AI are those who believe three propositions: (1*) The mind is a computer program. (2*) The brain is irrelevant. (3*) The Turing Test is decisive”.
    I have difficulty seeing how those three premises are related. I agree to some extent with each of them. The mind is probably a computer program because, as we have seen, God or Nature would have been foolish not to use such a powerful tool, and I agree with that. There is truth to computation, and it seems probable that the brain uses such a tool. As for the brain being irrelevant, I think it could be the case, but so far all we have encountered that allows for cognition is the nervous system. AI does seem promising, but it lacks the capacity that all organic creatures have: a complex biological system. From what I learned in biology, that system is so complex and possesses so many molecular components that it must have been necessary for organisms to do what they can do.
    The article then introduces us to computationalism, which states that cognition is computation. Here is the first premise: “(1) Mental states are just implementations of (the right) computer program(s).” So if mental states are just implementations of the right algorithm, are they necessarily independent of their physical execution? I think this is what computationalism is trying to argue. But as Harnad says (summarizing my own thought at the same time): “How can the brain be irrelevant to mental states (especially its own!)? Are we to believe that if we remove the brain, its mental states somehow perdure somewhere, like the Cheshire Cat's grin?”
    “If we combine (1) and (2) we get: Mental states are just implementation-independent implementations of computer programs”
    But are they? Would the way I experience my mental states be the same if my brain were in a vat? It seems to me that my mental states actually depend on my brain and its biochemical composition. Whether or not there is an algorithm for my brain, I don’t see how its implementation in another physical being would produce the same mental content as mine. Or perhaps this is too theoretical for me to understand. The way I understand Harnad here is that the physical implementation is necessary but its particulars are irrelevant. What he points to as wrong is the claim that mental states aren’t just physical states, but rather physical implementations of a computer program. What differs, from my point of view, is that there is something intrinsic to mental states, which is that they feel like something. Mental states differ from mere physical instantiations because they bring awareness and qualia. At least this is how I see it.

    ReplyDelete
    Replies
    1. Hi Roxanne!

      I think I might be able to help a little bit.

      "Whereas the brain is irrelevant, I think it could be the case, but so far, all we have encounter that allow for cognition is the nervous system. AI does seem to be promising, but lack the capacity that all organic creatures have: a complex biological system."

      When you say this, I think you are suggesting that a T4-level robot would be the ideal level of Turing indistinguishability. You say that the nature and 'complexity' of a biological system are important to the cognitive capacity of an organism, and that a T3 robot, lacking this system, would not be a strong enough test for cognition.

      “How can the brain be irrelevant to mental states (especially its own!)? Are we to believe that if we remove the brain, its mental states somehow perdure somewhere, like the Cheshire Cat's grin?"

      Here I think he's simply taking a stab at computationalism, not commenting on the importance of a complex biological system. I think Prof Harnad was trying to make the point that computationalism's claim that the algorithm for cognition is implementation-independent is kind of silly, because it would imply that the brain's physical nature is irrelevant to cognition.

      "What differ from my point of view is that there is something intrinsic to mental state: (mental states are actually dependent on my brain and its biochemical composition) which is that they feel like something. Mental states differ from physical instantiation because they bring awareness and qualia. At least this how I see it."

      As an aside, I think your final statement is interesting. Are you implying that the inherent difference between a T4 robot and a T3 robot is that the biochemical nature of the T4 robot affords it felt cognition as opposed to unfelt cognition? That there is something unique about the complex biological nature of our brains that would make us different from Renuka and Riona? I myself lean toward T4 Turing indistinguishability, but what makes you so sure that 'feeling' is something afforded by T4 rather than T3?

      Delete
  33. Stevan says that there is a “synonymy”, or rather a conflation, of the terms “conscious” and “mental” in Searle’s paper, and consequently in many of the responses offered to that famous classic of cognitive psychology.

    Understanding (in the sense of a human understanding a language, like Chinese) is a conscious state. This means that the presence or absence of being conscious must be ascertained or attested to. Does this mean consciousness is determined by bringing consciousness into consciousness? So it has to say of itself that it’s a conscious thing: oh hey, I’m a conscious thing, in the sense of the Cartesian cogito; I am, therefore I am something that is. In other words, to determine consciousness, consciousness has to become an object of consciousness. I think that’s the direction Stevan is going in when he talks about the conflation of mental and conscious within Searle’s periscope. Consciousness is a periscope (a medium through which to be conscious) that brings mental states within its range to ascertain whether they are conscious. So I guess consciousness depends on consciousness? I think Stevan is right to be puzzled by Searle’s claim here. Searle wants to talk about what is strictly mental, but Stevan thinks he conflates it with the conscious, thereby contradicting the claims in his own argument.

    ReplyDelete
  34. The video linked above was very informative on Searle and his approach to the Turing Test, computation and cognition.

    The theory being tested here is that cognition is just a form of computation; Searle's own view is that studying the brain (neuroscience) is the way to understand cognition.

    The Strong AI thesis of computationalism is introduced, according to which all of cognition can be achieved through computation alone. This view also holds that dynamics are irrelevant and that only the software matters, i.e., cognition is hardware-independent. Personally, I am somewhat against this view because I believe that hardware is relevant to ‘thinking’. The human senses of touch, taste, smell and sight are extremely important in gathering knowledge, and without them we would be unable to think. Similarly, hardware plays a major role, and the way data is input into a system should factor into the larger process of ‘thinking’.

    A big question raised here is: is cognition just computation? Could a computation pass the pen-pal version of the Turing Test? And if it did, would that mean it understands?
    Underlying all of these questions is the mind/body problem, which the Turing Test aims to capture. But how much of the body is involved in this understanding? As mentioned above, for now I believe that the body plays a significant role.

    In Searle’s case, we see that T2 is on trial: Searle is testing whether T2 understands. We also see that all theories are underdetermined. There may be more than one way to pass the Turing Test, and Searle is only testing one of them: passing it through computation alone.

    I did not understand how time and speed factor into Searle’s Chinese Room Argument. It would be great if someone could explain more about this.

    What we also cover is the Systems Reply, which holds that it is the system as a whole that understands Chinese, not the person inside the Chinese Room. The Chinese Room Argument refutes the claim that cognition is just computation, but Searle goes a bit too far by claiming that cognition is not computation at all. In my opinion, cognition is partly computation: there are many instances where we use algorithms and structured, rule-based approaches to deal with certain situations, and this is the computational aspect of cognition (a small example follows below).
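
    To make that last point concrete, here is a small Python sketch (my own example, not Searle's or Harnad's) of the kind of cognitive sub-process that really is rule-based symbol manipulation: the column-addition procedure we are taught in school. The Chinese Room point is only that executing rules like these, by itself, is not understanding.

    def column_add(a: str, b: str) -> str:
        """Add two non-negative integers given as digit strings, the schoolbook way."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):   # rightmost column first
            total = int(da) + int(db) + carry
            digits.append(str(total % 10))             # write down the units digit
            carry = total // 10                        # carry the tens digit
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(column_add("478", "395"))  # -> '873', by pure symbol manipulation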

    ReplyDelete
  35. The author states: “Consider reverse-engineering a duck: A reverse-engineered duck would have to be indistinguishable from a real duck both structurally and functionally: It would not only have to walk, swim and quack (etc.) exactly like a duck, but it would also have to look exactly like a duck, both externally and internally. No one could quarrel with a successfully reverse-engineered candidate like that; no one could deny that a complete understanding of how that candidate works would also amount to a complete understanding of how a real duck works. Indeed, no one could ask for more.”

    After re-examining this topic I began to wonder: what if someone did demand more? We have often looked to reverse-engineering to determine how something functions, but can reverse-engineering really tell us how something works? Are there other methods, or something else, that could help explain it? I know this is a bit simple, but I can’t help wondering why we are resting on reverse-engineering when we could be missing a much bigger factor.

    ReplyDelete