Saturday 2 January 2016

10d. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [In special issue: Turing Year 2012] Turing100: Essays in Honour of the Centenary Turing Year 2012, Summer Issue.


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

62 comments:

  1. This is something I touched upon in my comment on the previous reading, but I wanted to bring it up again because it is still bothering me, and this article only made me more confused. So I decided this issue should get a whole post to itself.

    The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel. (page 3)

    Could we really have a non-feeling robot that passes the TT (assuming T3 here)? Maybe I’m just getting caught up in the other minds problem, but it seems like for a robot to actually pass T3, it needs the ability to feel. The only way for a robot to act like a feeling being in a TT-indistinguishable manner is for it to actually be a feeling being. Other people act like feeling beings because they are feeling beings (at least as far as I can tell), and it seems reasonable that this principle would apply to robots as well. In other words, wouldn’t the capacity to appear to be feeling require the capacity to actually feel? Otherwise, how could you possibly appear to be feeling?

    The Turing Test set the agenda for what later came to be called "cognitive science" -- the reverse-engineering of the capacity of humans (and other animals) to think. (page 1)

    If the Turing Test sets the bar for the reverse-engineering of our capacity to think, and a robot that passes the TT can do all the things we can do the same way we do them, then wouldn’t the reason it can do all those things like us have to be the fact that it can think like us? If the robot passes the TT, then we have successfully reverse-engineered cognition, and the TT-passing robot must be cognizing. How is it possible that this robot could be cognizing but not feeling? Are the concepts of cognition and consciousness/feeling completely separable from each other? Maybe I’m conflating the terms “think” and “feel” when I shouldn’t be. But doesn’t the process of thinking directly require the feeling associated with thinking to be going on as well? Maybe I’m missing something obvious and need to brush up on my Turing readings from the start of the semester. I just can’t see where I’m going wrong here, so any help pointing it out would be greatly appreciated!

    Replies
    1. Hi Alex,

      In response to your first question, I do think that a robot could pass T3 without feeling. In my opinion, the big problem here, which causes confusion as to whether this is possible, is the Other-Minds problem. We wouldn't even know whether the robot could "feel", even if we could program feeling. It is entirely a matter of opinion, but I think a robot could come off as a "feeling" being. When we assume that other people are feeling beings, a huge part of that is based on their behaviour: subtle changes in prosody and facial expressions that feel natural to us in given situations. Some people with autism or schizophrenia may show differences in affect in their speech and facial expressions, and may come off as "robotic", but we still know that they are feeling. I think we could engineer speech and expressions in a robot well enough that, even if it seemed a little different socially, people would still assume it is a person.

    2. Hey Alex and Amanda,

      Alex, you brought up something that confused me in this article as well, though not exactly for the same reason! I'm going to try to run through my thought process to arrive at my own confusion:
      Because of the other minds problem, were a man-made machine (computer or robot) able to do everything we do, we would be none the wiser as to whether that doing capacity was accompanied by feeling, just as we cannot be sure of the same thing with regard to each other! What Searle showed with his CRA was that the 'Chinese pen pal' computer program was able to do everything required for T2 (so the doing capacity checked out), but that this was not accompanied by the feeling of understanding. (Question for Dr Harnad or anyone with an input: is understanding also a weasel word for feeling? Or does this mark the distinction between a T2-passing computer program, which neither understands nor feels, and a T2-passing robot, which understands but may or may not feel?)
      I understand what you mean about the process of thinking requiring the associated feeling as well; and to the extent that, for us, 'feeling' is a laylek category, one for which we have no negative examples, it is understandable that having understanding, and the capacity to do everything we do, without the associated feeling would be unfathomable!
      A clarifying point in the article for me is: "Searle is simply pointing out that the same is true of computational simulations of verbal cognition: If they can be done purely computationally, that does not mean that the computations are cognizing", which points to the fact that the computer program is in effect 'mindless'!

      Essentially, similarly to what Alex is pointing to: can there be understanding without feeling? I know that we can't be any the wiser about whether a robot able to do what we do is in fact feeling its doing as well, but is that the best kind of answer we can hope for at this point?

    3. I am also a little confused about why it would be valid for Searle to make use of the Hard Problem to dispel a purely computational T2 candidate (by saying that he knows what it feels like to understand, and he is telling us he doesn't understand Chinese while implementing the program), if we are unable to assume that a T3 grounded robot also feels what it's like to understand, as opposed to just doing the understanding (but this comes back to whether or not understanding automatically implies a feeling of understanding).

    4. Joe: T3 need not feel, it need only sense (detect). (In fact, until the hard problem is solved, no one has a clue as to why anything should feel.)

      Amanda: "Program feeling?" You mean a computer program? Didn't Searle show that computation alone could not do that? (I think you are still stuck with thinking computationalism means "being programmed." That's sci-fi. A program that learn can become very different from how it started -- and unpredictable as well as unrecognizable to the one who wrote the program. (Distinguish the fact about whether or not something feels from the other-minds problem of whether you can know whether or not it feels.)

      Naima: Actually, "understanding" is not a weasel-word for feeling any more than seeing is a weasel-word for feeling: It feels like something to understand, just as it feels like something to see. And if something can behave as if it understands (e.g. T2) it could either really be understanding (if it feels) or merely acting as if it understands (as in Searle's Chinese Room). Understanding = T3 grounding + feeling. Ditto for seeing: Seeing is T3 optical processing + feeling.

      Searle is not "using the hard problem." He is using his own Cartesian certainty about thinking or understanding (Descartes' Cogito/Senti) to report (with certainty) that, no, he does not understand Chinese, because he knows what it feels like to understand, and it's not happening! (No need to solve the hard problem for that -- though it is a (special) penetration of the other-minds barrier (Searle's Periscope) that works only for the special case of implementation-independent computation (computationalism).)

    5. I think I have to agree with Alex here, and I've said something similar on another post. Like Alex I'm fairly convinced that nothing could pass T3 without feeling. Not because you need feeling to pass T3 (we've all tried and failed to find examples where feeling is not superfluous), but because if we were able to create something that could truly generate the performance capacity of a feeling being, down to every last detail, and for a lifetime, I believe there would be no way that it couldn't be feeling. Obviously this is just an intuition and could be completely false, but if in the far-off future the T3 TT-passing robot were to be created, I strongly feel that by generating all that we can do indistinguishably, the mechanism would also be generating feeling. Like Alex, I don't think that the two are dissociable. To be absolutely lifelong indistinguishable from a feeling being, I believe you would have to be feeling.

  2. - So it is unclear whether a T3 robot would feel, then, despite its functional capacity being the same as a feeling being's? I thought Harnad does not believe in the existence of zombies; so would a T3 robot always just be simulating, according to the CT-thesis? Or is it that once we implement the performance capacity, consciousness will arise, since the two are correlated; it's just that we'll never actually know the robot is feeling, though we assume it is, just as with the other-minds problem?

    - An off-topic question about feeling not in regards to this paper, but is feeling universal? Is consciousness a subjective experience, or is it the same among every individual? Are all living beings capable of consciousness (assuming that animals are capable of cognition and that plants are not)?

    Replies
    1. Please don't mix up "P is true" with "Stevan Says P is true." I don't believe that there could be a zombie T3. But I have no evidence or proof. Because of the other-minds problem, no one can know.

      Feeling is feeling. But not everything feels the same.

      It is just as probable that other mammals feel as that humans do. Birds too; and other vertebrates; and invertebrates too, even though the differences are pretty big once you get down to clams. And it's almost as improbable that organisms without nervous systems feel as that isolated organs or even stones feel.

    2. This was the concept I was having trouble with in class yesterday. How do we decide where to draw the line between feeling and non-feeling? You're saying it's improbable in organisms without nervous systems, which intuitively I can agree with. But because of the other minds problem (as you mention directly before the point I have a contention with), how can we know that "not everything feels the same"?

      This is maybe all just a matter of "Stevan Says" and believing in probable but uncertain things, the same way we believe apples won't start falling up. But I sometimes have trouble pushing the line down and away from "humans vs. everything else"; any elaboration or further reading you can suggest here?

    3. Hi Oliver,

      I find your comment about feeling being universal quite interesting and thought-provoking. I've recently read Christof Koch's theories about how consciousness is a universal property that extends across all species, and even to all things, to varying degrees. In Integrated Information Theory, this degree is denoted by Φ, which indicates the level of consciousness that an integrated system has. It is said that the more integrated an information-processing system is, the greater the amount of consciousness it has. Then the question becomes, "to what degree are we actually conscious?". Is all living matter in this universe (and even non-living matter) conscious to an extent? It seems like crazy-talk, but I find it fun to think about. There are many people who are convinced that we have a soul, that their pet animals have souls, that when walking through a calm forest the trees you touch have some kind of soul... again, you can never know whether something other than yourself feels, but what if it really does feel? We say the human brain is extraordinary and that we're superior to other animals, yet we grant that they have intelligence as well (if not more in some domains), so in fact our brain isn't this Godly thing after all. Maybe these metaphysical (or panpsychist?) views have some truth to them. We don't know and we'll probably never know...
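      To make the Φ-talk slightly more concrete, here is a toy Python sketch. It is emphatically not the real IIT measure (which is far more involved); it just scores a 4-bit system's "integration" as the mutual information between its two halves, so that independent halves score zero and tightly coupled halves score higher. All names and the example data are invented for illustration.

from collections import Counter
from math import log2

def integration(states):
    # Crude stand-in for "integration": mutual information (in bits)
    # between the left and right halves of a 4-bit system, estimated
    # from a list of observed 4-bit state tuples.
    n = len(states)
    joint = Counter(states)
    left = Counter(s[:2] for s in states)
    right = Counter(s[2:] for s in states)
    mi = 0.0
    for s, c in joint.items():
        p = c / n
        mi += p * log2(p / ((left[s[:2]] / n) * (right[s[2:]] / n)))
    return mi

independent = [(0,0,1,1), (0,0,0,0), (1,1,0,0), (1,1,1,1)] * 25
coupled = [(0,0,0,0), (1,1,1,1)] * 50
print(integration(independent))  # 0.0 bits: the halves ignore each other
print(integration(coupled))      # 1.0 bit: the halves are locked together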

      Christof Koch expressed that this kind of theory (that consciousness crosses all dimensions of life and of the universe) is the "most satisfying explanation for the universe, for three reasons: biological, metaphysical and computational." I'm curious to know what you (Professor Harnad) and the rest of the class think about this. Is it all just make-believe, just another idea lacking empirical evidence that can propagate across human minds (like organized religion...)? We humans love to find reasons for our existence and the meaning of life, so I can see how we're willing to believe in all sorts of things.

  3. Hi Oliver,

    I got the impression from this article that Turing did not intend or expect a T3 robot to be "feeling", but just to have our functional capacities without feeling. I do think a T3 robot would just be simulating and performing computations, and I think that this is what Turing had in mind. The computationalists took it a step further by asking whether all we (humans) do is computation. I do agree, though, and have mentioned this a few times in my skywritings: even if we were to program feelings into a robot, how would we know? We would have to accept it as feeling, just as we accept all others that way. Who knows whether in the future this will be a thing, or whether there will be robots' rights one day, just as there are women's rights.

    Replies
    1. Hi guys,
      Oliver, I'm confused as to whether a T3 robot would feel, and what the conditions for that would be. Let's assume T3 feels: would it be because we programmed it to, or would consciousness arise as a correlate, as you pointed out? As Amanda mentioned, Turing didn't expect that a robot passing the TT would feel. It would solely be executing actions similar to human behavior. But that implies language, and thus the ability to ground symbols in their referents in the real world. Does being able to do all the things a human being can do imply consciousness? If such a robot were built, there would be no way to tell whether it was really feeling. Again, we would have to trust its behavior and attribute the concordant feeling.

    2. Amanda: About "programming," see above. And Searle shows that computation alone would not be enough to generate feeling.

      Roxanne: Turing said give up on feeling (the hard problem) because the easy problem of doing is the only one we can solve (because of both the hard problem and the other-minds problem).

  4. I may or may not have expressed this in class, but I am not convinced that if we are able to build a T3 robot, that robot will feel. As we have mentioned before, T3 robots can do things without feeling them; consider the battery-charging example. The robot can be programmed such that when its battery drops below 15%, it finds the closest wall plug, sticks in its prongs, and sits and waits until its battery is charged back up to 100%; but nothing about this requires that the robot "feels" like it needs to charge (see the sketch below). I think the intuition that a T3 robot will have feeling just like us lends itself to the idea that feeling is a "byproduct" of cognition, which is to say that if we create a cognizing robot it must also be feeling. This I disagree with!
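    A minimal Python sketch of the kind of condition-action loop I mean; the Robot class and its methods are invented purely for illustration, and nothing in it corresponds to feeling anything:

import random

class Robot:
    def __init__(self):
        self.battery = 50
    def battery_level(self):
        return self.battery
    def do_next_task(self):
        self.battery -= random.randint(1, 5)   # tasks drain the battery
    def charge(self):
        self.battery = min(100, self.battery + 10)

def run(robot, steps=100):
    for _ in range(steps):
        if robot.battery_level() < 15:          # detects a threshold, feels nothing
            while robot.battery_level() < 100:  # "sits and waits" until full
                robot.charge()
        robot.do_next_task()

run(Robot())  # all the charging behaviour happens; no one is home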
    This article nicely summarizes a lot of the key points of the course up to this point, but it does not convince me of this business of "feeling".
    Additionally, the quote "But where does this leave Turing's test, then, which is based purely on doings and doing-capacity, indistinguishable from the doing capacity of real, cognizing human beings?" makes it seem as though Turing wanted the TT to test for indistinguishability of cognition and feeling, but I'm not sure this is the case. The TT (or at least T3) is, in my opinion, sufficient to test for performance capacity, to test the doing, to solve the easy problem. But I don't think that Turing meant for the TT to address the hard problem.

    Replies
    1. Hi Renuka!
      I am curious as to why you think that if we create a cognizing robot we cannot also say that it is feeling. Once we have reverse-engineered cognition, I think the argument is that there are SO few ways to do it (just as evolutionary psych points to the mechanism of the eye, of which there is pretty much one design for most species, with few exceptions) that we will have a feeling thing once we have a cognizing thing.
      I don't know how I feel about it still so I am curious as to why you disagree.

    2. Renuka: We won't know (because of the OMP) whether T3 will feel. But we don't "know" that anything else but ourselves feels (or doesn't feel): whether they are rocks or toy robots. The point with T3 (i.e., you!) is that we can't tell it apart from any of the rest of us (in any relevant way: we don't go around brain-scanning one another, and wouldn't know what to make of it if we did).

      Turing said forget about feeling: we can't test to see whether we've generated it (let alone explain it), but that the TT is enough to explain all of what we can test for and explain. And that's as much as we can ask of cogsci.

      Julia: I agree that the degrees of freedom are pretty narrow once you've reached T3. But normal underdetermination is when you have explained all the evidence, yet there is more than one explanation that can do the trick. With T3 (or even T4 or T5) you have not explained all the evidence, though the bit you left out (feeling) would make a huge difference. And since we know we feel, a T3 or T4 that left that out would have left out a lot, and would be the wrong theory, not just an under-determined one. But Turing's point is that because of the OMP, there's no way we can know whether T3 does or does not feel. (Ask Renuka or Riona!)

  5. • “Many people have assumed that Turing had meant and expected the TT-passer to be a purely computational system…”
    • Was Turing a computationalist? Many people thought that to pass the Turing Test the candidate had to be purely computational, like the system in Searle's CRA: if squiggle, then squaggle. It seems to me that this was not the strength of the TT, and that it probably wasn't exactly what Turing had in mind. Turing was clearly thinking that the capacity of the machine was not going to be purely computational if it was to fool a human for a whole lifetime into thinking otherwise. This article gave me my first real understanding of why it is hard to maintain that Turing was a thoroughgoing computationalist.

    Replies
    1. Hi Julia!
      It does seem that many people have assumed Turing was a computationalist. But I'm not sure he was. Even if he believed that a Turing machine could simulate just about everything, this doesn't imply it could fool a human being in a dynamical world! Harnad argues in this article that Turing was aware that, to pass the Turing Test, a robot would have to possess sensory "organs" and motor functions. For a robot to be capable of passing the TT, it would have to be able to ground symbols, and thus to have a link with the world.

    2. Julia: T3 cannot be just a computer...

      Roxanne: Grounding is not possible with computation alone.

    3. SH: I pasted directly from my relatively stream-of-consciousness notes, but I know that T3 is grounded in a physical dynamical system.
      I think what I meant is that the fact that Turing envisioned a system powerful enough to fool someone who is grounded in a physical world means that he probably did not think that computation alone could explain cognition.
      Basically, I meant to say that the very fact that Turing thought passing the TT was possible seems to me enough to show he wasn't a computationalist.

  6. I think this article was a good, quick summation of the main points throughout this course. We need to accept that there are things we will not be able to know or to test for because of the other minds problem (such as whether the T3 robot feels, or whether anyone feels except me), in order to advance the field of Cognitive Science. Given this, the Turing Test and a T3-passing robot can offer evidence of one causal mechanism for human cognition (although, even if we solve the easy problem of reverse-engineering cognition, there is a possibility that it is not the way we actually cognize, because our cognition is felt cognition, and we cannot test for the capacity to feel). Thus, Turing was basically saying that identical performance capacity (being able to do everything that we can do) is all we can aim for, and is one way to figure out the hows and whys of cognition (and since we do not have any other inkling of the causal mechanism of human cognition, it is a good place to start).

    “What is missing to make symbols meaningful, the way words and thoughts are meaningful to us? I've dubbed this the 'symbol grounding problem'.”

    One question I have is where does meaning fit into everything? We discussed earlier that meaning is the combination of grounding and feeling. So, assuming the T3 robot does not feel, the robot would use its sensorimotor capacities to ground symbols in the world, but then what the robot does/says would not actually mean anything to the robot? It seems as if meaning and feeling are linked closely together. It feels like something to understand a word or know what to do with an object. I am having a hard time dissociating feeling and meaning. Then again, meaning is quite a subjective thing – I can describe what it means to someone, but they will never truly know and they have to take my description at face value (and I guess the same would be said for the T3 passing robot). I guess this would mean that I am moving towards the idea that if a robot can pass T3, then it probably is the case that it means what it says and is feeling too (but we will never know for sure).

    Replies
    1. You covered all the options: Meaning = T3 grounding + feeling. We can't know whether or not T3 is feeling, so either it is a grounded zombie or it's feeling. ("Stevan Says" if there can be a T3, it will feel, but because of the other-minds problem we cannot know whether, and even if we did, because of the hard problem we cannot explain how or why.)

  7. Question: Do certain species have ‘more’ feeling than another species? Or have a greater ability/capacity to feel than another species? If so, is there a way we could figure this out?

    Replies
    1. Hi Lucy,

      You're getting at a really interesting issue here. Feeling can be quantified (albeit somewhat crudely) in the form of a questionnaire to assess pain, for example. That being said, this isn't close to a perfect measure: the person could be lying and not feel a thing. So being sure of a measurement of feeling in humans is already problematic. Measuring it in nonhumans who can't communicate to us is even harder.

      But we know that different species have different capacities to detect sensory stimuli. For example, we know dogs smell much better than humans do. So it's very possible that the feeling of smelling peanut butter is stronger for a dog than a human. That being said, we should be cautious about proposing a direct relationship between feeling and the strength of a sensory signal.

      But the most interesting aspect of feeling (to me anyway) is that it's qualitative, not just quantitative. I don't know what it's like to be you and you don't know what it's like to be me. Here we probably aren't dealing with more or less of a certain dimension. Another way to look at it is to wonder what it feels like to echolocate. Bats can do that but we can't. And no matter how well we understand a bat's neurophysiology, we won't know what it feels like to echolocate.

    2. Someone may feel more intensely, or feel more things than another. Ditto for one species and another. But whether they are feeling at all (rather than feeling this vs feeling that) is not a matter of degree. You are either feeling (anything at all) or you are not.

    3. Hi all,


      Yes, you are either feeling or you are not. And maybe a certain species has 'more' feeling than another. But still, we can't even answer this question if we can't be sure anyone else feels at all, because of the other minds problem. So no, there is not a way we could figure this out. "How much you are feeling" is very subjective. If the same exciting news were delivered to two people, A and B, A may feel happier and more excited than B, but we still wouldn't know anything for sure. A may just rate their happiness higher, or react by shrieking, whereas B tends to just give a big smile and then internalize it. You could say that females have more feelings than males because we are more emotional, but this may just be a cultural thing, in which females are more expressive about their feelings than males.

      Regardless of the strength of feeling, the question at hand is still just whether or not you feel. So when testing T3, it's a matter of whether it feels or not. That is what could help answer the hard problem's question of "why" we feel. Measuring feeling will not.

    4. So does feeling then just represent the range of being conscious? This is confusing me because I do see feeling in a lot of different categories. Not that they're in a hierarchy, but in our everyday language we do say we "kind of feel" something or "slightly feel" a certain way. Is this just more of a problem with semantics?

    5. Hi Prof Harnad,

      "Someone may feel more intensely, or feel more things than another. Ditto for one species and another. But whether they are feeling at all (rather than feeling this vs feeling that) is not a matter of degree. You are either feeling (anything at all) or you are not."

      I feel like this keeps coming up in the skywritings, but what could possibly account for this 'all-or-nothing', emergent quality of consciousness? It's reminiscent of our discussions of how propositions could have come from pantomime without any steps in between.

      In this case, I'm left wondering where we draw the line in the sand for things that are cognizing and things that are not, and furthermore, things that are feeling and things that are not.

      I've outlined this argument previously, but it still leaves me baffled.

      How could this absurd, pseudomystical capacity to feel evolve all of a sudden, or at a certain level of complexity? Where is the common evolutionary ancestor who was capable of feeling? Which animals are not part of that branch?

      Aaaaagh it's frustrating. Feeling is so slippery because it feels so obvious and accessible and yet defies explanation.

  8. "Well Searle is not feeling the understanding of Chinese when he passes the Chinese TT. He can distinguish real understanding (as he understands English) from just going through the motions: just doing the doing."


    Is Searle really distinguishing between feeling and doing here? Or is he distinguishing the feeling of understanding from the feeling of going through the motions? Sure, he's not feeling the understanding of Chinese, but that doesn't mean he isn't feeling at all. I actually recall a lecture where Stevan, all of a sudden, began speaking in a language other than English, to show that it feels like something to not understand. To say "not feeling the understanding" is pretty much the same as saying it feels like something to not understand, right?

    Replies
    1. Of course Searle is feeling something. Memorizing and executing a Chinese T2-programme does not make you go into a coma! But it doesn't make you understand Chinese either.

    2. Yes, Nicholas that is a good point. It feels like something to not understand which is basically the feeling of being CONFUSED. It's not the best feeling in the world!

  9. Hi everyone,

    Today in class I mentioned Benjamin Libet's "free will" experiment, in which someone is asked to report/press a button when they decide to make a motor movement. The study showed that the neural activity responsible for the motor movement took place before the participant "decided" to make the movement. Here is the primary source: http://arche.depotoi.re/autoblogs/wwwpauljorioncomblog_4fc2b52dc5e5e5503ee40e1bff73e7307c661025/media/0511df97.Brain-1983-LIBET.pdf People other than Libet have interpreted this as evidence that free will is illusory.
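    The approximate average timings behind that interpretation can be laid out in a few lines of Python. These are round numbers in the ballpark of what Libet et al. (1983) report, included for illustration only, not as data:

# Readiness potential (RP) onset precedes the reported moment of
# "deciding" (W), which in turn precedes the movement itself.
events_ms_before_movement = {
    "RP onset (brain activity)": 550,
    "W (reported urge to move)": 200,
    "EMG onset (movement)": 0,
}
for event, t in sorted(events_ms_before_movement.items(), key=lambda kv: -kv[1]):
    print(f"{event:30s} t = -{t} ms")
# The RP leads the felt decision by roughly 350 ms: that gap is what
# the "free will is illusory" reading rests on.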

    Noam Chomsky was asked about the implications of these findings. Here is his reply: https://www.youtube.com/watch?v=J3fhKRJNNTA

    I'm not sure I agree with his view on this issue though. What does everyone else think?

    Replies
    1. (1) Unconscious decisions (if Libet’s study shows what it’s taken to show) are not free will, but the opposite.

      (2) And Noam's prior point (that if you don't believe in free will then there's no point in anyone trying to reason with anyone) is not primarily about free will: it's about causal predeterminism, which applies not only to human choice but to anything that happens in the universe.

      Even if everything that happens is already predetermined by the Big Bang, we don’t know what is going to happen (uncertainty).

      And (whether or not everything is predetermined), just as we don’t have any choice about feeling whatever we feel, we have no choice about feeling that we do what we choose to do because we feel like it (i.e., because we willed it). And this is true whether the choice was unconsciously made in advance of the decision or simultaneously with the decision: the hard problem means there’s a causal problem either way.

  10. Since this paper is a nice review of the hard and easy problem as pertaining to this class, I wanted to attempt to explain the importance of each Turing test (TT) as well as describe the consequences of passing each test.

    T2 (since T1 is just toy problems) is a test of verbal capacity; in other words, of whether the subject in question can communicate through language, indistinguishably from a human, for a lifetime. This is the test that Turing originally described under the name "The Imitation Game." T3 includes verbal and sensorimotor capacity, indistinguishable from that of a human. T3 is the test that will determine whether or not we have reverse-engineered something that can do all the things we can do. In technical terms, T2 can also be sufficient to test cognition, because in order to pass T2 one would need a T3-passing robot to be doing the communicating with the human. This is because, as Searle pointed out, cognition is not just computation; and without a robot with sensorimotor capabilities, we are stuck with a computer sending emails based purely on symbol manipulation. A computer just sending emails has no way to give meaning to words, so it is unlikely that it can pass T2. We know that it can't give meaning to words because of the symbol grounding problem, as conveyed by the quote below:

    “Consider a Chinese-Chinese dictionary. It defines all the words in Chinese. But if you don't
    already know at least the meaning of some Chinese words, the definitions of the
    meaningless symbols only lead to more meaningless symbols, not to meaning.”

    In order to give words meaning, they have to be grounded by sensorimotor interaction with the world, which is why, in order to pass T2, a robot with sensorimotor capacity is required. Lastly, "Stevan Says" a T3-passing robot will feel. This is because it is, in terms of performance capacity, indistinguishable from a human; so if this robot acts just as a human does, then without prior knowledge that it is a robot there should be no denying that it feels, just as we believe other humans feel. You can be skeptical that it feels, because you cannot know for sure due to the other minds problem; but then, by the same logic, you should be skeptical that any other human or living being feels as well. If you kick a T3 robot, it will react the same way any human would; why would you believe that it does not feel the kick, yet a human does?
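    To make the dictionary-go-round concrete, here is a tiny Python sketch (the toy dictionary entries are invented for illustration): looking up a symbol only ever yields more symbols, so without sensorimotor grounding the regress never bottoms out in meaning.

toy_dictionary = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "hooves"],
    "stripes": ["lines", "parallel"],
    "animal": ["living", "thing"],
}

def define(word, depth=3):
    # Expand a word into its definition, recursively:
    # symbols in, symbols out, meaning nowhere.
    if depth == 0 or word not in toy_dictionary:
        return word
    return "(" + " + ".join(define(w, depth - 1) for w in toy_dictionary[word]) + ")"

print(define("zebra"))
# (((living + thing) + hooves) + (lines + parallel)) : still just symbols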

    “The physical CT does not imply, however, that everything in the physical world is just
    computation, because everyone knows that a computer simulation of (say) a plane, is
    not a plane, flying (even if it can simulate flying well enough to help test and design
    plane prototypes computationally, without having to build and test them physically, and
    even if the computation can generate a virtual reality simulation that the human senses
    cannot distinguish from the real thing -- till they take off their goggles and gloves).”

    Correct me if I am wrong, but a computationalist would agree with the physical CT? Or would at least agree with the passage above, that a computer can simulate the physical world, but that the simulation would not be the actual thing? With this logic, I am completely perplexed that a computationalist would suggest that cognition is all computation. A computer simulation can never actually be the dynamic physical process that it simulates; so why would anyone believe that computation is cognition, something that directly implicates dynamic processes?
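    The plane passage can be made concrete in a few lines of Python: the loop below computes the trajectory of a falling object perfectly well, but nothing in it falls. The simulation has the form of the dynamics without generating any of the dynamics. (The numbers are arbitrary, for illustration.)

g, dt = 9.81, 0.01              # gravity (m/s^2), time step (s)
height, velocity, t = 100.0, 0.0, 0.0
while height > 0:               # Euler integration of free fall
    velocity += g * dt
    height -= velocity * dt
    t += dt
print(f"simulated impact after {t:.2f} s")  # ~4.5 s, and nothing got dented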

    Replies
    1. Good summary, and shows good grasp of it all.

      Both computationalists and non-computationalists probably agree on the Strong C/T thesis (that computation can formally simulate just about any dynamical system, but that does not mean it can generate the dynamic properties themselves: just simulate them formally).

      But since (1) no one yet knows what thinking is and (2) only the thinker can feel it, computationalists conclude that thinking is not a dynamic property but a computational property (i.e., among other things, implementation-independent); so when you implement the computation, you are generating -- not just simulating -- thinking.

      Because of the other-minds problem, the only one who could contradict this would be the thinker; but we have no way of knowing whether what he's saying is true. So there's no way to get around the "invisibility" of thinking -- other than Searle's Chinese Room Argument.

      Common sense and biology, however, suggest that thinking is a biological trait; and introspection adds that it is a felt biological trait. And feeling is not formal but dynamic. (The trouble is that no one can say how (or why) a dynamical system (the brain) generates thinking rather than just doing.)

  11. "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition. The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel” (Harnad 2012).

    Could someone explain the rationale for how Turing used both the other-minds problem and the hard problem to say that doing capacity is the only thing we can explore? 1) I don't understand why the reverse-engineering explanation of our doing capacity arbitrarily stops at doing, if I assume T3 does in fact feel. 2) This is built on the premise that we can trust that T3 feels. Let's ignore the other minds problem momentarily (though this isn't possible) and say T3 does feel. Why does the reverse-engineering explanation (the idea that we have put this thing together part by part) not explain feeling, if feeling occurs? I still believe feeling must play some powerful adaptive role, given its widespread occurrence throughout the evolutionary tree and its central role in our everyday lives. Therefore, I think that if the T3 robot did feel (though we couldn't prove it), there would have to be some sort of mechanism producing the feeling, which we must have used in the reverse-engineering process. If the internal goings-on are unobservable to us, and we are willing to extrapolate from third-person performance capacity to the internal processes, then why does the extrapolation stop there? Unless you clarify that the internal processes themselves are actually not first-person science, and that the sole difference between feeling and these internal processes is that feeling is based solely in the first person, on what we say we feel. Then this extraneous "interpretation" of the internal processes couldn't be explained by the how and why of the internal processes, because they are distinct, and feeling is just an emergent phenomenon; the causal mechanisms cannot explain it, because they target only the internal processes, and we have to settle for the claim that feeling just emerged. Unfortunately, arguing about the potential adaptive capacities of feeling in class has left me muddled on the precise connection between the hard problem and the TT.

    Replies
    1. Hi William,
      I'm not sure I understand everything you've mentioned and brought up, which is why I'm just going to attempt to address a few of the points (and conveniently ignore the others, haha).
      I'm not quite sure the entire argument is built on the premise that T3 can feel. Turing said that doing capacity is the only thing we can explore specifically BECAUSE there is no way to make a demonstrably thinking/feeling/conscious machine. The whole point about why robots can't be shown to feel is that there is no known mechanism producing feeling that could be implemented. For humans, too, there is no specific body part or process that we can attribute feeling to; otherwise it would just be a matter of time and technology to simulate it in a robot.
      Also, I'm not sure what you mean by "internal processes". Do you mean the so-called heterophenomenological, bodily observable phenomena? Because in that case those are still physical processes, independent of the mind. It doesn't make much difference whether they're going on above or below the skin because, as mentioned, there is no specific physical body part attributed to feeling. This whole "first-person narrative" is the precise thing that can't be accounted for, and it is what is causing the entire debate. But I don't think for a moment that Turing built his argument on the premise that a computer feels.
      Sorry if I ended up answering beside the point or missing what you were getting at; I just tried to respond to what I think I understood.

  12. After reading this paper, and honestly throughout all of Monday's class, I was trying to wrap my head around the distinction between 'feeling' and 'consciousness'. Are feeling and consciousness to be considered synonyms (or, I suppose, weasel words for each other)? Because I'm trying to think of a scenario in which a being is unconscious but still feeling, and I can't think of one. Even if someone is in a coma and seems to have some sort of neural response when touched (not sure if this is true; if anyone knows, I'm curious), can it really be said that they feel? Especially if we are using "feel" in the same sense as in "it feels like something to think/talk/write". I also think maybe this has to do with the distinction between feeling in a physical sense and in a non-physical sense, but I could also just be getting myself lost down a philosophical path I don't want to go down.

    All of this being said, I really appreciated this paper as a summary of what we've learned this semester. One of the sentences that stood out most to me was "But I cannot doubt that what it feels like right now is what it feels like right now", because I think it speaks to the crux of why deciding whether a T3 robot (or another human, or any other being) is feeling will almost always be impossible. We can never know what anyone else "feels like", or how they "feel" when they say that they are feeling, and I really don't think it's satisfying to say that just because we can reverse-engineer a T3 robot to cognize in a satisfying way, we can therefore say it is definitely "feeling".

    Replies
    1. "is feeling (something, anything)" is synonymous with "is conscious."

      "Feeling" is the most straightforward and revealing word for what all the weasel-words are referring to (always the same thing).

      Feelings in the "nonphysical sense"? (What on earth does that mean -- apart from the fact that feelings don't "feel" physical?)

      The "other minds problem" is not about whether someone is feeling this or that ("heterophenomenology" -- T4 can probably give a pretty good prediction of that): it's about whether they are feeling anything at all. Turing's point with the TT is that when we can no longer tell apart a T3 candidate from any of us (in a relevant way: we don't cut one another open and look in one another's brains, and we don't judge it by either the color or the texture of their skin) then it's arbitrary to be more (or less) sceptical about a T3 than anyone else.

    2. Hi Prof Harnad,
      Thanks for the clarification -- I should have used a different word than "non-physical", but I was referring to the different types of feeling that you discuss in 10c. Sorry about the confusion!

  13. This article was a good wrap-up of many concepts we've spoken about during class: Turing Test, SGP, feeling, other-minds problem, simulation, etc. We've gone over again and again how passing the T2 only involves computation, and given Searle's Chinese Room Argument we see that this computation leads to no understanding, so we must have sensorimotor grounding experience and interaction with the world. It has been well drilled into my head by now that we need at least a dynamic T3 robot to be able to pass the TT and to have indistinguishable behaviour from humans. One thing we can never be certain about however, is that this T3 robot is feeling, and this is due to the other-minds problem. I keep going back to the thought that we only know one thing for certain - that it feels like something to feel, and that we know we are feeling from a 1st person point of view because it feels like something to feel (and to even doubt that is to feel!).

    The reverse-engineering of cognition, however, to address the easy problem and to provide causal mechanisms, does not seem like it would ever answer the question of how we feel either. Even if we somehow reverse-engineer consciousness/feeling, we wouldn't actually know 1) whether this machine is feeling or rather a zombie, and 2) whether these causal mechanisms are the way we are doing it. I know we've talked about how doing this would provide ANOTHER way we could be doing it, but it won't be THE way we do it (causal mechanisms differ, and feeling remains ambiguous). So then, are we not stuck in a perpetual loop of never being able to address the hard problem? What are the other options besides reverse-engineering? I posted a while back about the cross-frequency coupling dynamics going on in the default mode network in the brain, and how this is speculated to be involved in self-referential processing, consciousness, projections of the self into the future, etc. Of course, this does not solve the how/why question (maybe it gives more of a clue as to the "how" aspect), but it just seems to me that we need to understand what we are able to do as humans before we go off reverse-engineering... We want to generate the same "doing-capacity" and "feeling-capacity" as our own, after all, no?

  14. Searle showed that formal symbol manipulation is insufficient to produce conscious understanding. This relies on the premise (which I accept) that symbols (or anything) are not meaningful insofar as they are not consciously understood as meaningful (i.e. meaning must be felt). Does this mean that in order to solve the symbol grounding problem, and thus to explain how symbols can be understood, one must solve the hard problem of how that understanding can be conscious?

  15. This paper is a nice summary of the topics brought up in the other three, and compacts all the important points into a few well-worded paragraphs. The main point is that although Turing formulated the Turing Test, and although the test has been used as evidence that a robot could be programmed to "feel" as a human does, that might not have been Turing's own view. That is, his test might have been used in a way he had not intended. It is explained that we can observe what we are capable of doing. Turing proposed a test in which a robot might be able to do the exact same things a human can do, and be believed to be human by other humans. He does not attempt to explain how the robot might be able to do this, simply that its ability to do it might help us understand cognition. Computationalists held that cognition is just computation; in that way the robot would be a cognizing entity. Searle countered this with his Chinese Room Argument, in which he showed that doing is separate from thinking/understanding: one can follow specific rules and output perfect Chinese without understanding a single word of the language. This ties in with the symbol grounding problem, wherein there needs to be a way for symbols to be attached to what they represent in the world (which a computer cannot do; it can only link a symbol to other, definition-type symbols). Therefore it seems that Turing wasn't a computationalist. He believed the ability to do did not entail the ability to feel. All he did was say that "explaining doing power was the best we could ever do scientifically". He is not saying that the ability to imitate feeling is real feeling, or that it explains cognizing and feeling, although it might help.

  16. This was a nice summary of the course and felt like a natural way to bring together various concepts (all of which are inextricably linked anyways).

    What stood out for me was the paragraph at the end: "The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel."

    I guess at some point in this course I convinced myself that a robot that could pass the Turing Test would have cognition; that if we can create a robot that is indistinguishable from a human (in terms of email capacities), i.e. that passes the Turing Test, then that robot has cognition. But I'm realizing now that this is not necessarily the case. I didn't consider the possibility that we could create a robot that passed the Turing Test but did not have cognition, which is what this passage is suggesting. It is entirely possible to have a robot that possesses both computational and dynamical systems, and that passes the Turing Test, yet does not have cognition because it does not have feeling. This sort of goes back to what I was saying before about cognition maybe not requiring feeling: clearly I was wrong about that, since feeling is part of what we do and therefore part of cognition; so let me rephrase and say that maybe passing the Turing Test does not require feeling. I wonder what this means for the future of AI. Is it dangerous to be creating robots without feeling (if the world comes to a place where we are mass-producing robots)? It doesn't seem like the creators of AI will have (or should have, from a purely scientific point of view) any interest in figuring out how to create feeling in robots, since the issue is so complex. Furthermore, why would they put effort into creating feeling if they can create a robot that passes the Turing Test without it?

  17. I am at a loss as to what else to say; this paper is simply a repeat of everything we’ve already extensively covered in class. So I will instead consider some random questions that pop to mind.

    If the strong Church-Turing thesis is true, would that then mean that a simulation of cognition would be equivalent to the 'zombie' cognition that some talk about? My answer is yes, but only assuming that either of those things could exist. I think that neither of them could, because having a feeling of what it is like to be is an essential element of cognition, some fundamental property. Why do I think this? No clue, except that feeling is likely a fundamental part of social connection, and thus of our awareness of ourselves and of cognition. A chicken-and-egg scenario.

    Are neural nets computation, and does this change things? Yes, they are computation, because we can write them as programs; and no, it doesn't change things because, while they do a better job of connecting symbols statistically, they still do not have sensorimotor connections to the things that give meaning. Does this matter? Perhaps not, if we can do a good enough job of teaching and translating this experience and meaning. Everything is already interpreted by our visual/auditory systems anyway, so I'm not sure I see a difference. Neural nets can learn on their own, more or less, as we do.
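    On the "neural nets are computation" point: writing out a forward pass shows that it is just arithmetic on numerals, i.e., implementation-independent symbol manipulation. This is a minimal sketch; the weights are hand-set purely for illustration, and nothing in it touches the world.

import math

def forward(x, W1, b1, W2, b2):
    # One hidden layer of tanh units, then a linear output:
    # nothing but sums and products over numbers.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

W1 = [[0.5, -0.2], [0.1, 0.8]]   # toy weights
b1 = [0.0, 0.1]
W2 = [1.0, -1.0]
b2 = 0.05
print(forward([1.0, 2.0], W1, b1, W2, b2))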

    Replies
    1. Hey nirtiac!

      I was looking to address your first question (hopefully) without misconstruing anything you said.

      "If the strong church-turing thesis is true would that then mean that a simulation of cognition would be equivalent to the ‘zombie’ cognition that some talk about? My answer is yes, but only assuming that either of those things exist."

      The strong Church-Turing thesis (CTT) is the proposition that computation is powerful enough to simulate any property of the world to as close an approximation as you like. You propose that a simulation of cognition would be equivalent to the 'zombie cognition' that has been previously discussed.

      I would say that this depends on your definition of equivalent. If you mean to say that these two will have precisely the same properties, I would disagree. The simulated cognition suffers from the fact that it is entirely computational. Without sensorimotor, dynamic components, simulated cognition is a series of ungrounded symbol manipulations without meaning. With simulated cognition we immediately encounter the symbol grounding problem, and as Searle showed, strict symbol manipulation is devoid of semantic meaning, and thus devoid of the feeling of understanding.

      With a cognizing zombie, I think the distinction is subtler. This is an individual who, for all intents and purposes, possesses the same sensorimotor capacities that we do, and can behave indistinguishably from us. This individual is not obviously an unfeeling creature, but is by definition (for the purposes of imagining where feeling comes from) unfeeling. Unlike the previous 'cognition', this one can pass for a cognizer.

      These two are alike in that they are both unfeeling: one by way of the symbol grounding problem, and the other by definition.

      " I think that neither of these things could exist because having a feeling of what it is like to be is an essential element to cognition, some fundamental property."

      I won't speak to my personal opinions in the matter, but I would argue that the question of whether either of these 'cognitions' could exist is insoluble. In the case of the simulated cognition because the strong CTT has not been proven, and in the case of the zombie cognizer because of the other minds problem.

  18. I think that the hard problem, even though it was coined over 20 years ago, signifies, for those who subscribe to it, a major turning point for the field of cognitive science. The whole point of Turing's imitation game, which started all of this, "was to make it quite explicit that our goal should be to explain how we can do what we can." While we have built systems approaching T2, and there have been incredible advancements in robotics and AI (recently the AlphaGo program beat Lee Sedol, one of the world's top-ranked Go players, in three consecutive matches, for example), there are quite a few issues standing in the way of cognitive science/robotics/AI and its goal of explaining human cognition, and it doesn't look like many, if any, of these issues are going away.

    Obviously, this field includes many scholars who dispute the existence of the hard problem. Nevertheless, if, as "Stevan Says", there really is no solution to the hard problem (and I only ask because this short paper is very much a reiteration of the previous papers), where does cognitive science go from here?

  19. I completely agree with Harnad: I do not think Turing was a computationalist. In creating the imitation game and striving for the Turing machine, he was striving to do just what his game's title suggests: imitate. He was not trying to create a machine that would be a person, or be conscious, but one that would cognize LIKE a person. His goal was to simulate the performance capacity of a human in a simulated world. His goal was not to make a real human, or real cognition, but just a simulation; and, as Harnad points out very clearly, we know a simulation is an approximation. Turing was only striving to simulate with computation everything one can do; he was not saying that everything we do IS computation.

  20. ‘Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition. The successful TT-passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.’
    It feels like the path to understanding feeling might need to relinquish any and all connection to neural correlates and neurochemistry. I say this as I'm currently trying to dissociate my understanding of neuroscience and the molecular underpinnings of our behaviour from the task of critiquing consciousness, which I had always thought emerged out of the neurochemical soup of our brains. It feels like (the irony in that phrase) the answer will always be beyond our reach or comprehension, because we're asking a system, or a person, to explain what it does not even know how to explain. It feels cyclical, but also liberating.

  21. This gives a nice summary of the entire course in a quite concise manner. It showed really well how all of the concepts fit together.
    I wondered while reading where we draw the line in consciousness. It seems only a robot that could do everything we can do would count as thinking, but does it work the same way with feeling? In other words, to be conscious and cognizing, would a robot have to feel everything that we can feel? Or is a little bit of feeling enough? For example, what if it had all the sensorimotor feeling but couldn't feel what it feels like to understand? Is some feeling enough, whereas some doing is not?
    Personally, I agree with Prof. Harnad that in order to do everything we can do, a robot would probably have to feel. If some of the doing could be done without feeling, I wonder whether we could then count it as cognizing.

    ReplyDelete
  22. I want to elaborate on what I said in 10c (about Descartes, metaphysics, etc). When we say “a person”, we do not mean a “thinker”. Nor do we mean a “doer”. We mean a thinker+doer. Similarly, when I say “I”, I do not mean just the thought of me, but me – including my body. Although C.S. Lewis famously declared: “You are not a body with a soul. You are a soul and you have a body”, we do not mean just our “soul” when we refer to a person. (Although I think this is largely common sense, we can also look to examples of human rights and dignity, whereby people’s dignity is conferred by treatment of their bodies.)

    When we say “person”, we do not mean “robot”. So while Turing's test is interesting, I am not sure how important it is.

    Whatever metaphysics we choose must explain this fundamental unity of the body and the mind/soul/whatever. If it cannot explain it, then it must be scrapped. (As they say, metaphysics precedes all else – except experience, of course.) Physicalism to me does not explain it. And therefore it must be scrapped. Or modified.

    ReplyDelete
  23. I really enjoyed this reading: short and sweet!

    This reading got me thinking about the whole T3-feeling business.
    Let’s just go with the assumption, for a second, that T3s do feel: it still doesn’t solve the hard problem. That is, we still don’t know how or why the T3s that feel, feel. Say we knew for certain that T2s don’t feel and T3s do, or that T3s don’t and T4s do – even knowing this, it still wouldn’t tell us how or why the given difference between the T2, T3, and T4 models generates that “feeling” difference. The difference would presumably be the underlying cause, but we still wouldn’t remotely understand its causality.

    Additionally, since the TT is a test of weak equivalence, it could be that both a feeling and a non-feeling T3 pass the TT, but the TT wouldn’t be able to pick that up; it couldn’t differentiate the feeling candidate from the non-feeling one. We could have a feeling T3 and a zombie T3, both passing the TT, because the test is based only on doings. But I think Turing knew that already, and he thought we should ignore it – for the time being anyway, since no one has come remotely close to solving even the easy problem. I also got the sense that he thought: if we build a system that can do everything we do, why bother finding out whether it can feel? If it can do everything we can do, why can’t we just assume it feels?

    I keep thinking of psychokinesis, this fifth force, as an end-all explanation, but nothing has shown it to be promising so far. It seems like we are causally superfluous and inexplicable organisms – I can’t accept that!! There must be some sort of reason and explanation, and we just don’t currently have the tools..?
    Man… the hard problem is hard…

    ReplyDelete
  24. “Consider the symbol string "'zebra' = 'horse' + 'stripes'." To be able to understand that definition, you have to already know what "horse" and "stripes" mean. And that can't go on via just definitions all the way down ("stripes" = "horizontal" + "lines," etc.). Some words have to be grounded directly in our capacity to recognize, categorize, manipulate, name and describe the things in the world that the words denote. This goes beyond mere computation, which is just formal symbol manipulation, to sensorimotor dynamics, in other words, not just verbal capacity but robotic capacity.
    So I do not believe that Turing was a computationalist: he did not think that thinking was just computation. He was perfectly aware of the possibility that in order to be able to pass the verbal TT (only symbols in and symbols out) the candidate system would have to be a sensorimotor robot, capable of doing a lot more than the verbal TT tests directly, and drawing on those dynamic capacities in order to successfully pass the verbal TT.”

    Does the conclusion that Turing was not a computationalist come from the fact that his test was devised so that no purely computational system could pass it? That argument is based on examples such as “zebra = horse + stripes”: only a system that can interpret the meaning of each part would be able to respond appropriately to the sentence, because on a purely computational basis it is pure nonsense. Yet a system like Searle’s Chinese Room (a purely computational system) could retrieve the appropriate answer to any input without understanding it, so the test could theoretically still be passed by a purely computational system. How, then, can we conclude that Turing was not a computationalist about cognition?
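    Just to make the regress concrete, here is a toy sketch of it (my own illustration in Python, not anything from the article; the mini-dictionary and the GROUNDED set standing in for sensorimotor capacity are made up): chasing definitions through a purely symbolic dictionary only bottoms out if some words are grounded outside the dictionary.

```python
# Toy illustration (mine, not the article's) of the symbol grounding regress:
# every word is defined only in terms of other words, so tracing a definition
# either bottoms out in GROUNDED symbols (a stub for what a robot can do with
# the things the words denote) or it never bottoms out at all.

DEFINITIONS = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "four", "legs"],
    "stripes": ["horizontal", "lines"],
}
GROUNDED = {"animal", "four", "legs", "horizontal", "lines"}  # learned by doing, not by defining

def grounds(word, seen=()):
    """Return True if `word` ultimately reduces to grounded symbols."""
    if word in GROUNDED:
        return True
    if word in seen or word not in DEFINITIONS:
        return False  # circular or undefined: the regress never bottoms out
    return all(grounds(part, seen + (word,)) for part in DEFINITIONS[word])

print(grounds("zebra"))   # True: the definitions bottom out in grounded symbols
GROUNDED.clear()
print(grounds("zebra"))   # False: with nothing grounded, it's definitions all the way down
```

    Of course, the stub does none of the real work: the whole question is how the GROUNDED set gets filled in, and that is exactly what sensorimotor robotic capacity is supposed to supply.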

    ReplyDelete
  25. This article was a great way of taking a lot of the topics we have covered and showing how they, as an intertwined whole, relate to the hard problem of consciousness. I was pretty bummed to have missed this class (I was away in New York) because I think this would have been a great topic to debate with everyone! So please forgive me if I say anything that was already addressed in the classroom discussions of these past few sections.

    I have to concur with the general consensus here that a robot capable of passing T3 is most likely conscious. Consciousness, as we have described it, is essentially feeling, because it feels like something just to be conscious, and cognition is the capacity we have to do the things we can do. We have generally decided that cognition is most likely not ALL computation, and that by providing sensorimotor interaction we can ground symbols – but this still does not provide that qualitative "feeling" of understanding, though it does improve a robot's performance in the TT. It seems that, if a robot were truly to keep up TT performance for an entire lifetime, it might actually need to feel. Just as the robot would need sensorimotor interaction with its environment to ground symbols, it would need some sort of emotional capacity to deal with things like death, social nuances and subtleties, intense relationships, etc., and I don't see how it could manage all that without any feeling whatsoever. I still see the possibility of zombies, I suppose, but actually passing T3 is no small feat, and it seems more plausible that this could only occur if the robot is feeling.

    ReplyDelete
  26. See, I love talking about the hard problem. I love wondering why we feel, and I think it's very interesting to talk about how we feel. However, I find it a little bit circular to keep emphasizing the importance of WHY we feel. I watched a video of a Japanese robot that looked nearly identical to a human and could speak and respond coherently in Japanese. However, because she was a robot, it was assumed that she didn't feel. But we can just as easily assume that my brother doesn't feel. Just because my brother bleeds when I cut him (so I assume he's human) does not necessarily mean that he feels things or feels what it's like to understand anything. So when I cut open the Japanese robot and see a bunch of wires, I could just as easily say that the Japanese robot DOES feel and DOES know what it's like to feel like she understands, etc. So while it's interesting to discuss, I wonder what the advantages are of continuously trying to come to a conclusion as to WHY we feel, or to determine whether or not other people feel.

    ReplyDelete

  27. I found this article to be a good summary of the course so far. The one thing I had trouble with was the idea that sensorimotor dynamics are an alternative to computation. From what I understand, they are the interactions among the body, the environment, and one's perception. They are the bridge that connects computation and the external environment.

    However, I cannot fathom that sensorimotor dynamics are “peripheral.” I cannot imagine neurons doing anything other than computation. According to the author, categorization is the basis of cognition. This requires the detection of invariant features, which in turn requires the ability to abstract features away from a perceptual scene in the first place. Can abstraction really be done by anything other than computation? What would this process entail?

    ReplyDelete
    Replies
    1. If sensorimotor patterns are received as input, then finding their invariant features in order to categorize them correctly is not necessarily a computational function. It can be done by computation on the digitized input, or it can be done by parallel distributed nets operating on the sensory input. Those nets can be simulated by computation (strong Church/Turing Thesis, weak equivalence), but that would only matter if being really parallel and distributed somehow mattered, dynamically, for something or other.
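      For what it's worth, here is a minimal sketch of that idea (my own toy illustration in Python/NumPy, not from the reply; the "stripes" feature and all the numbers are invented): a single perceptron-style unit that learns, from labelled samples, which sensory feature is invariant for a category, instead of looking the category up in symbolic definitions. And, as the reply notes, the net here is itself being simulated by computation.

```python
# A toy categorizer (my own illustration): one perceptron-style unit learns
# which "sensory" feature is invariant for the category, from examples alone.
import numpy as np

rng = np.random.default_rng(0)

def make_sample():
    """One noisy 'sensory' input: feature 0 (say, 'has stripes') tracks the
    category; features 1 and 2 are irrelevant noise."""
    label = rng.integers(0, 2)                 # 1 = "zebra-like", 0 = "horse-like"
    features = rng.random(3)                   # noise on all features
    features[0] = label + 0.1 * rng.random()   # the invariant feature
    return features, label

w, b = np.zeros(3), 0.0                        # weights and bias, all zero to start
for _ in range(1000):                          # error-driven (supervised) learning
    x, y = make_sample()
    y_hat = 1 if w @ x + b > 0 else 0
    w += (y - y_hat) * x                       # perceptron update rule
    b += (y - y_hat)

print("learned weights:", w)                   # the weight on feature 0 dominates
```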

      Delete
  28. “The contribution of Descartes' celebrated "Cogito" is that I can be absolutely certain that I am cognizing when I am cognizing. I can doubt anything else, including what my cognizing seems to be telling me about the world, but I can't doubt that I'm cognizing when I'm cognizing.”

    I like the inclusion of Descartes in this article and how his statement “I think therefore I am” has had such an impact on cognitive philosophy. The statement brings up a lot of links between cognition and consciousness. Also, I remember reading a few comments in this blog (or previous blog posts) about unconscious cognition. How would that factor into this argument? Does unconscious cognition exist? It doesn’t seem like it existed for Descartes.

    Furthermore, the article summarizes and explains the Chinese Room Argument, cognition, computation, and the TT in a clear and concise way. A good quick read!

    ReplyDelete
  29. This reading was a good overall review of the main concepts discussed in class. However, there is a question that has yet to be answered. According to the CT thesis, just about anything can be simulated and approximated by computation. Following that logic, a T2 Turing Test should be "the right level." Stevan says that T3 robots should feel "because they are, in terms of performance capacity, indistinguishable from a human, so if this robot acts just as a human does and without prior knowledge that it is a robot, there should be no denying that it feels just as we believe other humans feel" (from Cristian's helpful explanation). In other words, T3 robots pass the TT because they are able to ground their symbols through sensorimotor (inductive/instructive) learning. But by the CT thesis, if a T2 system can simulate symbol grounding in a computational model, shouldn't T2 also be able to feel? Or shouldn't a simulation of grounding be able to elicit feelings?

    ReplyDelete
  30. Personally speaking, I think the hard problem might as well be called the “impossible problem.” I don’t think it will ever be possible to reverse-engineer the human ability to feel, mostly because of the other-minds problem. The ONLY hope of creating a mechanism that produces feeling would be if the feeling mechanism could also produce our doing capacity. But even that would be lacking, because we wouldn’t know whether it feels, and we wouldn’t be able to explain how or why.

    ReplyDelete
    Replies
    1. You agree with Turing that explaining doing (the easy problem) is the best we can hope to do. But this is not just because of the other-minds problem but because, once all doing is explained, feeling seems superfluous.

      Delete