Saturday 2 January 2016

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.


This is Turing's classic paper with every passage quoted and commented, to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended or only the email/pen-pal test; whether all candidates are eligible or only computers; and whether the criterion for passing is really total, lifelong equivalence and indistinguishability or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

60 comments:

  1. Hi everyone,
    A few years ago a group in the UK claimed to have programmed a computer that passed the Turing Test. They did this by claiming that the “individual” (the machine) with whom one is corresponding is a 13-year-old Ukrainian boy named Eugene. The age obviously limits the replies you would expect from a conversation, but it also importantly limits the kind of questions you would ask. Stipulating that he is a non-native English speaker seems to further weigh the odds in the machine’s favour. Furthermore, their criterion for having passed it involves some arbitrary statistical threshold that Turing initially rejects (but later endorses).

    You can read about it here: http://www.bbc.com/news/technology-27762088
    (I originally wanted to comment on the flaws of their design but our professor has already beaten us to it, here is the link if anyone is interested: http://www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-whether-turing-test-has-been-passed )
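    Just to make concrete the kind of trick I have in mind, here is a crude toy sketch in Python (purely my own illustration; the actual Eugene Goostman program has never been published, so none of these keywords or replies come from it) of the canned-persona deflection strategy such chatterbots are generally believed to rely on: every gap in the program's knowledge is excused in advance by the cover story of a 13-year-old non-native English speaker.

```python
import random

# Toy sketch of a persona-based deflection chatbot (hypothetical; not the
# real "Eugene Goostman" program). A handful of keyword-triggered replies
# plus canned evasions whose oddness the persona excuses in advance.

CANNED_EVASIONS = [
    "I am only 13, I do not know about such things.",
    "My English is not so good, can you ask more simple?",
    "In Odessa we do not talk about this. Do you like guinea pigs?",
]

KEYWORD_REPLIES = {
    "name": "My name is Eugene. What is yours?",
    "old": "I am thirteen years old.",
    "live": "I live in Odessa, it is in Ukraine.",
}

def reply(message):
    """Return a canned reply if a keyword matches, otherwise deflect."""
    text = message.lower()
    for keyword, answer in KEYWORD_REPLIES.items():
        if keyword in text:
            return answer
    return random.choice(CANNED_EVASIONS)

print(reply("How old are you?"))
print(reply("What do you think of Kant's categorical imperative?"))  # deflects
```

    A script along these lines can fool a few judges for five minutes precisely because the persona excuses every failure, which is exactly why the statistical-threshold criterion seems so hollow.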

    While I believe that it’s unequivocally clear that the group’s claim of having passed the Turing Test is unfounded, I take slight issue with Harnad (2008)’s view that we must consider passing the Turing Test as being equivalent to T3, total indistinguishability in robotic (sensorimotor) performance capacity. I agree that it would obviously be far more impressive to pass it at the T3 level, but wouldn’t it be more worthwhile to focus on strictly cognitive functions in the short term? If the goal is to “explain thinking capacity, not merely to duplicate it” (Harnad, 2008), is it useful to spend time hacking away at functions or elements which we do not consider to be strictly cognitive (walking, appearance, etc.)? It seems to me that only once T2 has been passed should we begin to focus on T3 (which is further limited by advances in engineering, not just computer science).

    It’s interesting to note that Turing predicted his test could be passed within 50 years, yet more than 65 years later we are not close at all. However, if we are seeking to explain thinking, is the Turing Test a worthwhile endeavor at all? Simulated thinking is not thinking, just as simulated flying is not flying, etc.

    Replies
    1. About "Eugene," see "Turing Testing and the Game of Life" and here.

      It's just "Stevan Says," but, because of the symbol grounding problem, I do not believe that a system could pass T2 (the verbal-only version of the Turing Test) if it could not already pass T3 (the robotic version) because T2 has to be grounded in T3.

      We'll talk a lot more about that in this and future weeks.

  2. T: “the interrogator cannot demand practical demonstrations”

    SH: “This would definitely be a fatal flaw in the Turing Test if Turing had meant it to exclude T3 -- but I doubt he meant that. He was just arguing that it is performance capacity that is decisive (for the empirical problem that future cognitive science would eventually address), not something else that might depend on irrelevant features of structure or appearance. He merely used verbal performance as his intuition-priming example, without meaning to imply that all "thinking" is verbal and only verbal performance capacity is relevant.”

    From what I understand up to this point, the two major flaws pointed out in Turing’s paper are:
    1. His use of the term “imitation game,” which causes misunderstanding
    2. His exclusion of sensorimotor behaviour from the test (even though this is included in the hierarchy of tests as T3)
    I would like to focus on how Turing’s Test does not account for T3.

    Turing did not mean to imply that all thinking is verbal, but was trying to exclude physical appearance from his test of performance capacity. This is a problem because, by comparing only verbal responses, Turing excludes responses to commands like, “go and see and tell me if the moon is visible tonight,” along with any other commands that require physical movement.

    Could this problem be solved if Turing had another separate test based on performance capabilities? The interrogator would send a task command to the human and the robot, and when the task is completed a third person would inform the interrogator. By adding a third person, we exclude the bias of physical appearance but can include sensorimotor capabilities.

    Replies
    1. Actually, I think the appearance problem is less important in the 21st century. We're not that skeptical about robots, because of movies: we already think the movie robots pass the TT. But of course real robots are nowhere near all that. It's not because they look like real people that we think Renuka and Riona are conscious: it's because of what they say and do.

      So, no, I don't think the robotic T3 needs to be done out of sight. I think if there was a more robotty looking person in the course, and they talked to us and sky-wrote just like the rest of us, we still wouldn't want to kick them...


  3. “Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! That would reduce the Turing Test to the Gallup Poll that Turing rightly rejected in raising the question of what "thinking" is in the first place! No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989)”

    I’m confused as to how you are proposing to measure something as “totally indistinguishable” other than by people mistaking it for a human as often as not? That, to me, satisfies the condition. Furthermore, adding “to anyone and everyone, for a lifetime” seems unnecessary: one would assume that as long as the test is run it would be run on more and more people, but the total population would never be reached; and as for the lifetime, the computer would have to pass regardless of how society changes, since the test itself and its goals do not change.

    “nonverbal abilities”

    If you could clarify what you mean by this, that would be appreciated. Aside from speech and physical abilities (which were classified under a different term), I can only imagine this refers to acts of thinking, which is the topic of discussion, so using a different term seems odd to me.

    “Exactly as a computer-simulated airplane cannot really do what a plane does (i.e., fly in the real-world), a computer-simulated robot cannot really do what a real robot does (act in the real-world) -- hence there is no reason to believe it is really thinking either. A real robot may not really be thinking either, but that does require invoking the other-minds problem, whereas the virtual robot is already disqualified for exactly the same reason as the virtual plane: both fail to meet the TT criterion itself, which is real performance capacity, not merely something formally equivalent to it!)”

    My understanding is that this paragraph goes against computationalism, because you’re saying that a computer cannot think (because it can only compute) but a robot with dynamic functions and parts may be able to, because it would not be relying on just computation. Therefore (under this belief) a computer could never truly pass the Turing Test because it could never think, but a robot might be able to since it exists in the real world. But then the last sentence there seems to say that it still would not be real performance but merely something formally equivalent? Clarification on this last part would help my understanding here.

    Replies
    1. It's not that you need to conduct the TT for a lifetime. It's that the robot has to have the capacity to do it for a lifetime -- not just to fool some people for 10 minutes. There's a big difference.

      Nonverbal ability is everything you are able to do besides speaking/writing and understanding/reading, including everything a chimpanzee can do (that we can do too). And there is both verbal and nonverbal thinking (though they can't be tested by the TT.)

      As we'll see next week, there are (at least) three versions or levels of the Turing Test: T2 is Turing's original one, verbal capacity indistinguishable from our own; T3 is robotic (including verbal) capacity indistinguishable from our own and T4 is indistinguishability in robotic and verbal capacity as well as internal neural function. So a computer alone cannot pass T3.

    2. “I’m confused as to how you are proposing to measure something as “totally indistinguishable” other than by people mistaking it for a human as often as not? That, to me, satisfies the condition.”

      I think I can take a swing at this one. The issue is that “as often as not” does not satisfy the condition of the Turing Test, as it is only a statistical likelihood and not a guaranteed outcome. “As often as not” means that about half the people can accurately discern whether the robot is machine or human. To pass the test, this quality needs to be completely indiscernible to every single person. Not at least 51%, but a full 100%. As Dr. Harnad puts it:

      “The candidate must really have the generic performance capacity of a real human being -- capacity that is totally indistinguishable from that of a real human being to any real human being (for a lifetime, if need be!). No tricks: real performance capacity.” (page 11)

      Furthermore, I don’t think Dr. Harnad is actually proposing how to measure something as “totally indistinguishable,” at least empirically. Rather, he just seems to be disagreeing with Turing’s suggestion that “as often as not” satisfies the Turing Test.

  4. “But an email interaction with a virtual robot in a virtual world would be T2, not T3.”

    We have been discussing simulation and reality, and the idea that (for example) a simulated airplane flying is simply that – a simulation. It is in no way doing what a real, existing airplane does when it is flying. The quote above seems to suggest that a virtual robot responding to an email in a virtual world would be T2. But from my understanding this cannot be total indistinguishability: the robot is virtual and the world it exists in is also virtual, so its verbalizations are not “real.” However, later in the text we read that “the question is whether simulations alone can give the T2 candidate the capacity to verbalize and converse about the real world,” giving me the impression that we have not yet determined whether simulated worlds can create “real” conversation or verbal performance. Perhaps this interpretation is misguided, as I am using virtual and simulated synonymously.

  5. “A variant of Lady Lovelace's objection states that a machine can "never do anything really new." … I do not expect this reply to silence my critic. He will probably say that h [sic] surprises are due to some creative mental act on my part, and reflect no credit on the machine. This leads us back to the argument from consciousness, and far from the idea of surprise. It is a line of argument we must consider closed” (Turing, page 14-15)

    I found this reasoning to be a prime example of Turing’s substantially weak arguments. He initially presents the objection that a computer can only “do whatever we know how to order it to perform” (page 14), and then refutes this argument by discussing how his computer surprises him sometimes. What?!? Are you kidding me?!? Just because you are too lazy to logically follow the consequences of your code does not imply the computer is “originating” the results. You are! The computer is simply following your instructions, even if you don’t know where they lead. He then concedes this isn’t a satisfactory response to the criticism, but moves on anyway. “It is a line of argument we must consider closed.” What?!? NO! I’m still not satisfied here. You can’t just move on like that. Give me some substance! You can’t just bring up some loosely related and glaringly incorrect anecdote and expect me to buy your argument.

    Because I found this argument so frustrating, it was so nice to see Dr. Harnad completely rip it to shreds:

    “This is one of the many Granny objections. The correct reply is that (i) all causal systems are describable by formal rules (this is the equivalent of the Church/Turing Thesis), including ourselves; (ii) we know from complexity theory as well as statistical mechanics that the fact that a system's performance is governed by rules does not mean we can predict everything it does; (iii) it is not clear that anyone or anything has "originated" anything new since the Big Bang.” (Harnad, page 14)

    Clear, concise, and slightly comical. This is why you are wrong, and here is the right answer. No further elaboration needed. Just beating him over the head with the weakness of his own argument.
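    Harnad's point (ii) is easy to see with a toy example (this is just my own illustration in Python, not anything from either paper): Wolfram's Rule 30 cellular automaton. Every step is fixed by a trivial formal rule, yet there is no known shortcut for predicting the centre column other than running the rule itself, so being governed by rules clearly does not mean being predictable.

```python
# Illustrative sketch (my own, hypothetical): a system whose every step is
# fixed by a simple formal rule, yet whose behaviour looks unpredictable.
# Wolfram's "Rule 30": each cell's next state depends only on itself and its
# two neighbours, via a fixed 8-entry lookup table.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 30 once (cells beyond the edges are treated as 0)."""
    padded = [0] + cells + [0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Start from a single "on" cell and watch the centre column: the rule is
# completely known, but the column's values look effectively random.
width = 61
row = [0] * width
row[width // 2] = 1
centre = []
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    centre.append(row[width // 2])
    row = step(row)
print("centre column:", centre)
```

    We wrote the rule down in full, and yet the only way to find out what the centre column does is to run it; that is the (weak) sense in which a rule-governed system can still surprise its own programmer.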

  6. Additionally, there were a few powerful lines that really cemented the idea of what a T3 robot actually is:

    “But “almost” does not fit the Turing criterion of identical performance capacity.” (Harnad, page 5)
    and
    “The candidate must really have the generic performance capacity of a real human being -- capacity that is totally indistinguishable from that of a real human being to any real human being (for a lifetime, if need be!). No tricks: real performance capacity.” (Harnad, page 11)

    They also helped clarify why simulations are not completely equivalent to reality, even if they are computationally equivalent. In Dr. Harnad’s words: “that simulation [of the T3 robot] would simply not be a T3 robot, any more than its virtual world would be the real world.” (page 15).

    However, there is one aspect of simulations that still confuses me. Back to the waterfall example from Monday’s class: if I am experiencing a simulation of a waterfall, via some form of highly advanced virtual reality, is the feeling I experience when viewing the waterfall any different from the feeling I experience when viewing the real waterfall? What if I am completely unaware of the simulation and believe it to be reality? I’m inclined to say YES!, the feelings are different, as we know computation does not equal cognition (Stevan Says). So it seems reasonable that a real waterfall would “feel” different from its computationally equivalent counterpart. But is this still true if we are totally unaware of the simulation? Do real objects possess some sort of “aura” that distinguishes them from a simulation?

  7. Why should there be an emphasis on sensorimotor performance in the imitation game in order to prove the question Turing is asking? You emphasize that what is of interest is what thinkers can do, so that it wouldn’t just be simulation but “real” performance. However, in this case, one can also argue that the sensorimotor performance by dynamic systems can be of only the physical aspect, without the mental aspect of it, so I don’t understand how passing T3 has proven anything more than T2, rather than just sensorimotor performance in relation to thinking.
    The argument that you give is the flaw in T2 to real-world contingency, but wouldn’t a dynamic system be able to just modify itself to be able to answer and keep up with whatever is asked? It would have the performance capacity to do so. Also, wouldn’t even humans not have a response to every possible real-world contingency?

    What limitations should be placed as rules in the game if the level Turing intended (or should have intended) is T3 and not T2?

    “Thinking is as thinking does”
    Can you define “is” since you underlined it?

    Replies
    1. Hi Oliver – I’m going to attempt to take a stab at some of your objections to T3.

      Why should there be an emphasis on sensorimotor performance in the imitation game in order to prove the question Turing is asking?

      Firstly, I wouldn’t exactly say that there is an “emphasis” on sensorimotor abilities, so much as an acknowledgement that these capabilities are (Stevan Says) fundamental to the verbal performance capacities that Turing has (somewhat arbitrarily) chosen as the critical level for the test. They are not what is being tested – we are still only interested in performance capacity; however, it seems evident that they would be essential to success on measures of performance capacity, even one as simple as verbal performance.

      …one can also argue that the sensorimotor performance by dynamic systems can be of only the physical aspect, without the mental aspect of it, so I don’t understand how passing T3 has proven anything more than T2 rather than just sensorimotor performance in relation to thinking.

      I may well be misunderstanding what you are saying here, but what I believe you’re arguing is that sensorimotor abilities are irrelevant on the basis that, in the TT, we are exclusively interested in the “intellectual capacities of a man” (from Turing’s paper), and not the physical ones. However, the purpose of the robot is not so that the interrogator can demand demonstrations of physical abilities; on the contrary, (Stevan Says) the sensorimotor capacities of the robot are simply essential to ever passing the test at all, even just at a verbal level, beyond ‘fooling some people some of the time’. Dr. Harnad notes a number of examples wherein this ability to interact dynamically with the world would prove critical, namely the insertion of analog photos into the emails, or ignorance of any events occurring at the same time as the test. The analog photos are, I think, a really simple and strong example, as a digital computer is defined by Turing himself as a discrete-state machine, which would be unable to ‘view’ (i.e., process) the image without an analog-to-digital converter; in my opinion, that converter could be considered part of a hypothetical robot’s sensorimotor capacities.
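      To spell out what that converter would have to do, here is a minimal sketch of my own in Python (purely hypothetical, just illustrating sampling and quantization; it is not taken from the readings or from any actual robot design):

```python
import math

# Hypothetical sketch: what an analog-to-digital converter does before a
# discrete-state machine can touch an "analog" input at all. A continuous
# brightness signal (a smooth sine, standing in for one scan line of a
# photo) is sampled at discrete positions and each sample is quantized to
# one of 8 levels (3 bits).

def brightness(t):
    """Continuous 'analog' brightness along the scan line at position t in [0, 1)."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

def digitize(signal, n_samples=16, levels=8):
    """Sample the continuous signal and quantize each sample to an integer level."""
    samples = [signal(i / n_samples) for i in range(n_samples)]
    return [min(levels - 1, int(s * levels)) for s in samples]

digital_line = digitize(brightness)
print(digital_line)  # only these discrete symbols ever reach the computer
```

      Everything downstream of that step is discrete symbols, which is all a discrete-state machine ever manipulates; the continuous light falling on the photograph never enters it.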

    2. Good replies to OH by AB:

      What cognition "is" is whatever is being done by the mechanism in our head that generates our capacity to do all the things we can do.

      In T2, there is no way to test whether the candidate has the capacity to connect the name to the thing it refers to. That's only possible in T3, and T3 cannot be passed by just computation. It is sensorimotor categorization that connects symbols to their referents.

  8. In one of the last paragraphs of the paper, Stevan Harnad criticizes Turing’s claim that the only way to know that a machine is thinking is to be the machine itself. Turing called this the solipsist point of view. Harnad argues that it is not solipsism; it is the other-minds problem. Harnad also states that “there is no one else we can know has a mind but our own private selves, yet we are not worried about the minds of our fellow-human beings, because they behave just like us.” I do not really understand why this is not relevant to the discussion of Turing’s thesis, as Harnad states, because it puts more weight on the question of why we assume that machines cannot think. We do not know that other people think, but we assume it because they act like we do. If robots acted in the same way that we do, why could we not also assume that they think?

  9. I tend to agree with Harnad (2008) when he says that the original version of the Turing Test (T2) is limited. It almost seems obvious now that the Turing Test would have to be done with a robot and not just with a computer interacting via email. In order for a candidate to pass the T2 version of the TT, one would have to be able to interact verbally with it for a lifetime without ever suspecting that it’s not another person. However, T2 is not everything we can do; it’s just verbal interaction, as with a pen pal.

    What I understood was that T2 can basically interact with any incoming input as long as it’s been programmed to manipulate that given type of symbol, that is, digital input/symbols – which is why it deals well with the pen-pal situation (so far so good). But then when Harnad brings in the example of T2 being unable to comment on family photos (p. 8), my understanding is that T2 is unable to do so because it is not programmed to deal with that type of input – analog input. Additionally, T2 runs into problems with the “kinds of verbal exchanges that draw heavily on sensorimotor experience if the candidate is a digital computer only, regardless of how rich a data-base it is given in advance” (p. 9). My understanding is that even if the system has a large set of programmed knowledge, due to its lack of sensorimotor experience it is unable to grasp and interact with analog input. So in the end, since T2 doesn’t have a sensorimotor system, it doesn’t have the capacity to process pictures, because they are input beyond its domain?

    So it seems that T3 would actually be a better version of the TT. We are much more than just pen-pal interaction: we move, we interact with the world of objects, we recognize objects and establish referents – we are not just a computer. T2 lacks a sensorimotor extension and therefore doesn’t accurately capture cognition. Yet again, T3 is not the “be all end all” either. I don’t think cognition is all computation; so for a serious candidate for a T3 model, maybe it would have to include computation but also some other element that captures sensorimotor capacities?

  10. I have trouble grasping the following concept about the Turing Test. The human interrogator has to determine who is the machine versus the human, thus whether the machine is or is not intelligent. But...

    1. Why does the interrogator's judgement of the machine's intelligence have to be all (perfect) or none? What comes to mind is individuals with neurological disorders like autism, who might be intellectually impaired on many tasks, but profoundly talented in other facets. Say the machine reflects this type of intelligence. Would the human interrogator deem the machine unintelligent because it is imperfect? Or what about individuals who simply have a lower than normal IQ? They might still be "totally indistinguishable in physical structure/function" from the human. Would this machine still not pass because it does not have perfect intelligence?

    2. What if the machine surpasses our own intelligence? How can a human judge if the machine is intelligent based on his own conceived definition?

    Replies
    1. I had similar reactions when I was reading about the Turing Test. One of the fascinating things about the human mind is how people can be so unique and different in the way they think about things, which influences their behavioral reactions. No-one can be said to have the ''perfect mind'', so what ideal is the machine aspiring to? What kind of behavior is universally human that the interrogator can use as criteria to differentiate a person and a machine? Surely different interrogators would have different answers to this question, so how can such a subjective test be a reliable way of evaluating performance capacity?

    2. In the context of the Turing Test, what is being tested is the 'generic abilities', but I think the underlying question at stake here is again what is meant by 'cognitive capacities'. Also, IQ is not deemed a marker for 'intelligence' in this context; Harnad has posted elsewhere that cognition and intelligence are interchangeable.

      To attempt to illustrate this: imagine you have been in contact through email with your friend Ali (or whichever other name you prefer) since childhood (it doesn't need to be since childhood, but I just want you to imagine you've been in contact with her for a long while) and you talk with her about everything; she's essentially your best friend, you share all your experiences with her and value her opinion (and I suppose she shares her experiences and thoughts with you too). According to what you read in your email, she can get angry and sad at what you are telling her, she can find you incredibly funny, and she can tell you when she thinks you're saying something insensitive, etc. Now suppose this exchange goes on all your life but you never have any contact with Ali other than this email exchange (and suppose also that you don't find this weirdly creepy or suspicious but totally normal). Suppose also that it turns out Ali is a computer program running on some computer in some room (but you are never made aware of this). In this case, Ali would be deemed to pass the Turing Test at the T2 level, because her verbal behavioural output would be indistinguishable from that of 'standard' humans.

      I think one could perhaps assume that individuals with severe neurological disorders (such as a severe case of autism) may not themselves pass the Turing Test if the judge is a socially standard human!

  11. I have trouble understanding some of the points brought up in the paper.
    1. For the Imitation Game, “Turing's criteria, as we all know by now, will turn out to be two (though they are often confused or conflated): (1) Do the two candidates have identical performance capacity? (2) Is there any way we can distinguish them, based only on their performance capacity, so as to be able to detect that one is a thinking human being and the other is just a machine?” (Harnad, 2008)
    I don’t get what the difference between the two criteria is. If both entities playing the game have exactly identical behaviours (i.e., their performance capacity is indistinguishable), then what could allow the second criterion not to be met? How could our “mind-reading capacities” allow us to distinguish between the two? It seems to me that if the first criterion is met, then the second one must be as well.

    2. In discussing the hierarchy of Turing Tests, I'm not sure I understand the difference between T4 and T5.
    “T4: Total indistinguishability in external performance capacity as well as in internal structure/function. […]
    T5: Total indistinguishability in physical structure/function.” (Harnad, 2008)
    If I understand things correctly, the entity capable of passing T5 would necessarily be a perfect replica of a human being; the only difference is that it was created and not born. The difference with T4 is that in T4 the entity only needs to be identical in behaviour to a human being and have a similar “brain” structure, but it would not have to be biological. So a robot with an electronic brain that perfectly mimics a human brain could pass T4, but not T5?

    Replies
    1. I was also confused with the first point you are making. I find it helpful to think about what performance capacity really is – as described in the paper, an empirically observable phenomenon. So I think that you would look at each person in the "game" and their individual performance to assess their performance capacity, first just to see if a machine can simulate human expression at all. But then the second part, which is more interesting, is whether or not the performance capacity is good enough to be directly compared with another actor's performance capacity.
      I think it is helpful to think back to Turing's paper when we are talking about how the first condition could be fulfilled but not the second one. For example, if you asked A and B to give you a limerick based on a memory, and B is a machine, B might really convincingly say they are terrible at poems (like in the example in Turing's paper). So maybe if enough instances of this sort of behavior happened, the performance capacity would still be there (they are emulating human-like behavior), but when compared against another human you would eventually realize they are not.

      just a thought on it!

    2. Also, I think that if a robot had a brain that perfectly mimicked a human brain, i.e. "down to the last molecule," it would be able to pass the T5 test, right? It seems like T4 is when we go beyond simply passing the Turing Test, or being able to imitate human behavior, but I am also confused as to the exact distinction between T4 and T5... I don't really understand either why blushing could be T3, because it seems like it is something that only happens when there is a spontaneous emotional reaction?


      I agree with Prof Harnad that the “line between the physical and intellectual capacities of a man” has not been drawn simply by saying we can distinguish things with the (simple) Turing Test. I think it is helpful to use a specific Turing Test depending on what type of cognition we are looking at or what kind of machine we are dealing with... I wonder if this would help distinguish between higher- and lower-level processes within cognition (because if there were one part of cognition on which it was easier to distinguish a robot in the Turing Test, it would probably mean that this part of our cognition is more complex and not easily imitated).
      Like for example, an easy listening task vs. a dichotic listening task.

  12. "The T5 candidate must be indistinguishable from other human beings right down to the last molecule".
    I'm glad this part is included here, because part of my doubt expressed in the other skywriting (about how building a robot that could do what we can do wouldn't show us how we do it) revolved around the fact that we use very complex neuronal networks and neurotransmitters. We might not know exactly how we cognize, but we know enough to understand that neural nets and neuropeptides are most probably involved. Thus, I couldn't understand how a robot, as able as it may be to imitate everything that we can do, could be working in the same way as us without having these neuropeptides. I have more faith that if we could theoretically build a robot that was indistinguishable from us right down to the last molecule, it would have a better shot of giving us insight into how we work, though this would likely be nearly impossible to create. Turing seems to reject T5 as if having the same molecules as humans were superficial, reflective only of how they look: "we do not wish to penalize the machine for its inability to shine in beauty pageants". But the real reason we would want all of the molecules to be the same is not so that the robot looks like a human, but so that all the functions it can carry out are built and driven by the very same underlying components as ours.

    Replies
    1. I agree with you that part of our ability to grasp how a robot could have a mind and could cognize the way we cognize lies in the way we set up the comparison of human vs. robot (which is what makes it so tough to conceptualize). However, in my interpretation, since the other-minds problem states that “there is no one else we can know has a mind but our own private selves,” this means that T3 robots could be cognizing/feeling just like we do without total indistinguishability in physical appearance/structure (but we will never truly know if this is the case). That being said, I think that we should aim for level T4 (total indistinguishability in external performance capacity as well as in internal structure/function) if we are to fully understand the ‘how’ of human cognition. As thinking and cognition are internal processes, I feel that this is the necessary level. However, as Harnad said, “there also may be a lot of details of brain function that are irrelevant to being able to cognize like vegetative function,” but it is worthwhile to cast a wider net in order not to miss the key ideas of cognition.

  13. Reading about the Turing Test and the idea of “intelligence” prompts me to ponder the following: “If a machine passes the Turing Test, meaning that it is ‘intelligent’, then it has surpassed human intelligence. If a machine can surpass human intelligence, would that be the Technological Singularity?” Many, including Ulam, von Neumann, Vinge and others, believe that this will happen around the year 2040. Everyone has a different prediction of when this will occur, but the median value is 2040. The technological singularity is the event when machine intelligence will surpass that of humans, causing intelligence to become increasingly non-biological and far more powerful. It is strongly believed by some, and it seems crazy to think that it could happen. There are others who believe it could happen but that we need to be careful and cautious about it. There must be limitations so that it will not pose a threat to human existence when and if it does occur. Elon Musk, for example, believes artificial intelligence could be the biggest threat to human existence. You can read more about his opinion in the following article: http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat.
    To conclude, do we really want to create something that can pass the Turing Test? Do you think this would result in more good or more harm to humans?

    Replies
    1. Of course I agree that, like everything, intelligence should be regulated to avoid threatening human existence. But hasn't that already happened? You do not need a machine to pass the TT for it to be deemed threatening to the human race. There exist many machines (which are unable to pass the Turing Test) whose sole purpose is to eliminate humans (i.e. drones used in the army).

      But, for the sake of your argument, let's assume that in order for a machine to be a threat to humans, it has to be able to exceed human intelligence (pass the TT). This makes me question what is meant by ‘intelligent'. In order for a machine to be considered intelligent, does it have to do tasks on its own (think intelligently)? Or must it be able to replicate human intelligence, as a machine does during the TT? In other words, the TT only reflects performance that resembles intelligence, rather than the capacity to think intelligently. In the case of the TT, if a computer is able to be indistinguishable from a human, then it is considered intelligent.

      So by 2040 would these machines be imitating intelligence or would they actually be thinking intelligently?

  14. I'm having some trouble with the "all or nothing" nature of the test that has been devised. With the Turing Test, it seems that as soon as a machine passes a threshold (the threshold of being able to do all the things that we do, indiscernibly from us) it is said to be cognizing. But what about a machine that can do all the things that a chimpanzee can do? Or a machine that can do everything a jellyfish can do?

    Our definition of cognition includes "everything going on inside an organism that allows it to do the things that it does", or more simply "cognition is as cognition does". But how do we judge this "doing"? It seems that we agree that many organisms are cognizing (albeit with less complexity) and are "doing" (with less complexity) so does cognition exist on a spectrum? As we look at simpler and simpler organisms, is there a point at which the organism's "doing" will no longer fit our criteria for "doing"? When do we say that their internal mechanisms for "doing" don't constitute a form of cognition? Is it determined by the presence or absence of a nervous system? Because that distinction is fuzzy. Or maybe there is no "spectrum" of cognition, but rather distinct "forms" of cognition, determined by the organism that is carrying it out. What then, is our criteria for picking and choosing the organisms whose "doing" is sufficiently complex to require cognition to carry out?

    With the Turing Test, it seems we have drawn an arbitrary line in the sand... If a machine can do everything we can do, then it is cognizing, and if it can't, then it is not cognizing. But if cognition exists on a spectrum of complexity, then how do we know that the machine isn't cognizing in order to carry out the subset of our "doing" that it can "do"? How can we be sure that on the way to creating a machine that cognizes as we do, we haven't created a cognizing machine that can do some of the things that we can do? Or suppose that there is no spectrum of cognition, but rather different forms of cognition. How can we be certain that a machine we have created, which cannot do all the things that we do, does not have its own form of cognition?

    And this raises another issue. If we can't settle on criteria for what sort of "doing" necessitates cognition, then what is the lower limit? As I mentioned before, how simple does this "doing" have to be before we stop calling the underlying internal mechanisms cognition? Is a machine which passes the T3 level of the Turing Test for a chimpanzee, a machine with cognition? What about a T3 machine which can do everything a jellyfish can do? Is it cognizing? What about a T3 paramecium? And if we don't set a lower limit, then wouldn't machines with toy capacities be cognizing? Since they're "doing" something...?

    To further complicate matters, there is the issue of "feelings". We seem to agree that a single cell is not feeling, but that a vast number of cells arranged in a specific way is feeling... So where do we draw the line for that? It's been conjectured in some skywriting that a T3 machine which can do what we can do is certainly cognizing and probably feeling as well. But what about the T3 chimpanzee, jellyfish and paramecium? These are all T3 level machines, but I think we'd all intuitively agree that the T3 paramecium is not feeling. Why? At what level of complexity do organisms suddenly become "feeling" and how, if at all, does this relate to the complexity of their cognition?

  15. “By the same token, we have no more or less reason to worry about the minds of anything else that behaves just like us -- so much so that we can't tell them apart from other human beings. Nor is it relevant what stuff they are made out of, since our successful mind-reading of other human beings has nothing to do with what stuff they are made out of either. It is based only on what they do” (Harnad).

    I take issue with this comment because I believe there is a very specific reason why we think other humans and animals think like we ourselves do. One component is that the wealth of information we have on evolution and the biological makeup of humans and other animals as species suggests that all of us have the same “machinery” inside of us. Because the processes leading up to our cognitive systems were the same, the biological systems themselves are clearly similar, and because the outward behavior of others is similar to ours, we can make the deduction that other humans think and feel like we do.

    Machines do not have this shared process leading to the cognitive system, though, and would only satisfy the condition of the same outward behavior (performance capacity). This is shakier ground on which to assume that the machine thinks just because we can’t tell its performance capacity apart from a human’s. As Searle has shown, it is possible to fool a human without necessarily having awareness or understanding of the output behavior. I even believe that Searle portrayed an example of a sensorimotor computational machine that could fool a human while still lacking awareness and understanding of what is occurring.

    One could argue that our assumption that other humans think as we do because we have the same evolutionary heritage and same biological systems could be false. However, for example through brain lesion studies, we obtain similar cognitive effects across humans, so I find this highly unlikely. In conclusion, I think “doing” alone is not sufficient to show that machines think.

  16. “But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world”

    I am struggling with the concept of a machine having free will and/or autonomy. I suppose you could say this to a certain extent about humans too, but even among an array of programmed choices/decisions can it really be said that the machine is making choices for itself? Humans find and make meaning in life (or don’t) but machines are assigned meaning. If this machine is created to pass the TT, is that not its only end goal? Talking about this in terms of goals even seems bizarre; I still feel like motivation is needed for goals whereas machines have instructions and tasks. Even within the T3, human-like autonomy is hard to conceptualize. Is this a meaningful distinction?

    Moreover, I also find myself somewhat fixated on the emphasis on the notion that the machine must pass the TT all the time, every time, forever, and not for most of the people most of the time. Is the interrogator in this situation aware of the TT itself? Are they sitting at their computer analyzing every email with the awareness that there’s a chance that their pen pal could be a machine? I’m sure if I analyzed emails from friends and family there would be instances where I could make a case that the email in question seemed nonsensical or “robotic.” What is the threshold of “humanness” that the interrogator for the TT is looking for? The example of the machine failing to comment on analog family photos, or not being in touch with current events: are human pen pals necessarily guaranteed to do these things? Maybe this is irrelevant, but I think the logistics of the TT are important to discuss if we are making distinctions between T2 and T3 and what it really means to be a "human-passing" machine.

    Replies
    1. Your comment reminds me of this really good passage from "Notes from the Underground":

      “I was just going to say that the devil only knows what choice depends on [..] Indeed, if there really is some day discovered a formula for all our desires and caprices—that is, an explanation of what they depend upon, by what laws they arise, how they develop, what they are aiming at in one case and in another and so on, that is a real mathematical formula—then, most likely, man will at once cease to feel desire, indeed, he will be certain to. For who would want to choose by rule? […] ‘Our choice is usually mistaken from a false view of our advantage. We sometimes choose absolute nonsense because in our foolishness we see in that nonsense the easiest means for attaining a supposed advantage. But when all that is explained and worked out on paper (which is perfectly possible, for it is contemptible and senseless to suppose that some laws of nature man will never understand), then certainly so-called desires will no longer exist. For if a desire should come into conflict with reason we shall then reason and not desire, because it will be impossible retaining our reason to be SENSELESS in our desires, and in that way knowingly act against reason and desire to injure ourselves. And as all choice and reasoning can be really calculated—because there will some day be discovered the laws of our so-called free will—so, joking apart, there may one day be something like a table constructed of them, so that we really shall choose in accordance with it. […]
      Yes, but here I come to a stop! Gentlemen, you must excuse me for being over-philosophical; it’s the result of forty years underground! Allow me to indulge my fancy. You see, gentlemen, reason is an excellent thing, there’s no disputing that, but reason is nothing but reason and satisfies only the rational side of man’s nature, while will is a manifestation of the whole life, that is, of the whole human life including reason and all the impulses. And although our life, in this manifestation of it, is often worthless, yet it is life and not simply extracting square roots. Here I, for instance, quite naturally want to live, in order to satisfy all my capacities for life, and not simply my capacity for reasoning, that is, not simply one twentieth of my capacity for life. What does reason know? Reason only knows what it has succeeded in learning (some things, perhaps, it will never learn; this is a poor comfort, but why not say so frankly?) and human nature acts as a whole, with everything that is in it, consciously or unconsciously, and, even if it goes wrong, it lives.
      I suspect, gentlemen, that you are looking at me with compassion; you tell me again that an enlightened and developed man, such, in short, as the future man will be, cannot consciously desire anything disadvantageous to himself, that that can be proved mathematically. I thoroughly agree, it can—by mathematics. But I repeat for the hundredth time, there is one case, one only, when man may consciously, purposely, desire what is injurious to himself, what is stupid, very stupid—simply in order to have the right to desire for himself even what is very stupid and not to be bound by an obligation to desire only what is sensible. Of course, this very stupid thing, this caprice of ours, may be in reality, gentlemen, more advantageous for us than anything else on earth, especially in certain cases. And in particular it may be more advantageous than any advantage even when it does us obvious harm, and contradicts the soundest conclusions of our reason concerning our advantage—for in any circumstances it preserves for us what is most precious and most important—that is, our personality, our individuality.”

  17. I have some questions/remarks:

    1. "No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime"
    I think this is important. Because to me, at least, it is very much possible that at least 1 person has or will or does exist who can reliably distinguish a robot from a human. 1 person out of (currently) 8 bn isn't asking for much. I don't think they'd be able to say why - if they had a formalized process, then it could be incorporated as a rule into the robot's programming - but they'd just have a gut feeling or something. Is it really terribly inconceivable?

    2. "It is only generic human capacities that are at issue, not those of any specific individual"
    I'm not sure why this is the case. Shouldn't we look at the perfection of thinking, i.e. thinking at its highest? I, at least, would be way more impressed by a computer reliably imitating Bach than by one imitating me playing the piano or composing something. (I'd also be way more surprised if you couldn't find at least one person in the entire history of mankind who could distinguish a computer's compositions from Bach's. I know they've made such programs, though, so who knows. The point is that imitating Bach and imitating me is definitely not the same thing.)

    3. "What thinkers can do is captured by the TT."
    I'm still confused by this. If Hawking had no way to communicate, surely he would still be able to think? And, at least for me, the TT shows that you can have similar linguistic performance without thinking. So I'm not convinced that what thinking can do is captured by the TT.

  18. “"Think" will never be defined by Turing at all”

    This constantly bothers me: how “think” is not properly defined and is judged on the grounds of human imitation. Is thinking imitating humans?

    ——————

    “We do not wish to penalise the machine for its inability to shine in beauty competitions”

    “nor to penalise a man for losing in a race against an aeroplane”.

    “Most of us could not beat Deep Blue at chess, nor even attain ordinary grandmaster level”

    Here we acknowledge that machines and computers are different.

    Humans go through ’human experiences’ such as birth, puberty, emotions, pain and these are the experiences which shape/influence human thinking. So far, there is no way a computer can experience these.

    Similarly, there is no way humans can perform certain computer tasks as we have limited biological resources.

    Hence, this ‘thinking’ we are trying to achieve through computers cannot be of a human kind and has to be of a different kind.

    I disagree that the imitation game is the solution to all of this, merely because of the human-computer comparison. Both human and computer thinking are shaped in completely different ways, and these different methods yield different end products, e.g. beauty, speed, computation, etc. Even though Turing explicitly states that the interrogator is not allowed to physically interact with the candidates, shouldn’t physical experiences also be considered, as they undoubtedly influence one’s thinking?

    “our verbal abilities may well be grounded in our nonverbal abilities”

    Agreed!

  19. “T3: Total indistinguishability in robotic (sensorimotor) performance capacity. This subsumes T2, and is (I will argue) the level of Test that Turing really intended (or should have!).”

    I’m not sure I agree with this statement. I would argue that within the design Turing suggests for the imitation game, he is evaluating thinking as reasoning, not thinking as cognition. The T3 test is necessary to evaluate thinking as cognition, but the T2 test is sufficient to evaluate thinking as reasoning. He gives an example of conversation between interrogator and respondent (i.e. the computer) to see if the answers follow the line of reasoning that a human likely would. He reformulates the question of “can machines think?”, equating thinking to reasoning, not equating thinking to complete cognitive and functional capacity at both an intellectual and sensory level (which is what would be addressed by a T3 robot). I think this is where, as you mentioned in this article, the name ‘imitation game’ convolutes the intention. Yes, if Turing wanted to fully and completely imitate a person, then obviously it is necessary to test at the level of T3 using a real-world robot. However, I do not agree that he interpreted the term ‘think’ to include replication of all human capacity. As an aside, I do think it was necessary for Turing to better define ‘thought’ rather than equating two questions and leaving the definition up to the interpretation of the reader.
    But regardless, a Turing Test at the level of T3 is no longer testing for ‘thinking’; it is testing for ‘cognition’. When considering cognition, I would argue that there are sub-domains including feeling, thought, sensorimotor experience, etc., which all feed back into and influence the cognitive state, therefore all of these sub-domains should be considered part of the process of cognition. By computationalism, all of these domains are physical systems and so a computer should be able to simulate them (so by computationalism, all of these conditions should be simulated in the T2 test; or rather, the T2 test should test for more than just reasoning). But Turing is not said to be a computationalist. In Turing’s paper, it is not clear that feeling and experience are regarded as physical systems, and thus they do not fit into the Church/Turing thesis where they can be simulated by computation. Assuming that, by the Church/Turing thesis, the test was designed to replicate only the physical systems within the human cognitive experience, the test should only include thought. By extension, other sub-domains of cognition (i.e., the dynamics of cognition: feeling and sensorimotor experience) do not need to be included in the Turing Test.
    This touches on reading 1b, which proposed cognition to be some proportion of computation and some proportion of dynamics. Perhaps then, the proposed T3 test would be better called the ‘Harnad Test’, since it would allow for computation (i.e. ‘thought’) to be manipulated by the dynamics of the robotic system (as was theorized in reading 1b). Arguably, the T3 test is the better test, since within a human, computation cannot be unbraided from the other dynamics of cognition, and as such cannot be adequately tested for within the T2 test. But since Turing was not considering dynamics nor their interaction with computation, it cannot be said that the T3 test was or should have been his intention.

  20. I can only agree with the annotations that were made. One I particularly liked was:
    T: “We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?”

    SH: “Here, with a little imagination, we can already scale up to the full Turing Test, but again we are faced with a needless and potentially misleading distraction: Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not! That would reduce the Turing Test to the Gallup Poll that Turing rightly rejected in raising the question of what "thinking" is in the first place! No, if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989).”

    It seems that Turing’s constant substitution of questions leads further and further away from his original intended topic. A machine being able to fool/mimic a human is an entirely different concept from the issue of whether or not machines can think. Turing ends up contradicting himself quite a lot in his paper; he goes from questioning the possibility of a thinking machine to the plain performance of said machine when tested.
    Another problem I have with the concept of a Turing Test is that the overall argument posits that it is possible to program a discrete-state machine so that it may simulate human action, but then it adds all sorts of exceptions: “regardless of time”, “given a certain topic of discussion”, “only 70% of the time”, etc. It is hard to find such an argument convincing when there are addendums and exceptions to its main point.

  21. Next time I shall read both readings before making a commentary as the topic of deception in the imitation game was addressed and explained well in your critique (however, I am not entirely convinced that today’s chatterbots like Cleverbot are not simply using deceptive techniques). If they did maintain their indistinguishable performance throughout a lifetime as you suggest, then I do accept the view that it is not simple deceit and AI does exist within the machine. “Scaling up to the TT empirically by designing a candidate that can do more and more of what we can do” was exactly what I was thinking back when reading last week’s papers. I agree that hybrid dynamic systems should be used in TT to prove T2 and T3 as this “robotic machine” will now be able to use all forms of world experiences (touch, hearing, vision, pain, etc) to better behave as a human would.

    However, one thing that is not accounted for (or maybe I have failed to find it?) is the gradient of intelligence that exists across human beings, and likewise in machines. If machines are indeed learning differently from humans, using higher processing speeds, and relying on input as their source of experience, then it must follow that there are different levels of intelligence that the machine could have depending on its programming and learning experience. Perhaps it is as intelligent as a human adult and can converse on a wide selection of topics ranging from politics to poetry to philosophical dilemmas, OR perhaps its intelligence is child-like in nature, meaning that many of these topics will be unknown to the machine. For the TT to be a fair test, should it not then have a measure of its level of intelligence and in turn compete against humans with a similar level of intelligence? You write that, “If Turing’s indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being – to anyone and everyone, for a lifetime” (Harnad 1989). But to me, a human judge/interrogator that is a child judging a machine that is adult-like in intelligence (or vice versa) will severely affect the accuracy of the test, thus the distinction should be made.

    On another note, what about the term “artificial stupidity,” coined around the time of the first Loebner Prize in 1991? It essentially refers to how a machine is dumbed down (e.g. made to give incorrect answers to arithmetic tasks) in order to better fool humans into thinking it is human itself. Could this in itself be a limitation of AI?

    ReplyDelete
  22. “But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world”

    Although this article explains that machines may be able to think to a certain extent, I don’t believe that a machine could be fully autonomous. To be autonomous, one must act according to one’s own desires and commands; however, because a machine, no matter how complex, must be programmed or designed by another being, it is hard to imagine a machine with motivations of its own.

    I would like to respond to a comment made above:

    “I also find myself somewhat fixated on emphasis on the notion that the machine must pass the TT all the time, every time, forever and not most of the people most of the time.”

    As Harnad mentions in this paper, to be a T5, a machine must have “Total indistinguishability in external performance capacity as well as in internal structure/function,” meaning that the machine is not merely trying to trick the interrogator into thinking it is human; for all practical purposes it is a human being, because there is no genuine difference in performance.

    ReplyDelete
    Replies
    1. Hey Freddy, I have some thoughts on your comments. If I understand you correctly, the problem you're finding with the TT is that there's a lingering question about whether or not a machine that passes it is truly autonomous. I think that's an interesting objection, but if I understand the statement “But surely an even more important feature for a Turing Test candidate than a random element or statistical functions would be autonomy in the world", autonomy in this case speaks to the machine being a closed system that, once finite instructions are given to it, is able to compute on its own. If you think about it, the machine in that way is similar to the way we as humans work: we have to have some prior "rules" (brain processes, whatever those may be), but then we are able to think on our own in a closed system. I also don't think that being autonomous means you need to have your own desires or motivations... that seems to get a bit too philosophical for this situation, but I see what you mean. There seems to be some missing element of the machine being able to have 'freedom', but even that is such a contentious word.

      Delete
  23. In the paper, it is stated that "verbal abilities may well be grounded in our nonverbal abilities."
    If this is in fact the case, then it would be logical to assert that T2 subsumes T3: that in order to have T2 you need T3. Otherwise you run into the symbol grounding problem of not being able to point to meaning. This is obvious when you think about actions: we define them through our ability to perform them; we know what kicking is because we can perform it with our bodies, and we can recall kicking or imagine ourselves doing that action. Or, put more simply, meaning is often linked to seeing and touching things (nonverbal abilities which then shape our verbal abilities). So perhaps Turing does in fact (perhaps inadvertently) account for T3 (since it is subsumed by T2), and thus the T2 test is sufficient.

    ReplyDelete
    Replies
    1. I don’t think the T2 test is sufficient, because in order to solve the symbol grounding problem there has to be some sort of interaction with the world. Furthermore, a T2-passing machine wouldn’t be able to answer the question “what color is the sky today?”, because there would be no input from the outside world. A T2-passing machine cannot see and touch things; there is no sensory perception, so there is no chance that it would be sufficient. A T3-passing machine, by contrast, does have the necessary interactions with the outside world to potentially fool a human into believing it’s conscious. So it’s the other way around: in order to have T3 you have to have T2, plus some other stuff.

      Delete
  24. I feel as if the Turing test is flawed by its very nature because it assumes that interpretable actions in the real world (i.e. speech) imply that the inner workings of the brain are functioning normally (i.e. that cognition is happening). The Turing test says that if something acts like a human, there is something like cognition going on inside. But this should not be the only way we classify things as cognitive thinkers. It excludes machines that think like we do but do not express it in words or actions, and instead just think within. There are examples of people like this: children who have lived in abusive conditions where they were denied exposure to language. They did learn basic language eventually, but the point is that machines should be able to be considered intelligent regardless of their ability to communicate it to us in the way that humans do.

    ReplyDelete
  25. I had a follow up question, as I don’t think I properly expressed my question in today’s class. (Not sure where to post this, so I am posting it here.)

    I am still unclear about the distinction between T3 and T4. I guess the question I should have asked is why we would want to go from a T3 to a T4 explanation. I thought the T3 test is what is relevant. In T3, we also have to build a physical device that does what it is we can do, and if the goal of cognitive science is to explain why and how an organism can do everything it does, and if we already have an explanation in T3, why do we need another explanation? I am just trying to get a firm understanding of T4.

    According to T4, Riona and Renuka would now have to be indistinguishable not only in terms of their behaviours but also with regard to internal structure, the stuff going on inside the head. So in essence, T4 is looking at the “doings” of the brain? It’s looking at the building blocks of the brain in order to know how the brain does what it does? But how does taking the brain pieces apart explain anything more? We care about the underlying causal mechanisms; wouldn’t this be correlational?

    What I got was that T4 is more about the nuts & bolts inside the head, more about the physical apparatus. I mean, of course we need to implement it in a physical device, and there’s no doubt the brain is the causal device, but don’t we care more about the level of the software as opposed to the level of the physical device? If we created a T3 robot that passed the test, this means we already have an explanation. It still doesn’t guarantee that it’s the right explanation, but let’s assume it is: what would T4 add? It still wouldn’t allow me to say with certainty that the robot can feel. Would it give us a more precise way of representing the internal structure, in the sense that it closely simulates how our brain functions? But my intuition tells me I am way off and this is not what T4 is about; any kind of guidance would be greatly appreciated.

    ReplyDelete
    Replies
    1. Hi Cait, I actually think your intuition is correct, and I think you're completely right to say that T3 seems to be the relevant level of the TT to be concerned with; T4 seems to add unnecessary details about the material the Turing machine is made of. Especially since T3 includes sensorimotor capacities, I think T4 is increasingly irrelevant at that point. If you have a robot that is made of anything at all and it passes the T3 test, then it passes the T3 test. This doesn't seem to necessitate the exact same substance as humans. T4 seems to me to be cosmetic, and that's what I'm also having a hard time accepting. Also, I noticed above in the comments that Professor Harnad responded to someone by saying "So, no, I don't think the robotic T3 needs to be done out of sight. I think if there was a more robotty looking person in the course, and they talked to us and sky-wrote just like the rest of us, we still wouldn't want to kick them..." and also "T3 is robotic (including verbal) capacity indistinguishable from our own and T4 is indistinguishability in robotic and verbal capacity as well as internal neural function." The first statement from Prof Harnad seems to me to say that the problem of appearance isn't relevant with regard to the TT (which is what you say), and the second statement leads me to conclude that T4 has more to do with the indistinguishability of the robot from a human in the sense that if you cut the robot open it would have something indistinguishable from a brain, complete with neural communication indistinguishable from our own, etc. So this is "more about the nuts & bolts inside the head," as you said. In conclusion, I think my answer to you is yes, we care more about the software and also about sensorimotor interaction (T3).

      Delete
  26. “The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.”

    I enjoy the critique of this quote because Turing’s point here is conjecture. It is too limiting and even undercuts his own test. As Harnad mentions, explaining thinking requires examining what it is to do thinking and how thinking can be done. By suggesting that “can machines think?” is meaningless, you are also dismissing the question of what it is to do thinking, which is exactly what the Turing Test is about.

    “It excludes machines that think like we do but do not express it in words or actions, but just think within.”

    This is an excerpt from a comment above. I found this point very interesting and can sympathize with it, but it then brings up other issues such as the other-minds problem. How can we know that a machine, or any other entity, is thinking without having it communicate? The only way we can see that thinking is being done is through communication. Communication is the “do” of thinking and is part of the easy problem. A machine thinking only from within would merely feel what it is like to think, which is a component of the hard problem. Lastly, if a machine could only feel what it is like to think and not be able to “do” thinking, would it really be thinking?

    ReplyDelete
    Replies
    1. I wouldn't say that the question "can machines think?" is too meaningless per se, but I do think that the question doesn't really capture what Turing is getting at. In terms of the Turing Test, the question "can machines think?" leads to the question of whether this machine can pass the Turing Test, and if it can pass the TT then we cannot go on to say that what it is doing is not thinking.

      Delete
  27. "It would have had that advantage, if the line had only been drawn between appearance and performance, or between structure and function. But if the line is instead between verbal and nonverbal performance capacities then it is a very arbitrary line indeed, and a very hard one to defend."

    I certainly agree with this since it does not seem clear to me whatsoever that cognition is only verbally based. However, the fact that in class we are also deciding that some things are vegetative and some are cognitive bothers me. Despite seeming to be a more definite distinction, it still seems to me that there are times when some vegetative functions overlap with cognitive functions (as I have mentioned in a comment on one of the previous papers). While we are accepting that cognition is both dynamic and computational, I don't think that it is an equal split and I do think that we should be putting more emphasis on the dynamics of the system.

    ReplyDelete
  28. I know it doesn't directly pertain to this reading, but it's been mentioned in class and I'm curious (others may be too). Dr. Harnad has said he thinks Penrose came to incorrect conclusions re: Gödel's Incompleteness Theorems and how they apply to the computationalism debate. I'll take a stab at giving a kid-sib, non-mathematical version of the first theorem, both for the benefit of anyone who’s interested and doesn't know it, and also so that I can check whether I actually understand it and have those who know more correct me or add something.

    The 1st incompleteness theorem states that for a formal system of axioms that is sufficient to develop the basic laws of arithmetic (and can be computed), it is impossible for the system to be both consistent and complete. (Not kid-sib at all - I'll attempt to break it down.)

    Axioms - statements that are accepted/self-evidently true
    Consistent – a system that doesn’t contain a contradiction
    Complete – no true mathematical claims exist that can’t be proven within the system

    The ‘mathematical claim’ (called the Gödel sentence for that particular system) is a statement that is true but can’t be proven/computed by the system. Generally the sentence is something like the following: “I am not provable in S”, where S is the formal system. So, if the system is consistent and the sentence is in fact not provable, then the system is incomplete. Conversely, if we assume the sentence is provable (making the system complete), then the system is inconsistent, since it would contain a contradiction.
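
    To compress the self-reference into a single line (the notation here is mine, added only as a reading aid, not taken from Turing’s or Harnad’s text): the diagonal construction yields a sentence $G_S$ such that $S$ proves $G_S \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G_S \urcorner)$. If $S$ is consistent, it therefore cannot prove $G_S$; and since $G_S$ asserts exactly its own unprovability, $G_S$ is true but unprovable in $S$.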

    Lucas was the first person to use Gödel’s theorem as an argument against a computational model of cognition. He essentially imagined that if you were to write out all of the operations etc. of the machine (formal system) under consideration, then a human would be able to look and see that the Gödel sentence is true. As such, the human mind can do at least one thing that no computer can, and therefore “a machine cannot be a complete and adequate model of the mind” (Lucas 1961). When Penrose entered the debate, he essentially reiterated Lucas’ argument with some modifications, and then added a second line of argument about “quantum consciousness” that I’ll admit I don’t fully understand.

    A lot of people have criticized the Lucas-Penrose argument based on Gödel’s second incompleteness theorem, which in layman’s terms shows that no such (consistent) system can prove its own consistency. The common counter-argument follows that logic to say that if we regard the human mind as a system, we can’t demonstrate our own consistency, and as a consequence the Lucas-Penrose argument doesn’t work.

    So my question for Dr. Harnad (and anyone else) is why he believes that Penrose is wrong in his conclusions about the nature of the mind - does it have to do with the counter-argument I listed above, or a more subtle aspect of the original arguments I am missing? Or does it have to do with his whole quantum-consciousness deal, which it seems many people think is a stretch at best.

    ReplyDelete
    Replies
    1. Cognition and Computability I
      [Reply to Adrienne about the Gödel proof that not all true theorems can be proven -- computed -- to be true.]

      What the two Gödel proofs prove can be stated very simply without getting into the details of completeness/consistency: He proved that for any formal system strong enough to include arithmetic (i.e., numbers), there is always a theorem that we can see is true, but that is not provable (i.e., its proof is not computable). The way it works in Gödel's proof -- though Turing proves it differently -- is by giving every formal symbol (and every combination of symbols) a unique (prime) number (including +, -, =, true, false, exists, etc.). There is even a Gödel-number for "...is a proof of..."
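
      Just to make the numbering idea concrete, here is a toy sketch in Python (the particular symbol set, coding scheme, and function names are my own illustration of the general trick, not Gödel's actual construction):

          # Toy Goedel numbering: give each basic symbol a small code number,
          # then encode a formula s1 s2 ... sn as
          #   2**code(s1) * 3**code(s2) * 5**code(s3) * ...
          # using successive primes, so every formula maps to one unique integer
          # and can be recovered from it by prime factorisation.

          SYMBOL_CODES = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

          def nth_primes(n):
              """Return the first n primes (naive trial division)."""
              primes = []
              candidate = 2
              while len(primes) < n:
                  if all(candidate % p for p in primes):
                      primes.append(candidate)
                  candidate += 1
              return primes

          def goedel_number(formula):
              """Encode a string of basic symbols as a single integer."""
              n = 1
              for p, symbol in zip(nth_primes(len(formula)), formula):
                  n *= p ** SYMBOL_CODES[symbol]
              return n

          # Example: goedel_number('0=0') == 2**1 * 3**3 * 5**1 == 270

      Once statements about the system (including "...is a proof of...") are themselves just numbers like this, a statement can end up talking about its own number, which is what the next step exploits.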

      The punchline is that, once you have Gödel-numbered everything like that, you can construct a statement (a combination of symbols) that says "There exists no proof for the statement with Gödel number G." But G is chosen so as to be the Gödel number of that statement itself. So the statement with Gödel number G is saying, of itself: "I am unprovable."

      [This is not a self-denial paradox like the Liar Paradox which says of itself "This statement is false," which (think it through) would be true if it were false and false if it were true, hence it is an unresolvable paradox; a paradox is almost as bad as a contradiction.]

      But the Gödel sentence is not paradoxical, like the Liar Paradox. It is true. It truly says of itself that it is not provable. And it is provably true that it is unprovable. That's the Gödel proof. And not only is the Gödel proof true, but we can "see" that the unprovable Gödel sentence is true, even though it is not provable that it is true. -- So that's what made Lucas and Penrose conclude that cognition cannot be computation. Because if the truth of the Gödel sentence is uncomputable, yet we can know it is true, we must know it non-computationally.

      (I should really stop here and let you think about it, before telling you what "Stevan Says"... If you want to think about it more before going on, stop here and come back after you've thought about it..)

      Delete
    2. Cognition and Computability II

      Is the statement "if the truth of the Gödel sentence is uncomputable, yet we can know it is true, we must know it non-computationally" true? Let us call that the Lucas/Penrose (L/P) sentence.

      No, and the reason is very simple: We know what a proof is. It is a computation that shows that if a statement -- the theorem -- were not true, it would lead to a contradiction, which is impossible, so it must be true.

      (This is actually what the "formalist" mathematicians take to be a proof. Other mathematicians, the "constructivist" mathematicians ask for even more than a demonstration that it would lead to a contradiction if a theorem were false -- in the case of proofs of the form that a certain number or function must exist otherwise it would lead to a contradiction. They insist on actually constructing the number or function. This is because constructions must be finite -- any computation involved must eventually halt; if it goes on forever, constructivists are suspicious about proofs based only on non-contradiction. But we needn't go into this, because Gödel's proof is constructive: He actually constructs the Gödel sentence; he does not just say that there must be one or else it would lead to a contradiction, though that would be enough for a formalist; Gödel was in fact a constructivist.)

      So the L/P sentence is saying that we know that the Gödel-sentence is true, yet the Gödel-sentence is provably unprovable (uncomputable), so "knowing" must be non-computational.

      Now, I happen not to be a computationalist, so I already don't think that cognition (including knowing, thinking, understanding) is just computation. However, I also have a certain fondness for logic. And the L/P sentence can be rejected based on just logic alone.

      (By the way, it is not that all formal systems have unprovable truths. Only formal systems strong enough to include arithmetic do. Gödel also proved that, in formal logic, the propositional calculus as well as the first-order predicate calculus are consistent and complete!)

      The logical error of L/P is to equate "knowable" with "provable/computable": Whereas it is (provably) true that the truth of the Gödel sentence cannot be formally computed (hence it is unprovable), there is no reason at all (except if we took computationalism as an axiom, which would be to beg the question!) to believe that the mental (i.e., felt) state of "knowing X" is the same state as "proving X to be true."

      In other words, even if computationalism were true, it would be perfectly possible that the cognitive (hence computational) state of knowing that X is true was not the same cognitive (hence computational) state as that of proving -- or being able to prove -- that X is true.

      Delete
    3. Cognition and Computability III

      Penrose's belief that cognition cannot be computation came from the familiar experience of great mathematicians when they have an intuition that a theorem is true, but they cannot (yet) prove it. (Fermat had that intuition about his last theorem, and died before he could prove it.)

      The Weak Church-Turing Thesis was about what mathematicians do (computation), not about what they "know." It was not a cognitive theory. And not even the Strong Church-Turing Thesis (that computation can simulate just about anything) is a cognitive theory. Only computationalism is a cognitive theory ("cognition is just computation") -- and ("Stevan Says") it's wrong. But even if computationalism were right, L/P would be wrong, as a logical inference.

      But in fact there's nothing on earth preventing mathematical intuitions (i.e., beliefs) from being computational -- any more than there is anything on earth preventing mathematical intuitions from being false. (That was Russell's point about laughing gas and knowing the secret of the universe.) We already know from Descartes that we can only really "know" two kinds of things: (1) the logically necessary truths of mathematics and (2) that we are feeling whatever we are feeling when we are feeling. All the rest is just beliefs (even if true).

      (As for Penrose's "quantum cognition" (QM): apart from the fact that (1) QM is not necessary in order to argue that cognition can't be just computation: ordinary Newtonian dynamics is already non-computational, (2) QM has enough puzzles of its own without being dragged in to resolve the puzzles of cognition; and (3) QM is just as involved in the functions of the liver as in the functions of the brain or any other organ of the body...)

      Delete
    4. Hi Dr. Harnad,

      Thanks so much for these in-depth replies. I think that I had understood Gödel's first proof exactly as you described it, I'm just obviously less proficient at giving a strong kid-sib explanation (which is no easy skill to acquire) so thanks for the extremely clear delineation.

      Re: L/P statement - that makes a lot of sense. It's one of those logical lapses that seems almost embarrassingly simple once someone points it out, but that is clearly not immediately evident (at least to me, and to a number of other authors whose papers delved into convoluted refutations without ever returning to that simple flaw). As a side note, it's interesting to me that many conflicts and refutations in the topics we've discussed seem to be founded on misuse or misinterpretation of critical terms (e.g. provable vs. knowable, Searle's word choice for a number of his premises/claims).

      Delete
  29. I find the idea of the Granny objection rather interesting, because you classify a large subset of concerns that you think represent a similar problem. You want to be able to talk about all of human behavior in one breath, without stopping along the way to discuss strawberries and cream. Questions about specific forms of behavior/feeling are therefore unnecessary, because the idea is that robotics will lead us to simulations of anything we can think of, at some point or another. Don’t even worry about what a robot can or cannot do, because we’ll get there. This is all about physical capacities, not feeling capacities, because feeling capacities remain stuck behind the Other-Minds Problem, which arises even before we can figure out whether other humans think.

    I think this really shows the kind of scientific reduction that you’re placing upon the mind/body problem. We are only looking at the physical world being simulated, because we come from the perspective of computation first.

    ReplyDelete
  30. “ 'We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"'
    Here, with a little imagination, we can already scale up to the full Turing Test, but again we are faced with a needless and potentially misleading distraction: Surely the goal is not merely to design a machine that people mistake for a human being statistically as often as not!”

    Perhaps what Turing meant to give leeway for was not the machine’s performance (i.e. allowing it to occasionally do less than a perfectly convincing job) but rather the interrogator/judge’s performance. That is, allowing for the possibility of the machine being “found out” as a machine due to the interlocutor’s judgement and not its own performance, in the same way a person might (conceivably) mistake a human for a machine: obviously not because the other human fails to be human enough, but because of heightened skepticism on the part of the interrogator, or perhaps answers by either human or machine that seem too formulaic or inappropriate for the conversation at hand (certainly possible on the part of a real human).

    ReplyDelete
  32. I thought this article was extremely interesting, and there are several comments that I would like to address. The author states “… [W]hat Turing will be proposing is a rigorous empirical methodology for testing theories of human cognitive performance capacity… calling this an “imitation game”… has invited generations of needless misunderstanding” (Harnad, 2008).

    I really enjoyed and resonated with this argument made by Harnad. Initially, I found it hard to understand what Turing’s “imitation game” was, but I thought this article made it quite clear. As the author emphasizes, the words “game” and “imitation” suggest that a form of trickery is involved. I also think the seriousness of Turing’s method for testing human cognitive performance capacity is downplayed: he referred to his proposal as simply a “game”, and de-emphasized the importance and controversy that would follow.

    Moreover, I think Turing’s own paper fails to pinpoint his goal of explaining human cognitive performance capacity through reverse bioengineering. Labelling it simply the “imitation game” is too simplistic and detracts from its importance and powerful influence. Harnad, however, does a good job of emphasizing its importance and how it pertains to human cognitive performance capacity.

    ReplyDelete
  33. “what he will go on to consider is not whether or not machines can think, but whether or not machines can do what thinkers like us can do -- and if so, how.”

    If the Turing Test were simply an investigation into whether or not machines can do what thinkers like us can do, it would be completely trivial. Clearly the way to find out is to try to make machines that can do what we can do and see whether they can. The true significance comes from the question of how, which would be answered implicitly by engineering a machine that can do everything that thinkers can do. So Turing’s really crucial contribution was suggesting that we could reverse-engineer cognition in order to learn more about it.

    One comment on the article as a whole: Since Turing was such an intellectual giant, Harnad gives him the benefit of the doubt regarding his apparent decision to limit the test to a verbal exchange, and also regarding his decision to limit the range of contestants to computational systems. I am not so certain whether Turing simply left these as gaps which were too obvious for him to explicitly fill in. It seems to me that he was sincere in promoting T2 and computationalism. Although their flaws are obvious today, they may not have been so obvious at the time, even to a genius like Turing.

    ReplyDelete
  34. On Mar 20, 2016, at 2:06 AM, Ines Patino Anaya wrote: Dear Prof. Harnad. Please see the video attached to this article... v. relevant. I’d be interested to know what you think of it.

    Hi Ines,

    KS: “But I thought you told me the purpose of the Turing Test was to design a robot that could do everything a human could do, and do it so well that we could not tell it apart from what a real person can do, for a lifetime. And I thought the purpose of doing that was to explain “cognitive capacity” “causally": to "reverse-engineer" how cognition can be done at all: And I thought that how the robot looked or sounded mattered the least: that what mattered was that it could do anything we could do. And you kept telling me that the TT was not meant to be a game or a trick. It was supposed to really be able to do everything we can do, not just fool us. But this Siri-like robot seems to be just about looking and sounding and fooling, not about doing! What’s up?”

    SH: When Hanson (the designer) says that from interacting with humans for years his robots will learn more and more and will eventually be able to do everything we can do, he is making a promise that any roboticist can make about their current toy robot. But the fact is that right now these robots are toys, and that the purpose of this particular one is to provide a commercial service, with patients and customers. The immediate objective is not to generate and explain human cognitive capacity. Let’s see where Hanson gets in the next 20 years — but my guess is that T3 is not going to come out as a side effect of a service ‘bot. T3 will be based on the capacity to interact with the world (including people), not just clients.

    About "love" (the "hard problem") see: Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89

    ReplyDelete
  35. “Turing's proposal will turn out to have nothing to do with either observing neural states or introspecting mental states, but only with generating performance capacity (intelligence?) indistinguishable from that of thinkers like us.”
    I think this is where I got confused. I thought the Turing Test proposal had something to do with mental states: as in, if a robot has the same performance capacities, it would be said to be both intelligent AND to hold internal mental states. So the Turing Test turns out to be solely a performance-capacity test? Observable behaviors don’t necessarily carry internal quality? I have no problem in saying that my computer is intelligent. It does things I cannot do myself. A calculator does arithmetic quicker than I can; so according to Turing’s definition, we can say it is capable of some type of performance. Intelligence is defined as an ability to act rather than as an internal conscious ability, if I am getting this right.

    ReplyDelete
    Replies
    1. Hey (again) Roxanne!

      So maybe I can clear a few things up (and hopefully solidify my understanding a little better).

      I agree with your conclusion. The Turing Test (TT) is purely a test of performance capacity. It is decisive because it is the best we've got... Once we've reverse engineered a robot that can do exactly what we do, we've done the best we could because there's nothing else we can feasibly reverse-engineer!

      Your question about whether the TT tests for mental states now invokes questions about feeling. Since the TT is a pure test of performance capacity, and is all about whether a TT-passing robot could DO what we do convincingly, the question of the robot's feelings is impenetrable because of the other-minds problem. There is no way of knowing whether our reverse-engineered cognition is also felt cognition.

      "Observable behaviors don’t necessarily carry internal quality?"

      Certainly our own behaviours carry internal qualities. We feel them! But there's no way of telling whether our TT passing robot does too.

      " A calculator does mathematic quicker than I can; so according to Turing definition, we can say it is capable of some type of performance."

      I think this stance misses the point a little. Turing's definition of cognition demands performance capacity identical to our own, because that's our best shot at reverse-engineering cognition. He makes no promises about the 'cognition' of machines that are not at our level of performance capacity, because toy machines (T0) can only do some of what we do.

      Delete
  36. This comment has been removed by the author.

    ReplyDelete
  37. It appears to me that the other-minds problem is often treated in an overly simplistic way, with little effort made to complicate the answer. For example, in the "argument from consciousness" it is stated that the OM problem is the standard argument against the Turing Test. The OM problem says that one can only know and account for one's own cognitive abilities, feelings, experiences, senses, etc., and nobody else's. But there must be some kind of vague threshold against which we hold up the responses in the case of the imitation game. For a machine to pass T2, it must be able to use verbal/written communication to "fool" a real human into thinking the machine is human, and it is supposed to be able to do that for as long as necessary.

    We claim that we can only account for our own minds, but clearly we are extrapolating our knowledge and abilities to learn, process, retain, and remember information onto other entities and comparing their performance with our own, or with what we consider "normal" human performance. On this note, I find it rather problematic how often we use the term "normal human" or "normal mind" in class as well as in the articles. This is a bit of a stretch but not an unprecedented question: what if the person in the other room is blind or dyslexic and cannot read the questions properly, so they answer curtly, often with only one or two words? Surely the interrogator might suspect the person to be a machine, yet it's a real person on the other side. Are we saying this person isn't normal? What is normal, and why are we not complicating the Other-Minds problem, given that we are clearly operating under significant assumptions about what a person can or can't do?

    ReplyDelete
  38. I think I really resonated with Harnad’s argument when he states “… [W]hat Turing will be proposing is a rigorous empirical methodology for testing theories of human cognitive performance capacity… calling this an “imitation game”… has invited generations of needless misunderstanding” (Harnad, 2008).

    Initially I was confused by the concept of the imitation game, and I thought this article did a good job of explaining it. Firstly, Harnad recognizes that these words entail a form of trickery, which is greatly downplayed by Turing. Turing is very modest in his methodology for testing cognitive capacities; perhaps he did not expect his theory to become so renowned and revolutionary. Also, Harnad recognizes that this game is actually a form of reverse engineering, which he notes as key to important future scientific research. Turing does not emphasize this. Harnad does a good job of shedding light on the complexities and importance of the Turing Test, which are severely downplayed by Turing himself.

    ReplyDelete