Saturday 2 January 2016

(2b. Comment Overflow) (50+)

4 comments:

  1. I wonder: in the Turing Test, if the one trying to decide who was the thinking human and who was the robot were replaced with another robot, could it pick the human out? I can imagine arguments for a robot interrogator being both better and worse at deciding which participant is human. In terms of worse, the robot would, I imagine, overlook the human qualities that it itself does not possess and therefore would not recognize. On the other hand, the robot interrogator might recognize the robot participant’s weaknesses, since it shares them. Could a robot be the interrogator?

    Discussing the restriction to email interaction, Harnad remarks that this limits the test considerably, especially since the comparison is of a computer’s ability to act like a human, and a human can do a lot more than email interaction! That is probably why this is only T2… I wonder whether it is even really enough for T2, though. Email interaction does not show the robot’s ability to use or understand even such verbal tools as tone, let alone nonverbal tools. I think the use of idioms or metaphors in the test could also distinguish the man from the robot. An idiom’s literal meaning is very different from its figurative meaning, and reactions to it depend greatly on the level of understanding and interpretation. Furthermore, what if the robot were asked to imagine or daydream and then asked to describe its feelings about it? The more I consider T2, the more distinguishable I find the robot and the man, given the right questions. All words, all verbal cues, are grounded in nonverbal capacities. Performance assessment is feeble without assessing nonverbal cues as well.

    Although T2 is not enough, T4 and T5 surpass the necessary abilities. As Turing and Harnad point out, it is about performance capacity, not appearance. T4 and T5 simply add human-like internal structure and physical presence on top of that capacity. T3 has the necessary sensorimotor abilities to experience the world, gaining much of the nonverbal performance T2 lacks, while keeping T2’s verbal abilities. T3 is the most useful level of the test.

  2. I don’t understand the T4/T5 distinction, nor do I quite understand the point of either in the hierarchy.

    “T4: Total indistinguishability in external performance capacity as well as in internal structure/function. This subsumes T3 and adds all data that a neuroscientist might study. This is no longer strictly a Turing Test, because it goes beyond performance data, but it correctly embeds the Turing Hierarchy in a larger empirical hierarchy. Moreover, the boundary between T3 and T4 really is fuzzy: Is blushing T3 or T4?”

    So T4 is T3 plus a brain? With skin as well? Can it be synthetic skin that has all of the necessary functionality? Or does it have to have the same chemistry?

    “T5: Total indistinguishability in physical structure/function. This subsumes T4 and rules out any functionally equivalent but synthetic nervous systems: The T5 candidate must be indistinguishable from other human beings right down to the last molecule.”

    This is a molecule-for-molecule copy of a human being. This level is obviously extraneous. What questions would it answer over and above T4? I think the purpose of T4 is something like: “I know that they made that T3, which is pretty nifty… but that doesn’t help us understand how it happens in the human body.” Right?

  3. “T2: Total indistinguishability in email (verbal) performance capacity. This seems like a self-contained performance module, for one can talk about anything and everything, and language has the same kind of universality that computers (Turing Machines) turned out to have. T2 even subsumes chess-playing. But does it subsume star-gazing, or even food-foraging? Can the machine go and see and then tell me whether the moon is visible tonight and can it go and unearth truffles and then let me know how it went about it? These are things that a machine with email capacity alone cannot do, yet every (normal) human being can.”

    Why wouldn’t adding sensory components to a T2 machine, to ground words and pick up information from the environment, be enough? Then we would have something that could pass T2, presuming we could find a computational solution for grounding words. Stevan mentioned in another comment (in reply to Amanda in the second comment of the first page of comments for 2b) that we need not worry about differences in appearance when creating a robotic version for T3, but why go to that extent when a sensory component could be added to T2 without needing to worry about the machine walking, moving, etc.?

    This piece definitely provided me with a more in-depth understanding of Turing’s original paper. It addressed many concepts, ideas, and ambiguities that are glossed over by Turing himself but that, when defined and examined, actually illuminate Turing’s argument rather than take away from its succinctness. One such idea, interestingly, Turing acknowledges is difficult to define, and he proposes to rectify this potential confusion by explaining the idea in different words, but I find this actually makes the concept(s) less clear. Harnad, citing Turing, writes “This should begin with definitions of… ‘machine’ and ‘think’…[A] statistical survey such as a Gallup poll [would be] absurd. Instead of attempting such a definition I shall replace the question by another…in relatively unambiguous words.” I find it implausible and a little bit absurd that Turing could present the concept of the Turing Test (or the Imitation Game, or the Turing Machine), which, as we can tell from the first paragraph alone, is absolutely centred on the concepts of “machines” and “thinking”, without giving explicit, coherent definitions of those concepts. Harnad’s explanations of these concepts elucidate meanings that Turing explains far less directly, if he explains them at all.

    Another very useful clarification made by Harnad is defining Turing’s criterion of complete indistinguishability in the performance capacity of a machine, and further subdividing the TT into t0-T5. These levels (and the concept of Turing completeness they establish) provide a more concrete understanding of what is meant by “passing” the TT and the Imitation Game as presented in Turing’s original paper.

    “... I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X”: [e.g.] “Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.”

    I don’t agree with the dismissal of Harnad’s “Granny objections” or Turing’s “Arguments from Various [In]abilities” (above). I think it is insular to reject these empirical questions, because they are the crux of true Turing completeness (i.e., can a machine ever pass T3 without being able to do the things described above? Don’t those concerns have to be addressed to some degree, rather than just dismissed, in the formulation of a “thinking” machine?).
