Saturday 2 January 2016

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
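
To fix ideas, here is a toy sketch of the game's protocol in code. This is my illustration, not Turing's formulation: the judge, machine, and human below are hypothetical stand-ins, and a real TT would run over a lifetime of open-ended dialogue, not a fixed question list.

    import random

    def imitation_game(questions, judge, machine, human):
        """Toy protocol: hidden players X and Y answer the questions;
        the judge must then say which label hides the machine.
        Returns True iff the judge identifies the machine correctly."""
        players = {"X": machine, "Y": human}
        if random.random() < 0.5:                   # hide which label is which
            players = {"X": human, "Y": machine}
        transcript = [(q, players["X"](q), players["Y"](q)) for q in questions]
        return players[judge(transcript)] is machine

    # A judge guessing at chance is right about half the time -- the baseline
    # behind Turing's criterion of "deciding wrongly as often".
    machine = lambda q: "yes"
    human = lambda q: "no"
    judge = lambda transcript: random.choice(["X", "Y"])
    wins = sum(imitation_game(["Q1", "Q2"], judge, machine, human) for _ in range(1000))
    print(wins / 1000)   # close to 0.5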





1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer
2. Two-part video about his life: The Strange Life of Alan Turing: Part I and Part II
3. Le modèle Turing (video, in French)

62 comments:

  1. I apologize ahead of time for my ignorance – my level of knowledge in this area is very limited, but I am very motivated to learn about it nonetheless.

    If I understood this correctly, Turing was speculating about the possibility of creating a machine that could pass the Imitation Game – that is, whether we can design a machine that has the capacity of doing everything that a human being can do and ultimately trick an interrogator into thinking that it is a human. Now, I am not in any way underestimating the brilliance of this man, but here is where I get a little confused.

    If one passes the Turing Test, this means that the system, the robot, can do anything that we as humans have the capacity of doing, and does this in a way that is indistinguishable from humans. However, am I wrong in thinking that saying that a robot can do anything humans can do doesn't necessarily imply that it's doing it the same way we do? Our ultimate goal is to have a causal explanation for cognition – to causally explain the mechanisms going on inside the head that enable us to do everything it is we have the capacity to do. But saying that a robot can do anything that humans can do doesn't necessarily mean, to me, that it's obeying the same principles as humans. I mean, we can arrive at the same output via different operations (2+2+2=6; 2x3=6; etc.), but is the algorithm used by the robot the same as ours? Or does that not matter? The impression I get (most likely wrongly) is that the Turing test is not a test for human intelligence per se, but more like a test that one needs to pass in order to even be considered as a serious candidate for modeling human cognition. I am just not convinced that if one passes the Turing test it entails that we now have an explanation for the functioning of the mind and that the robot has a mind. Perhaps the robot is generating its capacities on an input/output basis in a sort of mindless fashion? But then this raises the question of what it means to have a mind – is this where consciousness comes in?

    While reading Turing's Argument from Consciousness, I saw that he acknowledges the mysteries around consciousness but argues that we cannot know with certainty that another person is feeling. I also tend to disagree with the "extreme form of the view where the only way to be sure that machines think is to be the machine and to feel oneself thinking" (p. 10). I find that adopting this way of thinking doesn't get us anywhere, doesn't allow us to move forward. After all, how can one be really sure that they feel? What is feeling? Although my intuitions lead me to think that robots don't feel, I do not know this with certainty. And if indeed robots don't feel, why is that the case?

    Lastly, I had a question concerning the claim that "machines cannot make mistakes". This section had me thinking about learning. When children first start to learn what cats and dogs are, it is not uncommon for them to confuse the two. However, as they gain more experience, they become better and make fewer mistakes – as if their algorithms change and become better. Can machines be programmed to make these kinds of errors? Can machines have that same type of flexibility within their algorithms? Or can the only types of error they make be due to errors in the programming itself?

    1. CA: "the capacity of doing everything that a human being can do and ultimately trick an interrogator"

      The TT is not a trick. If a system can really do anything a human can do, for a lifetime (not just 10 minutes!), then there's nothing more a reverse engineer -- who is trying to explain how and why we can do what we can do -- can ask for.

      Or is there? (See the Turing hierarchy from t1 to T5.)

      CA: "saying that a robot can do anything humans can doesn’t necessarily imply that it’s doing it the same way we do?"

      No, but although it's the "easy" problem, it's already plenty hard to do it even one way. (See my reply about underdetermination.)

      CA: "is the algorithm used by the robot the same as us? Or does that not matter?"

      You are asking about "weak equivalence" (same input/output) versus "strong equivalence" (same algorithm).

      But that question is only even relevant if computationalism is right, and cognition is just computation.

      If not, it's not just about algorithms, but also dynamics. But there too you can ask whether it is the "right" hybrid symbolic/dynamic machine. And then it's again a matter of underdetermination. If two causal theories explain all possible data, but in different ways, there is no way to decide which one is right.

      Ditto with reverse-engineering. If two causal mechanisms have exactly the same performance capacities, what does it even mean to decide which one is "right"?
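
      (To make "weak equivalence" concrete, here is a toy sketch -- mine, not from the reading: two procedures with the identical input/output mapping, reached by different algorithms, so no behavioral test can tell them apart.)

        def triple_by_addition(n):        # the 2+2+2 route
            total = 0
            for _ in range(3):
                total += n
            return total

        def triple_by_multiplication(n):  # the 2x3 route
            return n * 3

        # Same I/O for every input (weak equivalence), different internal
        # steps (not strongly equivalent).
        assert all(triple_by_addition(n) == triple_by_multiplication(n) for n in range(100))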

      Of course, with the mind, there is one other thing that matters, besides performance capacity....

      CA: "Perhaps the robot is generating its capacities on an input/output basis in a sort of mindless fashion?"

      Because of the other-minds problem, there is no way to know whether the robot feels. But that's why we have Renuka and Riona. What it means to pass the Turing Test is to be able to do anything Renuka or Riona can do. If the TT-passing robot does not feel, then you have the wrong causal mechanism. But still you have no way of knowing whether or not the TT-passing robot (Renuka, or Riona) -- or anyone else -- feels.

      CA: "is this where consciousness comes in?"

      Yes. To be conscious means to feel. To have a mind also means to feel.

    2. CA: "While reading Turing’s Argument from Consciousness, he acknowledges the mysteries around consciousness but argues that we cannot know with certainty that another person is feeling."

      Yes, Turing too, like many others, has noticed the other-minds problem, and the fact that there's no way to know for sure whether anyone else but me has a mind. This is not much of a problem with mind-reading other people (because they're like us and they talk). It becomes more of a problem the more a species differs from our own. And of course with the robotic Turing Test, the problem is at its height. ("Stevan says": if a robot could pass the TT, it is almost as likely that it feels as that other people feel.)

      CA: "I... disagree with the 'extreme form of the view where the only way to be sure that machines think is to be the machine and to feel oneself thinking'”

      There's no way to be sure. With machines, other species, or people. There's only probability.

      CA: "doesn’t allow us to move forward. After all, how can one be really sure that they feel? What is feeling?"

      We all know in our own case exactly what it means to feel. (Descartes' cogito.) But for others, we don't need to be "sure" any more than we need to be sure apples fall down rather than up.

      CA: "if indeed robots don’t feel, why is that the case?"

      That's a good question. In fact it's the hard problem all over again: If TT-passing robots don't feel -- i.e., if they're just Zombies -- then it makes it even more mysterious why on earth we feel.

      And since the other-minds problem cuts both ways (i.e., not only can you not be sure that something else feels, you also cannot be sure it doesn't feel), the hard problem (which is not the same as the other-minds problem) can also be formulated like this: "How and why are we not Zombies?"

      CA: "When children first start to learn what cats and dogs are, it is not uncommon for them to confuse the two. However as they gain more experience, they become better and make fewer mistakes – as if their algorithms change and become better. Can machines be programmed to make these kinds of errors?"

      Learning machines -- purely computational ones -- of course make mistakes while they are learning. They have a learning algorithm, but it's not so much their learning algorithm that changes as they learn, but their hunches about what's what.

      They are programmed to be able to learn. They don't need to be "programmed" to make mistakes. The task of learning takes care of that...
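
      (A minimal sketch of the point -- my illustration, with made-up numbers: the learning algorithm below never changes, but its stored "hunches" do, so its mistakes fall off with experience, the way a child's cat/dog confusions do.)

        import random
        random.seed(0)

        prototypes = {"cat": 0.0, "dog": 1.0}    # initial hunches (arbitrary)
        counts = {"cat": 1, "dog": 1}

        def classify(x):
            return min(prototypes, key=lambda lab: abs(x - prototypes[lab]))

        def learn(x, label):                      # running-mean prototype update
            counts[label] += 1
            prototypes[label] += (x - prototypes[label]) / counts[label]

        for block in range(4):                    # 4 blocks of 50 trials
            errors = 0
            for _ in range(50):
                label = random.choice(["cat", "dog"])
                x = random.gauss(2.0 if label == "cat" else 5.0, 1.0)  # noisy feature
                errors += classify(x) != label
                learn(x, label)
            print("block", block, "errors:", errors)   # errors drop as hunches settle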

  2. “The fact that Babbage's Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signaling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical.”

    I am slightly confused by this argument. In the previous paragraph, Turing says that Babbage’s planned machine was digital, with mechanical storage. So, when he is talking about signalling, is he only referring to storage then? Or, by making this statement is he saying that it is possible to have an entirely mechanical machine that passes the test, but it would just be slower because it would not have electrical signalling like the CNS or digital computers would?

    "This special property of digital computers, that they can mimic any discrete-state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent."

    From what I understood, Turing is saying here that since there is a finite number of states in discrete-state machines, and since we can use a table of the input and outputs assigned with each state, then we can program other digital computers to use these tables and predict the action of the discrete state machine for us. The machines that can mimic or predict the action of one discrete state machine can do this for multiple discrete state machines, and are therefore considered universal machines. Turing then says that all digital computers are equivalent, and this is where he loses me. Does he mean that all digital computers are capable of being equivalent? Let’s say that we have one universal machine (C) mimicking the behavior of machines A and B, and another machine (D) mimicking the behavior of C. Machine A would not be equivalent to machines B, C, or D, but would machines C and D be considered equivalent given that machine D does not mimic any other computers?

    1. Ceci n'est pas un ordinateur

      AC: “Turing says that Babbage’s planned machine was digital, with mechanical storage. So, when he is talking about signalling, is he only referring to storage then? Or, by making this statement is he saying that it is possible to have an entirely mechanical machine that passes the test, but it would just be slower because it would not have electrical signalling like the CNS or digital computers would?"

      He's just saying that the physical details of how the machine is implemented are irrelevant. All that matters is that the TT-passer should be able to do anything and everything we can do, indistinguishably from the way any of us do it.

      Turing is talking (at the same time) about (1) the fact that the TT just guarantees "weak equivalence" (input/output equivalence) not necessarily "strong equivalence" (algorithm equivalence) and (2) about the fact that (if computation is implementation-independent), the specifics of the physical implementation are irrelevant.

      But I"m not sure Turing was a computationalist... What do you think?

      (The Weak Church/Turing Thesis is about computability by mathematicians, and the Strong Church/Turing Thesis is about the computer-simulability of just about everything. But it does not follow from either of these that cognition is just computation: Cognition could be all or part dynamic too, like heat or movement.)

      AC: "From what I understood, Turing is saying here that since there is a finite number of states in discrete-state machines, and since we can use a table of the input and outputs assigned with each state, then we can program other digital computers to use these tables and predict the action of the discrete state machine for us. The machines that can mimic or predict the action of one discrete state machine can do this for multiple discrete state machines, and are therefore considered universal machines. Turing then says that all digital computers are equivalent, and this is where he loses me. Does he mean that all digital computers are capable of being equivalent?"

      Programmable digital computers ("Von Neumann Machines") are Universal Turing Machines. They can mimic any other digital computer. But mimicking (i.e., computationally simulating) what another machine is doing does not mean doing what the other machine is doing -- except in the special case where the other machine is just a digital computer, and all it's doing is computation (symbol manipulation).

      If the other machine is instead a vacuum cleaner, then the universal Turing Machine (digital computer) will not suck up dust: It will just simulate sucking up dust: simulated dust and simulated sucking. To forget this difference is to get lost in the hermeneutic hall of mirrors that is created by confusing the computations going on in the computer with the interpretations of the computations going on in your head.
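
      (A toy sketch of the point -- mine, not Turing's notation: a discrete-state machine is just a transition/output table, so one and the same program can mimic any of them by swapping tables.)

        def run_dsm(table, state, inputs):
            """Mimic any discrete-state machine from its table."""
            outputs = []
            for symbol in inputs:
                state, out = table[(state, symbol)]
                outputs.append(out)
            return outputs

        # Example table: a two-state parity detector (output = parity so far).
        parity_table = {
            ("even", 0): ("even", 0), ("even", 1): ("odd", 1),
            ("odd", 0): ("odd", 1), ("odd", 1): ("even", 0),
        }
        print(run_dsm(parity_table, "even", [1, 0, 1, 1]))   # [1, 1, 0, 1]

      Swap in a different table and the same run_dsm mimics your machine A or B; and since run_dsm is itself just symbol manipulation, your machine D can simulate it (and hence A, B, and C) in turn. That is the sense in which all digital computers are "equivalent."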

      This is not a computer:

      http://www.najevtino.mk/images/m/17/60f0274273b7987c5f2e5332ba2e0f45.jpeg

  3. "A better variant of [Lady lovelace's] object says that a machine can never "take us by surprise"".
    Turing disagrees, stating that machines take him by surprise due to insufficient or sloppy calculations for predicting outcomes. Turing doesn't mention the possibility that the machine, which is built to perform all that we do (i.e. attempting to fulfil the goal of cognitive science), might ultimately take him by surprise not due to a calculation error, but rather due to invoking an intuitive, unconscious feeling (love, hate, desire...). If anyone has seen the movie Ex-Machina, then consider when the Turing machine ultimately surprises the man. The surprise is not due to its elaborate and unpredictable actions, but rather due of the machine's ability to invoke an array of feelings within the man. It's not surprising that the machine has feelings, for they may as well just be imitations for all we know, but rather it is surprising how convincing and powerful the machine can be at invoking authentic feelings within the man who interacts with it.

    "Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed".
    I disagree with Turing's belief that designing a programme to mimic the human mind would be more trivial if modelled after a child's brain. Infant brains are undergoing rapid neuroplasticity, and in fact are theorized (by interactive specialization) to have more active brain-network connections than the adult's, whose networks narrow and refine with expertise and experience over time. Assuming these two theories are true, I think a child's brain provides a more chaotic and complex model than the adult's.

    1. MS: "Turing doesn't mention the possibility that the machine, which is built to perform all that we do... might ultimately take him by surprise not due to a calculation error, but rather due to [e]voking a feeling... [in us].

      It's already part of the Turing Test that the candidate must be completely indistinguishable from any of us, whether verbally, if it's an email pen-pal (T2), or both verbally and behaviorally, if it's a robot (T3: e.g., Renuka or Riona). So of course the TT candidate can evoke feelings in us (interest, boredom, fondness, fear). The TT has to have the power to be indistinguishable for a lifetime, so they are bound to evoke feelings. It is feelings that would prevent you from kicking Renuka or Riona, even though you were told they were "just" robots -- because they are otherwise exactly like any of us. We can get to know them, feel sorry for them, feel protective of them, etc. etc.

      But you're right that computational error is not the only way a program I wrote could surprise me -- and it doesn't even have to be a TT program. It's enough that it's a program that can learn things from data. I could write the program, giving it an algorithm for picking out cat images on the web, and it could bring me back cat images from places I never knew had them.

      Or it could be a program with a theorem-proving algorithm, which it could then apply to proving theorems I had not known were true.

      And so on. The "surprise" objection, like most of the others, is very naive about what computation is, imagining that if you write and run an algorithm you, the programmer, know every output that will be made to every input by that algorithm. Not only do you not know every possible output (if the program is non-trivial, there are an infinite number of them), but you don't know every possible input the algorithm will receive as data.
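
      (A tiny illustration -- mine: a few lines whose rule I know completely, but whose output I cannot know without running it.)

        def collatz_steps(n):                 # halves evens, does 3n+1 on odds
            steps = 0
            while n != 1:
                n = 3 * n + 1 if n % 2 else n // 2
                steps += 1
            return steps

        # Which starting number under 10000 takes longest to reach 1?
        # I wrote the rule, yet the answer still surprises me.
        print(max(range(1, 10000), key=collatz_steps))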

      MS: "It's not surprising that the machine has feelings, for they may as well just be imitations for all we know..."

      Here it sounds like you might be mixing up its having feelings with its evoking feelings in us (which is what you seemed to be saying above, and below).

      And the computer does not imitate feelings, it imitates how people behave when they are feeling.

      MS: "...but rather it is surprising how convincing and powerful the machine can be at [e]voking authentic feelings within the man who interacts with it."

      Correct. But the reason Renuka or Riona can evoke feelings in us is the same as why any other feeling being can evoke feelings in us, namely, that they're indistinguishable from a feeling being.

      MS: "I disagree with Turing's belief that designing a programme to mimic the human mind would be more trivial if modelled after a child's brain.

      All Turing means here is that children can do fewer things when they are young (infants even fewer), so there are fewer cues to distinguish TT robot-children from real children. But, as I said, the TT is a lifelong test, so TT robot-children would have to have the capacity to grow, learn and mature, indistinguishably from real children.

  4. “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

    I'm surprised I haven't seen this quote among the many that get over-popularized on the web and in pop culture; I don't even think they used it in the latest movie about him! But this quote is fascinating given the mind that made it and the field it was meant to describe. The irony of this article is that Turing had such an incredibly forward-thinking understanding of the field of computation in its infancy. To read his predictions on computer intelligence, storage and capabilities ~75 years after publication and see how accurate many of them turned out to be makes the quote above seem very modest.

    Now, I do not know how many ideas from this paper were wholly original or even new at the time, but some that stick out include: his explanation of possible learning methods for computers, by way of the analogy of trying to build not an adult mind but a child's; his open mind about the possibility of computer storage, at a time when he was working with only a couple thousand bits of information, and how much larger storage could make possible the types of task and intelligence he was describing; and lastly his comparisons between the brain and an electronic computer, including the approximate number of neurons and his belief that only a small section of the brain is used for higher levels of thinking (although it is humorous to see his assumption that most of the rest is used for retaining visual impressions).

    While 75 years may not prove to be a long time in the scheme of how computation will play out in civilization, at this point it certainly seems like Turing was able to see much further ahead than he might have believed.

  5. The Argument from Consciousness is quoted from Professor Jefferson: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain -- that is, not only write it but know that it had written it." Consciousness is the ability to experience and feel. Cognition is the set of mechanisms going on inside the head that enable us to do what we do every day. So from Jefferson's argument, it seems to me it should not only be the Argument from Consciousness but the Argument from Consciousness and Cognition, since they seem to go hand in hand. He speaks of thoughts and emotions being felt, which could be defined as consciousness. However, he also speaks of a machine not just feeling emotions and then composing a sonnet, but also knowing that it had written it. The keyword here is know. For a machine to know that it wrote a particular sonnet is to have the cognitive capacity to reflect on its actions (i.e., composing a sonnet). At the end of this argument, Turing states, "I do not wish to give the impression that I think there is no mystery about consciousness… But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper". Turing does not think consciousness is relevant to answering the question, "Can machines think?" I believe he should also be stating that cognition is not relevant to answering that same question, since, as I showed before, Jefferson's argument entails consciousness as well as cognition.

    A question I have concerning the Imitation Game: when answering the interrogator's questions, is the person (A) trying to cover up the fact that they are the human in the game? Are they trying to trick the interrogator? It seems that some questions could either be answered to the point (Yes or No) and some could be elaborated. I guess I am confused about whose side the human is on. Are they trying to cover up their identity as the human, or do they want to be known? On the other hand, I understand that the machine is trying to trick the interrogator into thinking that it is the human.

    1. It feels like something to know, or to understand, or to think. Otherwise "knowing" and "understanding" would mean only "being able to do." Computer programs today can write sonnets, and even fair imitations of humdrum Bach. But they are nowhere near TT power, and never will be: because they cannot pass T3, and ("Stevan says") even T2 can't be passed without being grounded in the robotic capacity to pass T3. T3 grounds words in the things they refer to, via the robot's sensorimotor system.

      The TT is not a trick, and was not meant by Turing to be a trick. Think of what Renuka and Riona can do, for a lifetime, not what a computer program can do, verbally, for 10 minutes.

    2. When you talk about it "feeling like something" to know, understand or think, am I right in understanding that you're separating cognition from the argument against Turing that Jordana mentioned above? If so, this makes sense to me because we've discussed in class at some length that solving the hard problem of consciousness isn't required in order to solve the easy problem of cognition. Namely, in this case the easy problem is the "being able to do" that you mention. This is separated from the feeling of "knowing" and "understanding," which is perfectly fine if we're only concerned with understanding cognition; in fact, if I understand correctly, this is Turing's rebuttal to Jefferson's consciousness argument.

      A couple points that are still a little confusing to me are these:

      1. You speak about what computers today can do and their potential, now and in the future, for passing the TT. One thing I do agree with is that right now they cannot pass T3 or T2, since they lack the necessary grounding of words. However, when you say that "they are nowhere near TT power and never will be," are you saying that you think the computers of today will never progress to a level at which they would be able to ground their words via the robot's system and pass T3? I don't see a reason why a robotic system wouldn't eventually be able to be equipped with sensorimotor systems that would allow this.

      2. The notion of the TT not being a trick has come up a couple of times in these comments now. This may be trivial, but if we aren't implementing T5 or even T4, the robotic implementation would surely differ from us in some superficial ways, would it not? For example, as Turing mentioned, it would be able to perform arithmetic much more quickly and accurately. Would pausing or making a slight error to hide its superiority from an interrogator not be considered a form of trickery (even if this was not Turing's intention for the TT)?

  6. "The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter... for instance, if the machine was trying to find a solution of the equation x² - 40x - 11 = 0, one would be tempted to describe this equation as part of the machine's subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure. By observing the results of its own behaviour it can modify its own programmes so as to achieve some purpose more effectively."

    “A variant of Lady Lovelace's objection states that a machine can "never do anything really new." This may be parried for a moment with the saw, "There is nothing new under the sun." Who can be certain that "original work" that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles. A better variant of the objection says that a machine can never "take us by surprise."”

    I'm a little confused by what it means for a machine to be its own subject matter and to observe the results of its own behavior. So, assuming the machine has some thought, does this mean that the state the machine is in is itself the machine's subject matter? If so, what is the next state of the machine? Does it modify one of its original states to create a new state, or does it go back to an old, modified state? In that case, how would the machine ever learn anything new if it has already been given all the necessary states to deal with any problem it faces? I'm assuming induction does not equal learning, which is why I'm also confused as to why Turing compares learning to the element of surprise, since being surprised seems to be us learning from the machine and not the machine learning from its inputs.

    1. If you assume the machine is just a computer, and also that it has "some" thought, you've already given away the store to computationalism by assumption.

      Yes, a computer can have states that take the data from other states as input, but that doesn't mean the computer is thinking, let alone thinking about itself.

      But computers can definitely surprise their programmer. One can write a program that changes in various ways depending on the data it encounters, either from the outside world, or generated by its own calculations. The outcome down the road can take the programmer completely by surprise. (Although of course if the programmer did the symbol manipulations himself, step by step, it would not be a surprise to him.)

      Learning is learning, a form of doing, whether done by a computer or a human. (I don't say "machine," because organisms are machines too: machine just means a causal mechanism.) But learning is not just doing. It also feels like something to learn, and to learn something new. A computer does not feel that (or anything). It just does.

  7. “May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.” (page 2)

    Why not? Isn't this both a significant and “troubling” objection? Turing seems to be saying that the actual process of thinking itself is irrelevant as long as behavioral output is indistinguishable from that of a human. He is completely ignoring the challenge of discovering, defining, and implementing cognition by dismissing it as irrelevant. To Turing, it does not matter if the machine is actually thinking, as long as it can “carry out something which ought to be described as thinking.” But to play the imitation game satisfactorily, indefinitely, and without fail (assuming T3 here, not just T2), wouldn't the machine need to be thinking? Not just something described as thinking, but actually thinking, exactly the same as us. If the machine is not thinking, how can we be sure it will always pass T3? Maybe it has simply chosen the correct behavioral output to every question asked so far.

    "We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental." (page 3)

    But again, this is ignoring the issue of defining cognition. How will we be able to implement it if we haven't even defined it? Building a system like the one Turing describes may be an important step towards discovering what cognition is, as it permits reverse-engineering of another model system to help illuminate how cognition works. But again, until we actually know what cognition is and how it works, how can we be sure the machine actually passes T3 and is not simply being asked easy questions?

    “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.” (page 12)

    So according to Turing, there is no connection between consciousness and cognition. But we know this is false, as we know how it feels to think. So cognition and consciousness go hand and hand, and to pass the Turing Test we need to recreate cognition, so the question of the role of consciousness does in fact need to be answered.

    Considering all the arguments he presents, is Turing a computationalist? He not only neglects the fact that we need to discover what cognition is and how it works before we can successfully implement it in a way that allows a robot to pass T3, but argues that it is entirely unimportant. Is this the computationalist argument in its true essence? That the challenge of discovering cognition is irrelevant because both humans and machines possess intrinsic computing capabilities, and that these capabilities are both necessary and sufficient to permit them to do all that they do? I'm leaning pretty heavily towards a resounding YES, but I'm still not positive, so feel free to convince me one way or the other.

  8. Turing mentions the fact that most computers have the feature of using electricity, which makes some people think that this is what makes computers similar to human beings. This analogy is often due to the fact that the human nervous system is also based on electrical communication between neurons. After describing this analogy, Turing points out that instead of focusing on the electrical "common feature" between people and computers, we should "look rather for mathematical analogies of function".
    In one of my classes (Advances in Visual Perception), without directly talking about computer perception, we discussed how people are thought to perceive things visually. It has been argued that people tend to use different filters and different channels of filtering that make up the contrast sensitivity function that the brain understands. This hypothesis has been supported by several experiments in which people were habituated to one pattern and, because their neurons were desensitized to this pattern for some time, had more difficulty seeing a similar pattern in another frame.
    It is interesting how people try to "humanize" machines and to "computerize" the way people think. In this particular example about seeing, I am curious to see what is going to be the complete explanation of seeing. Will it include the feeling of seeing? And if yes, what could be a potential mechanism for it? Or would it be totally possible to describe a living being's visual process without involving the mechanism responsible for the feeling of seeing?

    1. "It is interesting how people try to "humanize" machines and to "computerize" the way people think."

      I agree. It is an interesting and even natural tendency for people. I didn’t notice that I was trying to “humanize” a computer until I got to the line in Dr. Harnad’s article that reads: “thinking is not observable: that unobservability helps us imagine that computers think” (page 10). Although there is a clear difference between a human thinking and a computer computing, I didn’t even realize that I was equating the two until it was explicitly pointed out to me.

      “Would it be totally possible to describe a living being’s visual process without involving the mechanism responsible for the feeling of seeing?”

      I think so. Neuroscience can probably explain every single mechanism and molecule responsible for objective sight (at least eventually; granted, we are not there yet). So we can completely describe the visual process, excluding the "feeling" of seeing. As such, we can probably even simulate sight.

      “I am curious to see what is going to be the complete explanation of seeing. Will it include the feeling of seeing?”

      Again, I think so. While objective sight can be explained purely mechanistically, I would say the complete, subjective experience of seeing does involve the feeling associated with it.

      “and if yes, what could be a potential mechanism for it?”

      Aha. Here is where it gets much trickier. The question is not only "What could be a potential mechanism for it?" but also "Does a mechanistic explanation of 'feeling' even exist?" To me, "mechanistic" implies an explanation grounded in the physics of a phenomenon. So while we can mechanistically explain sight, we cannot mechanistically explain the "feeling" of seeing, at least not yet. And if we adhere to the principle that computation is not cognition ("Stevan says"), then we might never be able to.

  9. Referring to section 4. "Digital computers"

    In this section the author describes the working parts of a digital computer, comparing them to a "human computer". Turing says that a digital computer consists of three parts: the store, the executive unit, and the control. The store is responsible for holding information, such as the rules/functions that can be carried out; the store also holds calculations. The executive unit is the component that performs the operations called for by the rules read from the store, with results recorded back in the store. These operations can be simple or complex. Lastly, the control ensures that the rules are read and performed correctly, in the right order.
    According to Turing, a human computer's rules are written in a book rather than in a store. Instead of calculations being held in a store, they are completed on an unlimited supply of paper, or can be done in memory. Why are components that perform functions similar to the executive unit and the control not addressed for the human computer? Is the executive unit equivalent to what would be a definition of cognition?
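
    (To make Turing's three-part anatomy concrete, here is a toy sketch -- my illustration, not Turing's design: the store holds the rules and the results, the executive unit performs individual operations, and the control sees that the rules are applied faithfully and in order.)

      store = {
          "program": [("LOAD", 7), ("ADD", 5), ("MUL", 3), ("HALT", None)],
          "accumulator": 0,
      }

      def executive(op, arg):                  # performs one operation at a time
          if op == "LOAD": store["accumulator"] = arg
          elif op == "ADD": store["accumulator"] += arg
          elif op == "MUL": store["accumulator"] *= arg

      def control():                           # fetches rules in order, faithfully
          for op, arg in store["program"]:
              if op == "HALT":
                  return store["accumulator"]
              executive(op, arg)

      print(control())   # (7 + 5) * 3 = 36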

    1. Hey Renuka,

      Of course I would need Stevan to confirm, but I interpreted the comparisons (or lack thereof) of the computers' features in the same way you did (I think).

      It seemed implicit to me that both the executive unit and the control would have to fall within the category of 'cognition' in the case of the human computer. I also thought that possibly they weren't addressed because creating a meaningful, tangible comparison (as was done with the paper and the store) would require an understanding of the mechanisms of cognition at a level that we still lack. Is that sort of how you interpreted it? Or am I off-track here?

    2. Hi Adrienne!

      I agree with you that the executive unit seems to be the likely parallel to whatever it is that cognition is. This is because the executive unit reads the "rules" and then performs them (how it does so, we are still unsure). This fits nicely with the anti-computationalist view that cognition isn't ALL computation, because of the simple addition of "and performs them [the rules]". However, I'm not sure about the "control", because sometimes we (humans) do make errors when "performing" rules, which leads me to believe that we don't have such a rigid/strict "control" mechanism as digital computers do.

    3. Hi Renuka and Adrienne,

      I am a little confused by "control" as well. I am unsure whether computers or humans have the more rigid/strict "control" mechanism. For instance, a computer programme can respond to changes in its code and to different inputs, making it very adaptive; humans, on the other hand, can be very predictable and repetitive, which can make us quite limited. Are humans truly rigid? Are computers? Are computers or humans more adaptive? The same?

  10. From The Theological Objection

    While, like many, I find both the theological and ESP objections to be trivial with respect to a machine ever passing the Turing Test, I can't help but discuss a moment of really weak argument on Turing's part in this passage (one of a handful in the paper). While some of the faults of this section were discussed in Harnad's "The Annotation Game", I wanted to highlight a point that wasn't touched upon.

    Turing argues here that suggesting a computer could not have a soul "implies a serious restriction of the omnipotence of the Almighty... should we not believe that He has freedom to confer a soul on an elephant if He sees fit?" He then goes on to - somewhat patronizingly - note that just because the concept of a machine being gifted an immaterial soul is "more difficult to 'swallow'", does not mean that He would not do it. From there on, he essentially considers his point made, case closed - God could give computers souls if He really wanted to.

    I found this rebuttal totally unsatisfactory, as it rested on a premise that (if one were to argue from the Christian perspective Turing is refuting) is fundamentally untrue. In my opinion he misses the point of the 'immaterial soul' and the Biblical conception of man as being worthy of having one because we are made in God's image etc. etc., and are distinct from animals on the basis of many criteria. Thus, there is never a situation according to the Bible where God would ever confer a soul upon animal or machine, even "in conjunction with a mutation which provided the [animal] with an appropriately improved brain to minister the need..."

    To be clear: I am definitely not arguing in favour of the theological objection, just highlighting an example of the inconclusive arguments that I found in a number of passages in the paper, including the one Alex Hebert breaks down really well in the 2b comments.

    1. This comment has been removed by the author.

    2. To add to Adrienne's comment, I am not sure I understand whether Turing supports the possibility that a computer could have a soul.
      "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates." (Turing, 1950)
      It seems to me that Turing would be open to the idea of computers having a soul, and if that is the case, doesn't it weaken the evidence for a "thinking machine" that the Turing Test could provide concerning the question of "how" we think? If I understand things correctly, it seems to me that Turing is open to the idea of a duality between mind and body, with "mind" as some sort of immaterial entity that has power over the material world. This intuition appears to be confirmed by Turing's belief in ESP, a phenomenon based on a "mind over matter" kind of power/force. Thus, if the Turing Test confirms that a machine can fully replicate the performance capacity of a human, then the "how" is not solved, because the machine, if souls exist, serves only as a "mansion for the soul", and an explanation for how cognition works remains to be found, outside the scope of what the Turing Test can prove.

  11. “Intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation, but a rather slight one, which does not give rise to random behaviour, or to pointless repetitive loops” (Turing 60).

    This quote offers an interesting window onto Turing's views on cognition. We may first deduce from the paper that Turing is a functionalist: because he believes that another system, different from the human, can pass the imitation game -- whether or not we can explain its inner workings -- Turing fundamentally holds that it is the apparent outcome that determines whether machines can think.

    Next we can ask whether Turing is a computationalist. Because Turing believes that intelligent behaviour goes beyond the disciplined behaviour of computation, I think we can conclude that Turing himself wasn't a computationalist. Logically, thinking is a type of intelligent behaviour, so computation alone, according to Turing, cannot produce thinking. Though cognition and thinking might not be exactly the same thing, I do believe cognition is an intelligent behaviour, so Turing may have believed that cognition is not the same as computation.

    “Now the learning process may be regarded as a search for a form of behaviour which will satisfy the teacher (or some other criterion). Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic. It should be noticed that it is used in the analogous process of evolution. But there the systematic method is not possible. How could one keep track of the different genetical combinations that had been tried, so as to avoid trying them again?” (Turing 60).

    The logic here seems flawed. By saying that the evolutionary process leading to human learning wasn't systematic, he concludes that the learning process for a machine should include a random method. Evolution worked over millions of years, however, allowing the random changes to appear systematic today, and I don't believe it would be expedient to allow the same for the machine. That said, I think Turing meant a random "element," so that the machine could resort to a random process if needed, just as humans can resort to a systematic or a random method of searching for a solution, depending on the context of the problem.
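
    (A toy sketch of the two search methods Turing is comparing -- my illustration, with made-up numbers: when satisfactory solutions are plentiful and scattered, random search does about as well as systematic search, with no bookkeeping of what has already been tried.)

      import random
      random.seed(1)

      N = 1_000_000
      satisfactory = set(random.sample(range(N), 1000))   # many acceptable solutions

      def systematic_search():
          for candidate in range(N):            # try 0, 1, 2, ... in order
              if candidate in satisfactory:
                  return candidate

      def random_search():
          while True:                           # no record kept of past tries
              candidate = random.randrange(N)
              if candidate in satisfactory:
                  return candidate

      print(systematic_search(), random_search())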

  12. “In the process of trying to imitate an adult human mind we are bound to think a good deal about the process which has brought it to the state that it is in. We may notice three components.

    (a) The initial state of the mind, say at birth,
    (b) The education to which it has been subjected,
    (c) Other experience, not to be described as education, to which it has been subjected

    Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.”

    Turing seems to be unaware of the internal neurological changes that underlie learning and general cognitive development. Maya (above) has already mentioned interactive specialization; there is also the maturational account, where, as a child's brain develops, certain neural structures become active, giving rise to corresponding cognitive functions. Finally there's basic skill learning, where neural connections/pathways become honed - this may be the closest to what Turing meant by learning. The point is, in all three accounts of cognitive maturation, the "hardware" changes, and we have deduced that differences in cognitive function follow accordingly. For human cognitive function, we believe that cognition and mind "emerge" from these neurological changes (hardware). Though Emergent Interactionism doesn't really explain much, it allows us not to invoke dualistic explanations for cognition, consciousness, etc.

    However, a machine that undergoes the Turing Test would have the same hardware throughout its life. For learning to occur, its "software" would have to undergo change. Does computationalism necessarily imply dualism, or am I creating an artificial (pardon the pun) delineation here? Furthermore, if it is dualistic (and if we continue to reject dualistic explanations), can Turing's idea tell us anything about how our own minds actually work?

  13. I like the idea of using Laplace's demon to explain how a universal Turing machine operates. You could predict what an individual computer could do; but would a universal machine be the only one able to do this, given the extremeness of being able to predict everything that could happen based on the current state? I understand Laplace in a theoretical sense, but I am confused as to how it would play out with an actual machine.
    I am interested in the argument from Lady Lovelace, the idea that a machine can only do what you tell it to do (or program it to do). What if there was a program you gave to a machine that programmed it to change, or to learn? In that sense it would only be doing what you programmed, but at a certain point the machine would become independent of the program and would no longer only be doing what you tell it to do.
    The example Turing gives in section 7, Learning Machines, about critical mass is interesting to me because of the idea of implicit priming, or the threshold for any stimulus to be sensed in people, especially in the study of attention and phenomena like déjà vu. A person isn't paying enough attention to consciously recognize that they have seen something, so they have the uncanny feeling of having seen it before when they do see it. Because people have this capacity to be unaware that a stimulus has registered (while still experiencing some of its effects), I do not know how far this comparison with a machine can go.

  14. I apologise in advance if the substance of my comment has already been said; I don't have time to read other students' comments this week.

    I disagree completely with Turing's paper. Essentially, he replaced "Can machines think?" with the question "Can machines imitate a human?" I do not think that is what "thinking" means. I will be defending what I understand to be Prof Jefferson's view.

    "Thinking" could mean:

    1. That which is programmable
    --> trivial

    2. That which IS, or RESULTS IN, output that can be interpreted as communication
    i) Do we think that thinking necessarily results in an output?
    ii) Do we think that thinking IS the output?
    iii) And do we think that all such output comes from thinking?
    (these are rhetorical questions - I think the answer to all is clearly No haha.)

    3. Defined vis-a-vis a substance
    a) e.g. what the intellect does
    b) e.g. what a person's brain/soul does
    (the obvious drawback of 3b is that we also daydream, and tend to distinguish it from thinking, so it clearly needs to be more specific)

    4. Mental "languaging"
    --> Obviously, we do not know that the machine is mentally considering things or not, even if it does give out outputs.

    5. The (mental) experience of thinking
    --> To me, this is the correct answer. An output is neither sufficient nor necessary for thinking. Nor is thinking required to produce an output. The way I see it, Turing's example showed the latter statement, not that machines can think. I also think this is the way that people use "thinking" in their day-to-day lives (i.e. referring to a mental experience). The experience of abstraction, or whatever it is, is key. If I were to unwittingly scribble down something that looks like "1. Socrates is a man. 2. All men are mortal. 3. Therefore, Socrates is mortal." in another language, thinking I was merely doodling, I would never say that was the result of (or is) thinking.

    6. A mental experience that is rule-based (as opposed to a sensation, a belief, a desire, daydreaming etc)
    --> I'm not sure what to say in response to this except that this is why people distinguish between "good thinking" and "bad thinking". Good thinking is rule-based, but bad thinking isn’t. Both are still thinking, however. I need to think about this more, since there must be something distinguishing "thinking" from "daydreaming". Perhaps it is: a mental experience using logic; or a mental experience involving universals; or a mental experience involving only universals; or a mental experience of abstraction; or something of that sort.

    Now, Turing says:
    "The original question, "Can machines think?" I believe to be too meaningless to deserve discussion"
    Which is presumably why he came up with the imitation game. But it is not meaningless, merely equivocal. At worst, it is impossible to ever know, which is not the same as saying it is meaningless. (At best, it is meaningful and indeed knowable.)

    Turing thinks that Prof Jefferson's view is susceptible to solipsism. It may very well be, but I don't think that means that we should change the definition of "thinking". He is confusing knowing whether someone else is thinking with thinking itself.

    Lastly, were we to switch his thought experiment around, such that you had to guess whether I were a machine or not, I could certainly fake being a machine. It would just be a really dumb, bad-at-arithmetic machine. I don't think imitation can tell us if something is something else. To me, saying so is akin to saying that, since my falling on your foot makes your foot hurt, I must be a large boulder, because large boulders falling on feet also make them hurt. The essence of a large boulder is not that falling on feet results in them hurting. Nor is the essence of thinking what it results in or, indeed, that someone else thinks that you are thinking.

    1. There's a word limit!! Ahh. I wanted to say this too:

      I also find his position on the theological objection too simplistic, both in the way he framed it and consequently his response. A "soul" can be distinguished from an "intellect" and from an "immortal substance". I think Aquinas' view, for instance, on the intellect is really well thought out and Turing's one-line summary of it is just very unfair and misleading.

      (Also, this is false: "How do Christians regard the Moslem view that women have no souls?" since Muslims don't believe that.)

  15. In thinking of a machine that can do everything we can do, should we not also be considering that it should not be able to do things that we cannot do? For example, if you asked a human to multiply 38690485 by 849058320 and then divide the answer by 4925839, they (save maybe the odd savant) would not be able to give the answer from their head within 5 seconds. A computer, however, could. In the TT, this would be a huge give-away that the computer was a computer if it could solve this. So it would have to be programmed not to perform anything that a regular human being could not. This also has implications for cognitive modeling, though. Does the computer's ability to perform tasks far outside the realm of human ability mean that it thinks differently than we do? I would argue that it does, and that it's not enough to consider only everything we can do when building a human machine, but also everything we cannot do.

    1. I'm not quite sure I agree with this. Given pen and paper and adequate time, the average human would be able to perform the computation; thus, a computer should be able to too. In order to fool a human, it could perhaps be programmed to give the answer after a delay; however, time is not the issue. Because this is all theoretical, there is no need to look into the time it would take each system (human or computer) to perform the calculation, but rather their ability to do it in the first place. It's a complicated calculation that invites human error while solving it (errors which a computer wouldn't make); however, the issue is once again with the doing and not with the doing correctly.
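
      (For concreteness -- my sketch, using Turing's own remedy: his specimen dialogue has the machine "pause about 30 seconds" and even give a wrong answer to an arithmetic question. The pause length and wording below are made up.)

        # The calculation above, exact and effectively instantaneous:
        print(38690485 * 849058320 // 4925839)   # integer part of the quotient

        import random, time

        def humanlike_answer():
            """Toy delay-and-hedge response, masking machine speed."""
            time.sleep(random.uniform(20, 40))   # a human-scale pause
            return "somewhere around 6.7 billion, I think?"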

  16. ''The game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether someone really understands something or has learnt it in ''parrot fashion''... What would Professor Jefferson say if the sonnet-writing machine was able to answer like this in viva voce?''

    I was particularly intrigued by this approach used by Turing, who seems to argue that the imitation game can still be valid test for human cognition despite the mystery of consciousness. Indeed, if the machine could provide answers such as this, there would be no way for the interrogator to know that it was not a human replying.

    I would argue that the only reason the machine's answers would seem thoughtful is the interpretation of the interrogator, who would automatically place human emotion into the words uttered. Even when reading the words in my head, I subconsciously added a certain tone of voice. Written language is merely symbols on a page; it is humans who add the meaning and interpretations. Even if the computer answered in a very bizarre way, I'm sure people would find a way to interpret it in a coherent manner. Therefore, is it not the human doing the work to make this a meaningful interaction, rather than the computer?

    However, this leads into the 'other minds problem', because isn't this what we are doing every day when we make conversation? Assigning meaning to the other person's words? Unless we are sure other humans are thinking as we are, a computer mimicking a human will not tell us anything about our own cognition.

  17. In the section of Turing's paper entitled "Learning Machines", he makes two "recitations tending to produce belief", and maybe I don't understand what he's trying to say, but I didn't find his recitations convincing at all.

    In the second paragraph he makes a comparison of the mind to "atomic piles of subcritical [and critical] size". I found this to be a bizarre comparison to make, and one which adds nothing to make the criteria of the Turing Test more convincing as criteria for cognition. He speaks about ideas presented to subcritical minds and critical minds which may give rise to "one idea in reply" or "a whole theory" respectively. It seems strange that he would muddy the already unclear definition of cognition by equating it to some level of intelligence, or originality or intuition. He goes on to claim that animal minds are subcritical. Is he trying to say that animals are not cognizing or that they are simply "unintelligent", whatever that might mean? If it's the former, then that is in direct contradiction to our more rigorously vetted definition of cognition, and if it's the latter, then it is somewhat irrelevant to the current discussion.

    He goes on to say that "the skin-of-an-onion analogy is also helpful". I found it to be quite the opposite of helpful. He said that we should examine the brain and strip away those operations which can be "explained in purely mechanical terms". It is not clear what he means by this. Is he discussing dynamics? Is he saying that we should strip away dynamics until we are left with only computation, which he argues is at the heart of cognition? He goes on to say that by stripping away these layers, we will either come to the "real mind" or "a skin which has nothing in it." What does this real mind consist of in the former case? In the latter case he says that the "whole mind [would then be] mechanical". Is he saying that in this case the mind is a purely dynamic system, contradicting his previous viewpoint? Or is he saying that the mind is a continuous state machine and that cognition is entirely computational? If so, what layers were we stripping away in the first place? Computational layers?

    I found this part of the "Learning Machines" section to be more confusing than helpful; I wonder why it was included and wish it had been explained more clearly.

  20. It’s helpful that Turing’s paper focused on the potential criticisms that his audience could have and thus a lot of my questions about machines and the Turing test were answered in the “objections” section. However, I had trouble with Lovelace’s objection and I’m not really sure how I feel about it:

    “The analytical engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”

    What exactly do "pretensions" and "originate" refer to in this argument? If a machine were, hypothetically, modelled to "invent" things, would this not be a pretension to originate something? Does the machine's software count as "pretensions"? This is where I began to think a bit about motivation. A machine, arguably, has no motivation to do any of the tasks it is programmed to do, and no motivation to exist. In the Turing test, a human could email a machine and the machine would respond, but machines don't really care about getting an answer back. The Turing test also seems to be question-based (human asks, machine answers), but to be indistinguishable from a human, would the machine not have to provide information without being prompted? Aspects of motivation could be programmed into a machine and simulated (e.g. the machine keeps emailing you when you don't respond, with increasing "aggression" over being ignored), but if the emotional need and motivation for human contact don't really exist, then what can we call it? Motivation to survive is maybe THE driving force that has continued our species, and it applies to everything we do. Could this will to live be programmed into a machine in such a way that it would apply to any situation?

    In the same vein, the idea of the machine as a child who is moulded by education and operant conditioning is helpful. I think I'm just much more comfortable comparing a machine to a human when it goes through the typical human stages of growing up. However, the usage of operant conditioning is a little confusing for me here. Turing referred a lot to reward and punishment. What is rewarding for a machine, and how can you punish it? If I push the red button when the machine makes an error, how do I program that machine to associate the red button with the moral judgment of "wrong"? And even if I can, why would that matter to the machine if it has no motivation to receive praise? Maybe I'm looking at this the wrong way.
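
    One mechanical reading of the red button, sketched as a toy program (my own hypothetical example, though it follows Turing's description of the punishment signal): the button need not "mean" wrong to the machine at all; it is simply a signal that lowers the probability of repeating whatever the machine just did, and no desire for praise is required.

        import random

        # Toy "child machine": each action has a weight; a reward signal raises
        # the weight of the action just taken, a punishment signal lowers it.
        weights = {"say_yes": 1.0, "say_no": 1.0, "say_maybe": 1.0}

        def choose_action():
            r = random.uniform(0, sum(weights.values()))
            for action, w in weights.items():
                r -= w
                if r <= 0:
                    return action
            return action

        def signal(last_action, punished):
            if punished:                  # the red button: less likely to repeat
                weights[last_action] = max(0.1, weights[last_action] * 0.5)
            else:                         # the reward: more likely to repeat
                weights[last_action] = min(10.0, weights[last_action] * 1.5)

        # Punish "say_maybe" whenever it occurs: it soon becomes rare.
        for _ in range(100):
            a = choose_action()
            signal(a, punished=(a == "say_maybe"))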

  21. "A variant of Lady Lovelace's objection states that a machine can "never do anything really new." This may be parried for a moment with the saw, "There is nothing new under the sun." Who can be certain that "original work" that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles. A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. Perhaps I say to myself, "I suppose the voltage here ought to be the same as there: anyway let's assume it is." Naturally I am often wrong, and the result is a surprise for me for by the time the experiment is done these assumptions have been forgotten. These admissions lay me open to lectures on the subject of my vicious ways, but do not throw any doubt on my credibility when I testify to the surprises I experience."

    In the above passage, I feel Turing isn't addressing the meaning of "surprise" the way Lovelace seemed to intend it. Lovelace wrote that a machine can "never do anything new", and therefore that it cannot take us by surprise. So surprise here seems to point to an innovative "action" (the term broadly referring to any output a machine could make): some non-programmed, original computation that would have no reason to occur outside of what the machine would have been pre-programmed to do.
    On the other hand, when giving an example of a time a machine did surprise him, Turing uses the case of measurement. He says he assumed a voltage measurement, forgot he did so, and then was surprised when the machine yielded a different measurement than the one he had guessed at. I don't feel this warrants the use of the term "surprise". First of all, it seems unlikely that one would simply forget an assumption like that. Second, there is no rational reason to make that kind of assumption in the first place. As for being surprised by the machine measuring exactly what it was programmed to measure, that is hardly "original work". What Turing terms "surprise" is the result of error or carelessness in his own measurement. That surprise comes from him, not from the machine.
    I feel that not only is Turing's reasoning behind his surprise flawed, but even were it right, it would still be a different type of surprise than the one addressed by Lovelace. Therefore I don't believe his argument carries much weight against the claim that "a machine can never do anything new." I would even lean towards Lovelace's opinion on the matter.

  22. "The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter. Nevertheless, 'the subject matter of a machine's operations' does seem to mean something, at least to the people who deal with it. If, for instance, the machine was trying to find a solution of the equation x2 - 40x - 11 = 0 one would be tempted to describe this equation as part of the machine's subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure. By observing the results of its own behaviour it can modify its own programmes so as to achieve some purpose more effectively. These are possibilities of the near future, rather than Utopian dreams."

    I think this passage is interesting in a few ways. Firstly, I find it intriguing that the machine can have its own subject matter only "at that moment"; that seems contradictory to what we think of as human subjectivity, but that really opens another can of worms altogether, because human identity and subjectivity are rather vague. What I do know, though, is that it's not as if a human (of normal cognitive capacity) goes through life as different subjects according to the task at hand... the subject stays the same, bar developmental differences over the years, and these are still the same subject (we even agree on that when it comes to the law). How does a machine then establish continuity of its subjecthood if it can theoretically be programmed to act exactly like any other machine?

    Another idea I find interesting is that the machine can "modify its own programmes so as to achieve some purpose more effectively." I'm a bit unclear as to how exactly this happens: does there have to be some outside force writing this into the program, or is it simply created through the very procedure the machine is executing? I also have a hard time thinking of the machine as really evaluating its own programs, because how can the machine meaningfully consider a value spectrum if it's based on a code of procedures? Does that mean it has to have a code for each possible situation that might need to be evaluated?
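
    A minimal sketch of how "modifying its own programme" can work without any outside force, assuming only that an evaluation criterion is built in (the example is mine, not Turing's): the "programme" below is a single parameter that the machine rewrites by observing the results of its own behaviour.

        # The machine's built-in "purpose": make its output reach a target value.
        target = 42.0

        def run_programme(parameter):
            return parameter * 2.0        # the machine's current behaviour

        parameter = 1.0                   # the part of the programme it may rewrite
        for step in range(50):
            output = run_programme(parameter)
            error = target - output       # observe the result of its own behaviour
            parameter += 0.1 * error      # modify the programme accordingly

        # parameter converges to 21.0, so the output converges to the target;
        # the "value spectrum" here is nothing more than the built-in error score.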

  24. This article is very interesting. After reading it, I tried to discern how Turing chose to operationalize the term 'think' in order to go from the question "do machines think?" to the alternative form he provides. It seems he is not testing whether a machine can think, but rather whether a machine can be programmed to compute in a way that effectively mimics human reasoning or 'thinking'. All of his arguments support the question when phrased that way; however, I would argue that the two versions of the question are completely different. To select a similar question rather than operationalize the variables in the initial question means that Turing is ignoring the base issue and unknown: what exactly is thinking?

    If we assume that thinking, extracting its definition from common use, involves conscious cognition, then we end up with an even more ambiguous and ill-defined concept. The imitation game tests for a very specific kind of thinking. If we assume that 'thinking', as a concept, refers only to those thoughts that have to do with logical reasoning (some may go as far as calling this 'computation'), then Turing adequately argues that a machine can replicate this. However, does this mean that feelings and their by-products (by-products that could also be thoughts themselves) are not considered 'thinking'? The imitation game controls for the interrogator's ability to discern emotion through tone of voice, but no argument is made for a machine being able to imitate thoughts that are affected by feelings. An example of this is stereotype activation: psychological testing has shown that activating the stereotype 'women are bad at math' results in female participants performing worse on a math test than a control group of female participants. This is not something that could be replicated with a machine, even a learning machine, because the negative stereotype has its impact through repeated negative feeling over time. There may be some questions the interrogator could ask that would trigger such things, learned in one gender or another through repeated feeling over time, shifting the probability of responding in one direction or another.

    This situation raises the question of whether or not we can reliably replicate thought. On one hand, the arguments are valid, and should a machine be created that passes the imitation game, you would be tempted to say yes, machines can think (in terms of the definition above). On the other hand, can this machine ever be created, if thought cannot truly be separated from the influence of feeling, and if feeling cannot be replicated in a symbol-manipulating machine?

  25. The imitation game suggests that a machine can 'think' if it can answer questions like a human. Does it also consider times when a machine 'thinks' better than a human?

    For instance, when a computer answers an extremely difficult math question in record time, it fails to answer the question like a human would since a human would take more time.

    (I may be thinking too far into the future with the following.) Pondering the answer to my own question: one could say a computer can win the imitation game if it is able to *present* its answers like a human despite being able to compute them faster than one. A computer should be smart enough to adapt itself to the limitations of the human brain if it wants to win the imitation game.

    Along these lines, we know that humans are prone to making "human errors". Are we expecting this from a machine as well when it's playing the imitation game?

    ——————

    "If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table. Constructing instruction tables is usually described as 'programming.'"

    "Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?"

    This summarizes computer science/AI in a wonderful way!

    If we assume that every complex action in the world can be broken down into simpler blocks of condition-action statements, we can say that it is possible to break down the functioning of the human brain into simpler substructures, which could be replicated through very, very detailed instruction tables, and hence to create a machine that can imitate human thinking.
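
    As a minimal sketch of what such an instruction table could look like (a hypothetical toy of my own, not taken from the paper): "programming", in the sense quoted above, is just filling in entries that map a condition to an action and a next state.

        # Each rule: (current state, input) -> (action, next state).
        instruction_table = {
            ("waiting", "greeting"):  ("reply_hello", "chatting"),
            ("chatting", "question"): ("give_answer", "chatting"),
            ("chatting", "goodbye"):  ("reply_bye",   "waiting"),
        }

        def step(state, observed_input):
            action, next_state = instruction_table[(state, observed_input)]
            return action, next_state

        # e.g. step("waiting", "greeting") -> ("reply_hello", "chatting")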

    ——————

    I found ‘6.4 Contrary Views: The Argument from Consciousness’ very interesting!

    I think we are a long way from incorporating self-awareness and emotions in a machine and there are ways to imitate humans without these inclusions. As Turing said: “But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

    I don't see the point of '6.9 Contrary Views: The Argument from Extrasensory Perception'. If extrasensory perception is not considered to be human, why is it even being considered in the imitation of humans? Why is this argument a "strong one"?

    ——————

    Afterthoughts:

    Why do we need to use humans as a standard of comparison for computerized ‘thinking’?

    Can ’thinking’ be created in a different way?

    A professor of mine (Joelle Pineau) mentioned something along these lines in class a week ago: for years, people tried and failed to imitate birds, until the Wright Brothers came along, decided to fly rather than imitate birds, and invented the flying machine.

    Similarly, to achieve ‘thinking’ do we need to imitate humans or can we do so in other ways?

  26. “The claim that "machines cannot make mistakes" seems a curious one. One is tempted to retort, "Are they any the worse for that?" But let us adopt a more sympathetic attitude, and try to see what is really meant. I think this criticism can be explained in terms of the imitation game. It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.”

    I find that the explanation outlined above is tailored too much to the context of Turing's test and neglects some fundamental workings of the human mind. The machine Turing describes is not actually making mistakes in the sense that the argument requires. Yes, the machine is giving wrong answers, but it is doing so purposefully, which would not be classified as a mistake but rather as a manipulation of the answer to trick an observer. When a human makes a mistake, they often genuinely think they are right and are expressing that answer not to trick an observer, but to attempt to answer the question correctly. One may say that the mechanisms by which a mistake is made are irrelevant when discussing the Turing test, but they become relevant when considering whether or not a machine can think, and the Turing test is meant to substitute for that question. Mistakes shape the way people adapt to and think about the world, and play a role in thinking about self-worth; thus, if a machine were to think, it should make mistakes in the proper sense. Although I disagree with this explanation, I am not suggesting that machines cannot make mistakes; I am simply pointing out the flaws in the argument. Acknowledging these flaws also points to discrepancies in the relationship between the Turing test and the question of whether machines can think.
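
    The distinction can be made concrete with a toy sketch (my own hypothetical illustration): the machine's "mistake" is a deliberate corruption of an answer it knows to be correct, whereas a genuine mistake is an erroneous process with no knowledge that the result is wrong.

        import random

        def deliberate_mistake(a, b):
            """Compute the right answer, then corrupt it on purpose, 'in a
            manner calculated to confuse the interrogator'."""
            correct = a + b
            return correct + random.choice([-10, -1, 1, 10])

        def genuine_style_mistake(a, b):
            """A crude model of a real human error: the process itself goes
            wrong (the carry from the ones column is forgotten)."""
            return (a // 10 + b // 10) * 10 + (a % 10 + b % 10) % 10

        # genuine_style_mistake(17, 25) -> 32 instead of 42: wrong, but not
        # wrong on purpose.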

  27. “May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”

    If I understand this correctly, Turing is making a claim about weak equivalence, in which the input/output behaviours are all that matters (T2) when evaluating whether or not machines can "think." However, although this is an important starting point, once Turing redefines the question as "what will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?", it loses its original purpose of figuring out whether machines can think. This is partly because thinking is inherently an internal process, whereas the imitation game focuses on verbal input/output behaviours only. In this sense, passing the imitation game becomes essentially a test of whether the digital computer can output verbal behaviour like a human, "so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." But this is not an adequate answer, as passing the imitation game does not mean that machines can think; it's almost as if thinking gets reduced to external outputs without any meaning. If the ultimate goal of the TT is to assess the 'how' of cognition, then it falls short, as it does not resolve how the machine reaches its conclusions.

    “By definition they are incapable of errors of functioning. In this sense we can truly say that ‘machines can never make mistakes.’ Errors of conclusion can only arise when some meaning is attached to the output signals from the machine.”

    I understand that in order to play the imitation game successfully the machine would use certain techniques, such as the calculated introduction of mistakes, to mislead the interrogator into thinking it was a human. The machine could be programmed to this degree; however, the question remains whether the machine would 'know' to do this. Rather, it seems more plausible that the machine would learn from the massive amount of data collected during previous trials of the imitation game, and that it would eventually figure out how to predict what to do. Also, it is clear that errors of functioning can be disregarded as unimportant in this case. That being said, errors of conclusion do not seem trivial. As we are only dealing with T2, I was under the impression that meaning does not come into the picture, because T2 is only assessing the cognitive performance capacity of the machine. Thus, even moving to the T3 level, although (as Harnad says) the symbols are grounded in dynamics and sensorimotor action, there still would be no capacity to understand meaning; so would errors of conclusion occur fairly frequently if this were the case?

  29. It was interesting to finally read Turing’s paper Computing Machinery and Intelligence after having read so many controversial claims about his work, specifically about the goals of the imitation game. I now understand that the game is not asking whether or not a machine can “think” but rather if a machine can be behaviorally indistinguishable from a human.

    Firstly, although I have read many opposing views stating that the TT is not about trickery and deception, I am still having a hard time accepting that. In the section on Arguments from Various Disabilities, Turing writes, "The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator." Of course, if a machine were to answer the interrogator's question about some problem such as "What is the square of 39 multiplied by 392810?" with 100% accuracy, it would be a dead giveaway that it is not human. However, being programmed to give incorrect answers to such questions, and even answering yes to questions such as "do you like the taste of strawberries and cream?" when that experience has never existed, all seems like deliberate artifice. So could we say that intelligence involves some form of deception? Can it also be proposed that this game is measuring the level of human gullibility, if in fact the machine is engaging in deception?

    Further, the last section on Learning Machines was quite fascinating and led me to ponder some other questions. Namely, if we program this sort of "child computer" and then allow it to undergo an educational process, can it eventually do things that we are not able to do (i.e. simulate the brain)? Could we allow its own development and not force it to pretend to be someone, or rather something, that it is not (i.e. as when playing the imitation game)? It is interesting to note Google's DeepMind project on neural Turing machines, in which they attempt to simulate human short-term memory using neural networks and external memory. Their aim is to "solve intelligence" by using new deep learning algorithms, or "deep reinforcement learning" (I'm not very familiar with how it works). In essence, though, the change in output can be quantified as a function of changes in the input, similar to how neural connections are strengthened or weakened. These learning algorithms are being tested on the machine's ability to learn and improve at playing video games (without being preprogrammed, instead learning from experience).
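
    The "strengthened or weakened" idea can be illustrated with a one-neuron sketch (a generic delta-rule toy of my own, not DeepMind's actual algorithm): each connection weight is nudged in proportion to the output error, so changes in output are tied to changes in input.

        # A single linear neuron trained with the delta rule.
        weights = [0.0, 0.0]

        def predict(x):
            return sum(w * xi for w, xi in zip(weights, x))

        training_data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]

        for epoch in range(200):
            for x, target in training_data:
                error = target - predict(x)
                for i in range(len(weights)):
                    weights[i] += 0.05 * error * x[i]   # strengthen or weaken

        # weights converge to roughly [2.0, -1.0], fitting all three examples.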

    Does this go to show that we have come a long way from traditional machine learning, if the machine still does not understand what operations it is performing? Turing suggested beginning with "the playing of chess" as an intellectual field in which machines could compete against men. Now that IBM's Deep Blue has accomplished that, which area should be tackled next? Perhaps understanding the workings of short-term memory with Google's DeepMind project will be a step in the right direction.

  30. "Finally, we wish to exclude from the machines men born in the usual manner.”
    What does this mean, exactly? That we do not consider humans to be machines?

    "One might for instance insist that the team of engineers should be all of one sex, but this would not really be satisfactory, for it is probably possible to rear a complete individual from a single cell of the skin (say) of a man. To do so would be a feat of biological technique deserving of the very highest praise, but we would not be inclined to regard it as a case of "constructing a thinking machine.”
    What would be accomplished by choosing a team of engineers all of the same sex and how is it at all related to the sentences that follow?

    Either I do not understand Turing’s argument against the argument from consciousness or he simply does not do a satisfactory job of dismissing it. It seems to me that the issues raised by the argument from consciousness are valid and were not adequately addressed in the section.

    "...It seems to me that this criticism depends on a confusion between two kinds of mistake. We may call them "errors of functioning" and "errors of conclusion." Errors of functioning are due to some mechanical or electrical fault which causes the machine to behave otherwise than it was designed to do."

    Here Turing indirectly touches on what I think is another major roadblock people experience when attempting to accept the idea of a thinking machine: namely, its lack of free will. If a computer "behaving otherwise than it was DESIGNED TO DO" is considered a mistake, a glitch, then how can we value anything it does as more than a feat by its designer? If a computer is programmed well enough to fool a human into believing it is itself a human, by giving outputs carefully constructed ahead of time by some other human, isn't that simply human intelligence being deferred until the moment the machine is called to act? Isn't the computer acting more as a proxy, or even a time capsule, than as a thinking being?

    "Unfortunately the statistical evidence, at least for telepathy, is overwhelming."
    OH?!?
    I find it intriguing that, out of all the arguments Turing considers and dismisses, ESP is the one he seems to take most seriously as a challenge, given that its absurdity should negate whatever obstacle it may pose, however large.

    "The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it”

    Behaviorist computation?

  31. In section 5 (Arguments from Various Disabilities), Turing explains that the claim that "machines cannot make mistakes" isn't enough to refute that machines can think: because machines can pretend to make mistakes, an observer is not able to distinguish between a machine and a man on this basis. Conversely, what if a human were incapable of certain qualities characteristic of humans? For example, if someone with severe autism is unable to detect emotion in other humans, or a psychopath is unable to feel certain emotions, the interrogator may again be unable to distinguish between a machine and a man. Therefore, the claim that disabilities or impairments provide an answer, or even evidence, in the imitation game seems weak.

  32. This article was very interesting because it provided an in-depth analysis relating to a movie that I enjoyed. I think one of the most notable observations was that the principal question, 'can a machine think?', actually means whether or not a machine is able to mimic a human's behaviour. In my opinion, these are two completely different questions. Although the article addresses the meaning of 'can a machine think' and even clarifies the terms with precise definitions, these definitions are not representative of the actual question at hand. Therefore I think it is safe to set aside the original question and focus on whether or not the machine is able to mimic human behaviour. But then, why do we have to use humans for comparison? Also, if we have these two completely different questions, then we are left with the ambiguity of what thinking is. The imitation game focuses on the computational form, but I still think the question is unclear, can be interpreted in many ways, and hence can result in many different answers. Perhaps thinking can be thought of in other ways, aside from computation.

  33. I find it strange that Turing chose to use a party trick as what came to be known as the 'Turing test'. After reading Harnad's article, I see he has acknowledged this as an error on Turing's part. I know this isn't important, but it makes me curious as to why Turing wouldn't simply design a scientific experiment. Maybe he was trying to make it more accessible/relatable to the general public? More 'fun'??

    Turing's definition of machines is interesting as well. In the article he says: "To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible... From this it is argued that we cannot be machines... 'if each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.'"
    I think this is interesting, because it is the common view most people have, yet humans are very much regulated by rules of conduct most of the time. These are often social rules which are enforced by society and which we learn from infancy. And linking this back to computation: regardless of what people generally want to believe, we are to some extent programmed by these social rules to produce certain outputs given certain inputs.

  34. Turing quotes Professor Jefferson "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

    Turing says that according to this view "the only way to know that a man thinks is to be that particular man." So, the only way to know cognition would be through introspection. But we already know that introspection is entirely unhelpful in uncovering cognition, because we run into the other-minds problem.

    Replies
    1. Hi Levi!

      I wanted to respond to the point you made about Turing's intention:
      "Turing says that according to this view "the only way to know that a man thinks is to be that particular man." So, the only way to know cognition is through introspection."

      I would argue that Turing would not make such a silly statement. I believe that he was referencing the other-minds problem to refute Jefferson's gripe that a machine cannot be cognizing unless it can prove that it is feeling. Turing is not saying that cognition is inaccessible except by introspection, but instead that knowing whether another individual feels is completely impenetrable because of the other-minds problem. He's saying that Jefferson's gripe is meaningless, since the only measure of cognition we can have is Turing Test performance capacity. We'll never be able to assess whether the robot (or anyone else, for that matter) truly feels.

  35. I, like many others commenting on this paper, find Turing's choice to switch out the question "can machines think?" for a question about what will happen when a machine takes part in the imitation game to be misguided. Turing appears to take a starkly behaviourist view of the notion of "thinking" by equating the two questions. Furthermore, near the end of the paper, when speaking about learning machines, Turing says:

    "We normally associate punishments and rewards with the teaching process. Some simple child machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment signal are unlikely to be repeated, whereas a reward signal increased the probability of repetition of the events which led up to it. These definitions do not presuppose any feelings on the part of the machine... (etc)"

    This quote further solidifies his behaviouristic ideas about cognition. Since, through class discussion and the previous paper by Professor Harnad, we have come to the conclusion that behaviourism was begging the question of "how?", it seems unlikely that a test such as the Turing Test, which seems to be founded on a viewpoint similar to behaviourism, would be able to provide an accurate test of a computer's capacity to "think." Though he does not explicitly define the word "thinking," he does mention in the above quote that he is not assuming the machine can feel. However, echoing what Professor Harnad said earlier, there is "something it feels like" to think, so can we really just exclude feelings when saying that something has the ability to think?

  37. “We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.”

    I find this an interesting argument, but I’m not sure I’m sold on Turing’s suggestion that intellectual people are more likely to employ a theological framework when conceptualizing intelligence and thought. I agree that intellectual people probably value the power of thinking more than less intellectually-inclined individuals (maybe just because the former are more aware of the scope of human thought? What designates someone as an “intellectual person” in the first place?) but I do not think this means they are more inclined to base their “belief in the superiority of Man” on theological explanations. To me it seems that the intellectual would be more apt to agree with the reasoning used in The Mathematical Objection or Lady Lovelace’s Objection.

    As I read this paper I kept noticing Turing's tendency to dismiss certain stances or criticisms with some kind of general or opinion-based refutation. In some cases it seemed like he was shooting down an idea without much substantiating evidence to the contrary and/or without fully exploring the argument he was disagreeing with. A few examples:

    “I do not think this argument is sufficiently substantial to require refutation.”

    “Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks… I am sure that Professor Jefferson does not wish to adopt the extreme and solipsist point of view. Probably, he would be quite willing to accept the imitation game as a test.”

    “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.”

    “I am not very impressed with theological arguments whatever they may have been used to support.”

    On one hand, it makes sense that some of these statements are necessary to maintain the brevity of the paper, and Turing makes excellent points. Many of his arguments deeply and thoroughly explore the core criticism he is analyzing. On the other hand, I thought these comments came across as laissez-faire objections that made the foundation of his counterarguments weaker. I couldn’t help but notice what seemed like a vacillation between the systematic dismantling of arguments using logic and reasoning versus the blatant dismissal of other ideas without much explanation as to why they were deemed incorrect… I’m probably overthinking this though!

    The excerpt from Professor Jefferson’s Lister Oration for 1949 is brilliant. I love the following line:

    “No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."

    In my opinion, this could sum up the entire Argument from Consciousness.

  38. ~Argument from consciousness~

    I think Turing approached this argument wrongly. The first part of the argument is that no mechanism can feel. The second part is that the only way to know if the first part is true is to actually be the mechanism feeling. If that is the only way, then Turing can counter the objection in like 2 seconds with the Other Minds Problem. Seems pretty convenient for Turing, but I think there is definitely a way to know feeling happened: DEATH. The only way to know that something in the world is feeling is for it to stop functioning forever. If you can take it apart and put it back together again, then it was not a feeling thing. Life is more than the sum total of its parts, more than every single particle in the universe put together. Life is meaning. And this does not actually deny the validity of the Turing Test in the sense that if a robot could simulate all human behavior, then it feels. SO CAN A ROBOT DIE?????

    Replies
    1. Hi Nicholas,

      “I think there is definitely a way to know feeling happened: DEATH. The only way to know something in the world is feeling is for it to stop functioning forever.”

      I’m not sure what you are getting at when you suggest that being capable of dying is the ultimate test for whether or not a robot is conscious. Surely to die is merely to lose your performance capacity. Yet the Other Minds Problem prevents us from inferring consciousness from performance capacity.

      ”If you can take it apart and put it back together again, then it was not a feeling thing. Life is more than the sum total of its parts, more than every single particle in the universe put together.”

      These assertions are not universally accepted, and surely they cannot be tested. Death is scientifically understood based on the difficulty of reestablishing the complex homeostasis of the body once it has been thrown off sufficiently. This is not beyond matter, but a property of the way matter is arranged in human beings, or sheep, or nematode worms. At any rate, such intuitions about the nature of consciousness cannot function as the basis of a sound scientific test for consciousness, since they could only be demonstrated by already solving the problem at hand. You effectively beg the question by assuming that you know the answer to the problem of consciousness—that consciousness is life(?)—and then using that answer to solve the problem.

  39. During the TT, if the interrogator asks a difficult mathematical question, then in order to imitate a human the machine will make a mistake. Or, if the interrogator asks whether the taste of a pastry dish is enjoyable, again the machine can reply 'yes' or 'no' (another imitation of a human, since machines do not have the ability to perceive taste). I am curious how the machine 'knows' to make these mistakes for certain questions but not others. Do the machines take information from previous TT scripts? Furthermore, in the case of Renuka and Rinona, who are programmed to imitate humans for much longer than the duration of a TT, how do these machines know what a 'normal' amount of mistakes is? Or are Renuka and Rinona not concerned with being perceived as human, and thus not programmed to make any mistakes (unless they were to undergo a TT)?

  40. There are several elements that Turing brings up briefly that I believe would benefit from further discussion.
    In section 4 he expresses that it is unnecessary to solve the mysteries of consciousness in order to answer the "can machines think?" question. This implies that consciousness in its entirety is not required for thinking. He separates the two faculties, but I am not sure what he would consider to be the dividing difference between thinking and consciousness. It was previously discussed in class that thinking is cognizing is consciousness, but it seems as though Turing would disagree with this statement. Unfortunately he does not elaborate on this argument.

    He also addresses the tendency to discount a machine's ability to think whenever the digital machine makes an error, "prov[ing] a disability of machines to which the human intellect is not subject." Yet humans make many errors all the time, and computers can do many things that we cannot (e.g. arithmetic). He uses the human need for superiority as an explanation for the unreasonable expectation that, for a machine to think with the same level of sophistication as a human mind, it must be able to match all human skills. This reminds us that asking whether a machine can think is not asking whether a machine can think "like a human," and that different thinking can still be a sophisticated and sufficiently complex type of thinking.

  41. I just returned to this page for the midterm and wanted to touch on something that seems like a common issue that people have: Turing's switch from 'Can machines think' to 'can machines pass the imitation game'. It seems as though many people find this transition unsatisfying when it comes to actually answering the initial question, but I think this discontent may be based on a misconception.

    I could be completely wrong in my interpretation here (and maybe this issue has by now been resolved for people), but never in the paper did I feel Turing claimed that a TT-passing machine would in fact think. The way I read his framework was not "if a machine passes the test, it is thinking", but rather "if a machine passes the test ('wins the game'), we have no way to say that it is not thinking." I feel this distinction would resolve a number of the issues that previous skywritings have expressed on this post, including the issue of feeling. Obviously, having read Searle's CRA, The Symbol Grounding Problem, etc., I believe most of us agree that a T3 robot would be the only way to successfully pass the TT; however, if for the benefit of a thought experiment we imagine that a computer alone could 'win the imitation game', I still believe the distinction I mentioned above resolves the issue of feeling. All that the game/TT will ever tell us is that we cannot say the TT-passing system is not feeling, since (thanks to the Other Minds Problem) we can't ever know with certainty that anyone other than ourselves feels.

    Replies
    1. I completely agree with you. I think that most of the time we get really caught up with word choice, saying thinking and feeling are only human capacities and that's that. But the issue at hand is usually a question that is much more specific and nuanced than that. I appreciate that you pointed that out, because most of the objections I find myself raising to people like Turing come from my tendency to assume that for computers to think or feel they have to be human in some way; it's a good reminder that the question isn't that. It's about passing a specific test, and about whether we can show that the machine is not doing something.

  42. To what extent do we need to stretch the meanings of 'think' and 'understand' to suit the needs of a candidate for the Turing Test? Even if the candidate is a sensorimotor-grounded robot, if the underlying structure of the 'thinking stuff' is not based on the same principles as our thinking stuff (a T3 rather than a T4), then could we not assume that whatever is going on in the robot will be perhaps a close approximation, but never quite the same kind of thinking stuff?

  43. “It will seem that given the initial state of the machine and the input signals it is always possible to predict all future states”
    It is possible to predict the future states of a digital machine, because each state is discrete and determined by its internal rule table. But we can't translate this to brain states, since the brain is constrained by dynamic properties. So predictability gives us an advantage in the study of digital machines, but this isn't translatable to the study of the brain. Can we say that brain states are discrete states?
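
    Turing's predictability claim can be made concrete with a toy discrete-state machine (my own stand-in, loosely modelled on his clicking-wheel example): given the initial state and the input signals, every future state follows by mere table lookup.

        # Transition table: (state, input signal) -> next state.
        transition = {
            ("q1", "i0"): "q2", ("q2", "i0"): "q3", ("q3", "i0"): "q1",
            ("q1", "i1"): "q1", ("q2", "i1"): "q2", ("q3", "i1"): "q3",
        }

        def predict_states(initial_state, inputs):
            state, history = initial_state, [initial_state]
            for s in inputs:
                state = transition[(state, s)]
                history.append(state)
            return history

        # predict_states("q1", ["i0", "i0", "i1", "i0"])
        # -> ["q1", "q2", "q3", "q3", "q1"]   (fully determined in advance)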
    "According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man."
    This again refers to the other-minds problem. Suppose you could become that machine for a moment and then report what it felt like to be that machine. The problem is that the only thing you can be sure of is your own feelings. So you might think you were experiencing the machine's feelings, but this would in fact only be you feeling that you were feeling the machine's mental states.

  44. The biggest issue I've been having with the imitation game is the idea of time, or rather the length of time a person can email/communicate with a machine before the human catches on. Turing says that there are ways for the machine to deflect certain questions or requests, such as "can you please write a poem about the leaves falling in autumn," to which the machine replies, "oh no, you would not want to read a poem I've written." Yes, this method works to a degree, but when two "people" talk for a very long time, a certain relationship forms on the basis of trust, and the human will nevertheless share personal information and vulnerabilities which the machine cannot reciprocate. I guess this issue raises the question: is there a time limit to T2?

    In section 7 of the paper, it is claimed that "It is true that a discrete-state machine must be different from a continuous machine. But if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference" (15). The imitation game is held up as the ultimate Turing test, and Turing tests are supposed to be hardware-independent. So is the passing of time considered hardware?
