Saturday, 2 January 2016

(1a. Comment Overflow) (50+)

  1. “We’ll call this the principle of behavioral equivalence: if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used. Behavioral equivalence is absolutely central to the modern notion of computation: if we replace one computational system with another that has the “same” behavior, the computation will still “work.”” (Horswill, 3)

    I just want to clarify this notion of behavioural equivalence for myself with a different example from the one on pages 2-3 of Horswill. Suppose a group of individuals learn in different manners (visual, kinesthetic, etc.) and are asked to perform some cognitive task (e.g., solve a problem). Even though they may encode and retrieve information differently and/or perform different sub-tasks along the way, they can all arrive at the same output answer.
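    A minimal sketch of this reading of behavioural equivalence (my own toy example, not Horswill's): two procedures that compute factorials in entirely different ways, yet are indistinguishable by their input-output behaviour.

    ```python
    # Two deliberately different procedures for the same problem:
    # one iterative, one recursive. Names are illustrative.

    def factorial_iterative(n):
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_recursive(n):
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    # Behaviorally equivalent on every tested input,
    # despite different internal procedures and representations.
    for n in range(10):
        assert factorial_iterative(n) == factorial_recursive(n)
    ```

    By the principle of behavioural equivalence, swapping one procedure for the other changes nothing observable: the computation still "works."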

    I don’t doubt the usefulness of algorithms that might simulate different ways in which we are known to cognize (like my example above); however, I am in some ways in agreement with Ailish’s previous comment. I find it difficult to see what good some behaviourally equivalent causal explanations (say, one that is efficient for a machine yet works in a way totally different from humans) would be for understanding our own cognition. While such an algorithm might certainly be interesting in the field of AI, and a viable way for a machine to perform a cognitive process, it seems to me to abstract too far away from a useful understanding of how our actual brains work.

    “We don't think of "vegetative" processes in organisms (like breathing, balance, temperature) as cognitive, so they are non-cognitive. Motor skills (like walking, swimming, playing tennis) are probably also not just cognitive.” (qtd from one of Professor Harnad’s replies)

    If I understand correctly, we can say that processes like categorization, problem solving, and learning are cognitive processes but something like thinking about and controlling one’s own breathing would be considered a vegetative process?

    Is it possible that there can be some crossover between vegetative and cognitive processes in some voluntary motor situations, or at least in motor skill learning? For instance, a complex motor skill, though it may not seem to involve much “cognition” once learnt, might require an individual to dedicate a lot of attention to detailed movement during the learning process. If it is true that the line between cognitive and vegetative processes is fuzzy rather than discrete, would this fact help the argument for T4 over T3 at all?

  2. North writes “shocking as it may seem, humans are not good at thinking in binary” when explaining the ‘language’ of modern digital computers. He follows this with “writing a program with hundreds of thousands of commands is generally beyond the ability of a human to do reliably; it simply involves keeping track of too many details.” His comment about the human capacity to think in binary seems quite intuitive to me – i.e. it seems natural to assume that humans find it extremely difficult to think in a two-symbol system – especially when considering that even in computation, each letter in the English alphabet requires 8 bits to represent. What ways can humans be taught to “think” in binary/very simple symbol systems? An example I thought of (which isn’t binary, but seems like kind of a tactile form of ‘bytes’) is Braille. The Braille cell is composed of a rectangle two dots wide and three dots high. Letters, numbers, and punctuation are all represented using only six dots. Even more amazingly, “experienced Braille readers…read Braille at speeds comparable to print readers [at] 200-400 words a minute” (NFB, 1996). While this only encompasses the language aspect of “thinking”, it’s still pretty incredible to me… I might be overthinking it though.
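    To make the counting concrete, here is a small sketch of the Braille cell as a 6-bit code (dot numbering follows the standard convention: dots 1-3 down the left column, 4-6 down the right; the dictionary covers only a-e for illustration), compared with the 8 bits per character mentioned above.

    ```python
    # Each Braille cell modeled as a 6-bit pattern:
    # bit i is set when dot i+1 is raised.
    BRAILLE = {
        "a": {1},
        "b": {1, 2},
        "c": {1, 4},
        "d": {1, 4, 5},
        "e": {1, 5},
    }

    def cell_bits(dots):
        """Pack a set of raised dots into a single 6-bit integer."""
        return sum(1 << (d - 1) for d in dots)

    # Six dots give 2**6 = 64 possible patterns per cell,
    # versus the 2**8 = 256 patterns of an 8-bit byte.
    assert 2 ** 6 == 64
    print(format(cell_bits(BRAILLE["c"]), "06b"))  # dots 1 and 4 -> 001001
    ```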

  3. Response to: Copeland, Jack. "What is a Turing Machine?" AlanTuring.net. July 2000

    Two distinct but interconnected comments comparing the human cognitive mechanism (the brain and its capacities) to the idealized Turing machine.

    The Turing machine functions through binary code, and is therefore considered to have a completely different underlying mechanism. Although the human brain does not function in terms of 0 and 1, its underlying mechanism is in some ways very similar to the idealized machine.
    Evolution and experience have “programmed” specific, purposeful connections in the brain. A single neuron may receive a variety of different neurotransmitters, making this element more elaborate than a binary system, yet the causal underlying mechanism of what a single neuron “does” is binary: it either depolarizes and fires (1), or it does not fire (0). The unlimited tape would be analogous to the vast synaptic networks in the brain, and the program U would represent the evolution and experience that have determined the structure of these networks. With the goal of creating an equivalent (but not identical) functional structure for the computation of the human brain, a Turing machine in this sense would be considered a success (separate from its ability to adequately masquerade as a human in conversation).
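    The all-or-none point can be sketched with a toy McCulloch-Pitts-style threshold unit (the weights and threshold here are arbitrary illustrations, not physiological values): the inputs are graded, but the output is strictly binary – fire (1) or stay silent (0).

    ```python
    # A toy threshold neuron: graded inputs, all-or-none output.
    # Weights and threshold are chosen arbitrarily for illustration.

    def neuron(inputs, weights, threshold):
        """Return 1 if the unit fires, 0 if it stays silent."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Different input 'strengths' (standing in for the variety of
    # neurotransmitters), but the output is still binary.
    assert neuron([0.9, 0.2], [1.0, 1.0], threshold=1.0) == 1
    assert neuron([0.3, 0.2], [1.0, 1.0], threshold=1.0) == 0
    ```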

    The second question concerns the purpose of artificial intelligence and machine learning. If the goal is for a machine to match human cognitive intelligence, can we definitively attest that it is not thinking simply because it would not “understand” in the same manner in which we “understand”? If it could think in a different way that we do not understand (since we do not think that way), how could we measure whether such a machine is in fact thinking? It should be clarified what criteria a machine must meet to be categorized as a universal Turing machine. Are we measuring its ability to think like a human, or its ability to think at the same level of sophistication and complexity as a human while still allowing for inherent differences?

  4. I've been having some trouble with the concept of appearances versus reality when it comes to simulation. In reading Horswill's "What is Computation?" the following passage prompted some specific questions:

    "Although we’re a long way from being able to fully simulate a brain, computational neuroscience, in which scientists try to understand neural systems as computational processes, is an important and growing area of biological research. By modeling neural systems as computational systems, we can better understand their function. And in some experimental treatments, such as cochlear implants, we can actually replace damaged components with computing systems that are, so much as possible, behaviorally equivalent,” (pg 17).

    This example to me resembles the example of a virtual reality simulation of a waterfall. As discussed in class, it is clear when we take off our VR goggles, the waterfall simulation was not truly a waterfall in reality. The dynamics – the wetness of real water – are missing.

    Based on some further discussion in class around vegetative versus cognitive functions, the function handled by cochlear implants could be argued to be vegetative (the implant handles auditory sensory input much as pupillary constriction handles optical sensory input, per Kaitlyn’s reply to Freddy). If we have decided that the function performed by the implant is vegetative, it could similarly be said that the replacement of the inner ear’s neural pathway for carrying auditory input into the brain is just a simulation of auditory input. This is no problem for cognition for now.

    The next step, though, of a full computational simulation of a brain – or even of the part of a brain that controls certain cognitive functioning – is where I run into problems. If we reached the capacity to simulate part of or all of a brain, and all of the dynamics and hardware were running alongside that simulation, then the person with the real brain and the person with the simulated brain would both have “things going on in their heads that let them do what they do,” right? Would simulated cognition really then be different from ‘real’ cognition?

    Or is Steven’s argument that this situation would not be possible (i.e. we wouldn’t be able to fully simulate a brain) since not all of cognition is computation?

  5. Both readings, “What is a Turing Machine?” and “What is Computation?”, discuss the idea of computation. The Turing machine uses an algorithm to simulate, with behavioural equivalence, all the computations that can be performed by any computer. The procedure of computation is illustrated by a rule such as “replace 1 with 0.” Computation is the manipulation of representations, meaning manipulation based on shape, not meaning. So, when comparing computation to cognition, and asking whether the human brain purely runs on computation and is therefore a computer, I feel convinced that computation is not all of cognition because of this simple example. If the rule were changed to “replace one with 0” and the code showed “1”’s instead of “one”’s, the machine would not replace “1” with 0, only “one” with 0. A human thinking would understand that the word and the number mean the same thing and would replace both. In computation, “one” and “1” are not the same; in cognition, however, I feel they are grounded in the same thing. Therefore, I feel computation and cognition are not equal.
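    A minimal sketch of the shape-versus-meaning point (the tape and rule are my own toy example): a rule that rewrites the mark “1” as “0” leaves the token “one” untouched, because the rule matches shapes, not meanings.

    ```python
    # A purely shape-based rule, as in a Turing machine's table:
    # rewrite the symbol "1" as "0". The rule matches marks, not
    # meanings, so the token "one" is left alone even though it
    # 'means' the same number to a human reader.

    def rewrite(tape):
        return ["0" if symbol == "1" else symbol for symbol in tape]

    tape = ["1", "one", "1", "two"]
    assert rewrite(tape) == ["0", "one", "0", "two"]
    ```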

    The “What is Computation?” piece brought up interesting points about the social and cultural effects if all cognition were decided to be computation. This would have grand consequences for how we as humans identify, and resonating effects on how we interact as individuals and as a culture. But if everyone were running on computation, wouldn’t all individuals, cultures, etc. be the same? Thinking about how DNA and RNA are created and multiply through transcription makes me open to the idea of computation being utilized in the brain and body. However, I always find myself coming back to feelings. No two individuals’ feelings are alike, and I think they are truly unpredictable and, as I feel they would say, “uncomputable.” As well, the halting problem convinces me that not all computation is cognition. Since we can introspect and know when we are unable to compute (when we are halting), I wonder if any computation can be cognition.

    Lastly, “Computers would be nowhere near the social or economic force they are now if you had to buy separate computers for word processing, email, and editing photos.” This line made me cringe. I cannot imagine the frustration of having a different device for each function. Even if cognition is not computation, thank god for it!

  6. To bring this topic more to the present, I thought I would appeal to a more familiar paradigm of computation and try to relate it back to its Turing machine roots. Take, for example, a list of integer data, called an array in Java. One of the most common things we like to do with arrays is to sort them. So let’s talk through what happens when you call “sort(array)”:

    An array is stored in the computer as a chunk of memory cells (reminiscent of a TM’s tape) that house the appropriate data. An integer is stored in the computer as a constant number of bits, zeros and ones that can be decoded into the decimal numbers we know and love. So when the computer goes to sort the array, it has to discern whether the number represented by the bits in one entry is greater or less than the number represented by the bits of another entry. The computer does not make this distinction on the basis of numeracy. In fact, the computer does not know that it is comparing numbers. It follows a very simple rule for lining up the binary representations and churning out a result: greater, less than, or equal. Based on the output of this function, it swaps the positions of the entries and outputs a sorted array!
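    A rough sketch of that comparison step, assuming non-negative fixed-width integers (real hardware uses comparator circuits and two's-complement conventions, so this is only an illustration; I'm sketching in Python rather than Java): the machine lines up the two bit strings and reports the first mismatch, never "knowing" that the marks denote numbers.

    ```python
    # Compare two non-negative integers purely as fixed-width bit
    # strings, mark by mark from the most significant bit. The
    # procedure manipulates shapes ('0' vs '1'), not numbers.

    def compare_bits(a, b, width=8):
        bits_a = format(a, f"0{width}b")
        bits_b = format(b, f"0{width}b")
        for x, y in zip(bits_a, bits_b):
            if x != y:
                return "greater" if x > y else "less"
        return "equal"

    assert compare_bits(9, 5) == "greater"  # 00001001 vs 00000101
    assert compare_bits(3, 12) == "less"    # 00000011 vs 00001100
    assert compare_bits(7, 7) == "equal"
    ```

    A sorting routine built on this comparator would swap entries whenever it returns "greater," without ever treating the bit patterns as quantities.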

    It is convenient for us to call this function “sort,” but to the computer an array is sorted after the function has been called, not when the data are arranged in numerical order.

  7. Since this is all just a giant thought experiment – we talk about human minds and human cognition as if there weren’t infinite ways for human minds to exist and function, and as if we weren’t almost entirely cognitively blind – I thought I’d pose a few concerns I’ve had with the way cognition is currently spoken about and explored in cognitive science. Perhaps these concerns, more than anything, reflect my arts/humanities background. While a few things have been cleared up throughout the course of the class, there are still words, terminology, and concepts everybody seems to largely agree on, and many of these unspoken definitions, if you will, are rather essentialist and often ableist.

    For example, in the "What is Computation?" article, behavioural equivalence is brought up. In the case of the imitation game, as long as the computer exhibits the same behaviours (through written language) as another 'normal human', it passes the Turing test. So who, exactly, is the machine being compared to? If a T2 machine only has verbal capacities, what "tips" the human into thinking the person in the other room is a human or a machine? How do we know it's not a real person without the verbal capacities one would assume a "normal" person to have? At the T3 level, it would be interesting to posit an imitation game where, instead of typing the questions and answers, the T3 robot would have to talk and respond to questions as a "normal person" would. This will probably never happen, but what if a person like Stephen Hawking were in the other room, and because of his physical disability, his responses had to be dictated by a machine? The human in the other room might think this is a funny joke, assuming it's a very poorly designed T3 robot, when in reality it's a real person. Is Hawking, then, considered a "normal human"? In many ways he is considered an extraordinary human, yet because his disability does not allow him to speak out loud with his mouth/vocal cords, etc., is he not considered "normal" enough to stand as the threshold against which behavioural equivalence is measured?