Saturday 2 January 2016

1a. What is Computation?


Optional Reading:
Pylyshyn, Z. W. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. Cambridge, MA: MIT Press.
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is -- or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)

Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

72 comments:

  1. • “However, the uncomputability of the halting problem is often misunderstood, so let’s be very clear on what it does and doesn’t entail. It doesn’t mean we can never determine if a program halts; in fact, we can solve the halting problem for many of the programs that we care about in real life. What we can’t do is write an infallible program for determining whether an arbitrary piece of program text will halt for an arbitrary input text” (Horswill, 2007 & 2008).

    I am curious as to how the reasoning for this statement works out. To solve the halting problem, we would need a new program that uses the infinitely looping program as an input to determine if an output value does or does not occur. However, we learned in the article “What is a Turing Machine” that inputs on the tape must have a finite representation. If this is true, how is it possible that “we can solve the halting problem for many programs” (Horswill, 2007 & 2008)? I understand it could be that we cannot write a general program that will discover if any piece of a program with any arbitrary input text will stop. However, how is it still possible to do it for a specific program we’re interested in if it isn’t possible to represent a looping program finitely for input? This seems contradictory to me and I wonder what others’ thoughts on this are.

    Replies
    1. Don't worry about the halting problem...

      Well, I'm not a mathematician or a complexity theorist, but, by analogy with other examples of unsolvable or NP-complete problems, the proofs that they cannot be solved in general do not mean that there are no particular problems that can be solved. And it isn't about finite vs infinite problems. Some kinds of infinite problems are more tractable than others. The unsolvability is about the general case: every problem, not any problem.

      Maybe the misunderstanding is in the phrase "represent a looping program finitely for input": The computer program is a set of symbol manipulation rules (T = Test) that is applied to another set of symbol manipulation rules (F = Formula). T tests whether the Formula F will ever come up with a solution. The Formula itself is finite, but it can keep on generating new data (“infinitely long”) if it just keeps looping. T tests whether F will ever halt. For some Fs, T can show that F will come up with a solution (hence it will halt). But it cannot do this for all possible Fs.
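      To make the "some Fs, but not all Fs" point concrete, here is a minimal sketch (my own illustration, not from Horswill): a tester that correctly decides halting for one very restricted family of formulas, namely loops of the form "while n is not 0, add d to n". The halting theorem only rules out a single tester that works for every possible program and input; it does not rule out deciding particular cases like these.

        # Hypothetical halting tester for ONE restricted family of programs:
        # "while n != 0: n += d". For this family, the test is easy and always correct.
        def runs_forever(n, d):
            if n == 0:
                return False              # halts immediately
            if d == 0:
                return True               # n never changes, so it never reaches 0
            if (n > 0) == (d > 0):
                return True               # n moves away from 0 forever
            return n % d != 0             # steps over 0 unless d divides n exactly

        print(runs_forever(10, -2))       # False: 10, 8, 6, 4, 2, 0 -> halts
        print(runs_forever(10, -3))       # True: 10, 7, 4, 1, -2, ... never hits 0
        print(runs_forever(5, 1))         # True: grows without bound

      No analogous tester can exist once the formula is allowed to be an arbitrary program with an arbitrary input; that, and only that, is what the halting theorem rules out.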

      But you need not worry about the halting problem. For this course you just need to know what computation (symbol manipulation, the Turing Machine) is. And you need to know what computation is because that is what the computationalists are saying that cognition is too: "Cognition = Computation."

      It doesn't matter that the halting problem cannot be solved by computation, because it can't be solved by cognition either...

  2. "The amazing thing is that nearly all digital computers are Turing complete. In fact, some stunningly simple mechanisms are Turing complete." [6]

    [6] "Minsky and Blum proved in the 1960s that a “two-counter machine” was Turing complete. A two-counter machine is a machine whose only data representation is a pair of numbers (non-negative integers, in fact), and whose only commands are to add or subtract 1 from one of the numbers and to check whether one of them was zero."

    I wanted to clarify my understanding of both the universal Turing machine and how other machines can be Turing complete.

    The universal Turing machine can theoretically exist because, just as computers can be programmed to activate patterns of 1’s and 0’s in certain ways to accomplish certain functions, a single Turing machine can be modeled to have the equivalent behaviour and thus be a representation of that computer. This idea can theoretically be expanded such that a single machine can be programmed to carry out all possible computations, and thus it becomes a universal Turing machine.

    Since in a Turing machine the procedure is moving along a tape and reading and writing 1’s and 0’s on the tape, Minsky and Blum are comparing those actions to a counter machine that can be seen as giving a representation of the Turing machine’s procedures in a different form. The movement and reading/writing of the Turing machine becomes the addition/subtraction of the counter machine. I get a bit lost at the counter machine’s third function of checking for zero. Is this the mechanism by which it halts a program? A strict addition machine could not be Turing complete because it would either have a procedure to halt, thus limiting its computational power, or, without halting, it would run forever and be useless.
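    (Here is a minimal sketch of such a counter machine, my own illustration rather than anything from the readings. As I understand it, the zero test is not itself the halting mechanism; it is the machine's only conditional branch, and that ability to branch and loop on a condition is exactly what a pure addition machine lacks. An unconditional 'jmp' is included below purely for readability.)

      # Hypothetical two-counter machine interpreter (an illustrative sketch).
      # The counters hold non-negative integers; the only operations on them are
      # "add 1", "subtract 1", and "test for zero" (used as a conditional jump).
      def run(program, c1=0, c2=0, max_steps=10_000):
          regs = [None, c1, c2]          # regs[1] and regs[2] are the two counters
          pc = 0
          for _ in range(max_steps):
              op, *args = program[pc]
              if op == 'halt':
                  return regs[1], regs[2]
              if op == 'inc':
                  regs[args[0]] += 1
                  pc += 1
              elif op == 'dec':
                  regs[args[0]] = max(0, regs[args[0]] - 1)
                  pc += 1
              elif op == 'jz':           # "check whether one of them is zero"
                  pc = args[1] if regs[args[0]] == 0 else pc + 1
              elif op == 'jmp':          # unconditional jump, added for readability
                  pc = args[0]
          raise RuntimeError("step limit reached; the program may not halt")

      # Move the contents of counter 1 into counter 2.
      move = [
          ('jz', 1, 4),   # 0: if counter 1 is zero, jump to halt
          ('dec', 1),     # 1
          ('inc', 2),     # 2
          ('jmp', 0),     # 3: back to the zero test
          ('halt',),      # 4
      ]
      print(run(move, c1=3))   # (0, 3)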

    Replies
    1. Hi Jordan, don't worry about the technical details. I just wanted everyone to get a clear sense that computation is the manipulation of symbols (e.g. 0's and 1's) according to rules (algorithms, programs).

  3. As described in the text, the input that a Turing machine receives consists of a finite number of symbols. Considering that the number of sentences that can come up/that can be pronounced is finite (and not infinite, as the number of people and the time they have to pronounce these sentences are finite, as discussed in class), could a Turing machine theoretically pass the Turing test? Can you really simulate any physical system, even human cognition, as the Church-Turing thesis claims?

    Replies
    1. To pass the Turing Test is not to have the capacity to simulate any physical system, it's to have the capacity to do anything and everything a human being can do. And simulating thinking (cognition) would no more be thinking than simulating a heater would be heating.

  4. I don't understand two things: (1) levels of abstraction and (2) physical symbol systems.

    For (1), assuming that by "details" in the world, the author is referring to particulars, can there be a level of abstraction that deals with no particulars, i.e. only with universals? Can computers consider these? Or do models of the world, and abstraction levels for them, remain in some sense tied to particulars since they have to be "useful"?

    For (2):
    a) It says:

    "A symbol system creates, copies, modifies, and destroys symbols."

    I am not sure what is meant by "create". Do they mean that the symbol system merely creates the symbol as a physical object, like writing down a letter, or that it assigns meaning to the symbol? If the former, a1. If the latter, a2:

    a1) Given that symbols are meaningful patterns, does that not presuppose something assigning them a meaning? Wouldn't you need to be intelligent to do so?

    a2) How/when does a computer "create" symbols, in the sense of assigning meaning to things?

    b) "[S]ymbols in a physical symbol system are physical objects that are part of the real world, even though they may be internal to computers and brains [...] Many of these symbols are used to refer to things in the world. Other symbols may be useful concepts that may or may not have external meaning. Yet other symbols may refer to internal states of the agent."

    Are all symbols physical, even the ones that refer to internal states? Don't we assign meaning to non-physical objects all the time? For instance, mental objects: I can think of a word, "hickypicky", and I have designated that it means "cold". I can manipulate it in my mind: "hackpacky", "hickypickying", or choose for it to no longer mean "cold" (hence destroying it). So is that a physical symbol, or can symbols also be mental?

    Also, this is Munema Moiz! Sorry, I created this account when I was 13 and I don't know how to change my name from piff puff :(

    Replies
    1. A symbol is any arbitrary object of any arbitrary shape that we manipulate according to rules.

      The rules are based on the object's shape, not its meaning. We just agree, by shared convention, to interpret some symbols as standing for some referents.

      For example, "2" is a symbol, and so is "+" and so is "=".

      If you give a computer (that is running the right program, i.e., rules):

      "2 + 2 = ?"

      it will give you "4"

      We can interpret that as meaning that adding two to two gives you four. But the computer is just manipulating the symbols according to their shapes (and the rules), not their meaning.

      Forget about "abstraction." It's not quite the right word here. What there is is a symbolic "level," which is just symbols being manipulated on the basis of rules operating on the symbols' shapes, not their meanings. And there is the "meaning" level (in our heads), where "2 + 2 = 4" is being interpreted as meaning that adding two to two gives you four.

      Forget about "creating" symbols. Just think of it as a computer (Turing machine) either reading or writing symbols, based on what symbols it reads and what physical state its rules put it into (so if it sees 2 then + then 2 then = then ? it erases ? and replaces it by 4).
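      Spelled out as an illustrative sketch (mine, not anything from the readings), that single rule is nothing but shape matching:

        # The rule just described, written as pure shape matching (hypothetical sketch).
        # Nothing here stands for "two" or "addition"; the rule only compares shapes.
        def step(tape):
            if tape == ["2", "+", "2", "=", "?"]:
                return tape[:-1] + ["4"]   # erase the "?" shape and write the "4" shape
            return tape                    # no rule matches, so the tape is left alone

        print(step(list("2+2=?")))         # ['2', '+', '2', '=', '4']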

      Yes, the interpretation of the symbols comes from our heads, not the symbol system. This is already a symptom of the "symbol grounding problem": there's something missing here. It can't be that what is going on in our heads to give the symbols meaning is just the same thing as what's going on out there in the Turing Machine (i.e., not just another Turing machine manipulating symbols on the basis of their shapes, not their meanings).

      Yes, all symbols are objects, but the nature and the shape of those objects is arbitrary. We simply agree by convention to let "2" stand for "two" (or rather for what we mean by "two," because "two" is just a symbol too, as is "hickypicky," whether it is out there on the table or screen, or we say it, or we think it: when I verbalize in my head, that's also an object: it's something going on in my brain).

  5. “If our brains, and thus our thoughts, can be simulated, to what extent does that mean we ourselves can be simulated?” (Horswill, page 17).

    But does the fact that our brains can be simulated necessarily imply that our thoughts can be simulated as well? Is cognition sufficient to prevent thought from being simulated by a computer?

    Horswill provides an excellent example of a human-computer interface that is already in use today, describing how cochlear implants are used to “replace damaged components [of neural systems] with computing systems that are, so much as possible, behaviorally equivalent” (page 17). So we know that mechanical constructs can replace corresponding biological constructs in a way that preserves the natural biological function. And as Horswill asserts, “there are many computational models of different kinds of neurons” (page 17). So if we replace all biological constructs with equivalent mechanical ones, have we successfully recreated a human? Is this machine fundamentally human? And connecting this idea to Theseus’ paradox (https://en.wikipedia.org/wiki/Ship_of_Theseus), what if we do it over time, as each biological function fails? Can this machine be called human? At what point is it no longer considered human?

    However, as Stevan says, “computation is not cognition” (introductory lecture). And since “computation is wrapped up with the study of behavioral equivalence” (Horswill, page 19), it can be reasoned that behavioral equivalence does not imply cognitive equivalence. While this robot is behaviorally equivalent to the original human, one cannot assume that it is cognitively equivalent. So is cognition unique to humans? Can it be simulated? And if it can be simulated, is it possible to interface it with the robot in a way that makes it both behaviorally and cognitively equivalent to the original human? Or is the interface between cognition and computation uniquely human? Assuming it can be done, would this robot now be considered human?

    Replies
    1. "So is cognition unique to humans?"

      This question is very interesting because it brings up parallels to the monism-dualism debate. I would think the majority of the scientific community believes in a monistic reality, where the matter of the universe gives rise to our cognitive abilities. The "unique" theory of cognition reminds me of dualism and its explanation that the "soul" is unique to humans. If monism is really correct, shouldn't cognition be something that isn't unique to humans? Assuming that the structure and dynamics of the matter between our ears is replicable, then cognition should not be unique to humans. If, however, dualism is correct, then even if we replicate the dynamics and structure of this matter, we would not be able to replicate cognition, since we are missing something in a different reality.

    2. First (Alex), cochlear implants are not little Turing Machines, manipulating arbitrary symbols. They are dynamical devices that turn sound vibrations into a neural signal for the cochlea.

      And second (William), it's a little early to worry about monism/dualism here. We're just trying to understand what computation is.

    3. I'm also a bit iffy on the behavioural equivalence thing. Per Horswill:

      "behavioral equivalence: if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used."

      I couldn't help but compare this to two people, A and B, who always do the same action but whose intentions are always different. They may be producing the same output, but what's going on inside is not the same.

      Similarly, when Horswill says:
      "What’s interesting here is that none of these differences matter.  So long as the person comes up with the right answer in whatever representational system or systems they happened to be using, we consider them to have successfully done the arithmetic"

      But isn't the justification of the writing-arithmetic that, theoretically, it could have been done mentally? When Horswill says: "so long as the person comes up with the right answer", I think it's important that "the right answer" is only determined by the procedure, insofar as the procedure has already been justified by an intellect (Euclid or whomever).

      I guess what I'm saying is that coming up with the right answer to a problem is not the same as understanding it. Kids use tricks in Maths class all the time, without understanding the philosophy of logic/maths.

    4. I also have some trouble grasping the concept of behavioural equivalence. So taking the statement “if a person or system reliably produces the right answer, they can be considered to have solved the problem” to be true, would that mean that a system simply fishing the answer to “2+2=?” out of a database and giving the output 4 would still be considered as having gone through the process of computation, even if the system hasn’t solved the problem itself but rather relied on the computation made by another system?

    5. Hi Munema (PP), rather than thinking of one problem, and the possibility of getting the answer in many different ways, it will give you a better idea if you think of the Turing Test (TT), and being able to do anything and everything an ordinary person can do. Are there lots of different ways to do all of that? By the time you scale up to full TT power, whether two TT robots happen to be doing the same thing in exactly the same way becomes much less interesting and important than the fact that they can do it all.

      (The objective is not to find a causal mechanism that is me or you in particular, but one that has all the generic capacities of a normal human being.)

    6. Pylyshyn distinguished "weak" and "strong" equivalence between two different systems (two people, two robots, or a person and a robot).

      "Weak equivalence" means they both produce exactly the same output for the same input. (And remember: we are talking about the whole Turing Test: all the pairs can do everything any normal human can do.)

      "Strong equivalence" means they both produce exactly the same output for the same input and they both do it using exactly the same algorithm. The Turing Test does not require strong equivalence.

      It is computationalists who emphasize strong equivalence -- for the obvious reason that they believe that cognition is just computation, computation is just algorithms, and so cognitive science is looking for the algorithms that generate our cognitive capacity.
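      A toy illustration of the difference (my own, not Pylyshyn's): the two procedures below are weakly equivalent on addition, because they always return the same output for the same input, but they are not strongly equivalent, because they reach that output by different algorithms.

        # Weak vs. strong equivalence in miniature (an illustrative sketch).
        def add_by_arithmetic(m, n):
            return m + n                   # one step, using the built-in operation

        def add_by_counting(m, n):
            total = m
            for _ in range(n):             # n separate increment steps
                total += 1
            return total

        # Same input-output behaviour (weakly equivalent), different algorithms
        # (not strongly equivalent).
        assert all(add_by_arithmetic(a, b) == add_by_counting(a, b)
                   for a in range(20) for b in range(20))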

      But I -- be careful, this is a "Stevan-Says" -- think that once your mechanism has full TT-passing power, there aren't many different ways to do exactly the same thing. Even if computationalism were true, and it could all be done by one huge algorithm, I doubt that many different algorithms could do it all. They would just be trivial variants of one another (like individual differences between human beings, many of them caused by different experiences rather than different "algorithms").

      Besides, because of the symbol grounding problem, it can't all be done by one huge algorithm. Grounding requires noncomputational components ("dynamic" components), not just computational ones, for example, sensorimotor capacity. Computation alone cannot move, nor can it detect optical input (analog): it first needs optical transduction (dynamics) and then analog to digital conversion.

    7. "The objective is not to find a causal mechanism that is me or you in particular, but one that has all the generic capacities of a normal human being"

      Computationalists maintain that computation (the manipulation of symbols according to a set of rules) suffices to explain all of cognition. Does that automatically exclude the 'what it feels like' aspect from falling under the heading of cognition? If so, do computationalists exclude it intentionally (i.e. do they maintain that the 'what it feels like' aspect is unimportant and negligible)? Or if not, is the 'what it feels like' aspect also computationally encoded, according to them?

      With regards to the above quote from Harnad (two comments above) I am not quite sure what exactly is meant by 'generic capacities of a normal human being'. Would the sense of individuality fall under that umbrella as well? The objective is obviously not to recreate 'me or you' or any existing individual through finding the causal mechanism(s) that gives rise to whomever that particular individual may be, but rather a sort of prototype of a human cognizer. But would that even be possible without some sort of 'sense of individuality'?

    8. Would this also be simulated by the prototype or is it irrelevant?

  6. “I’ve said that computation isn’t fundamentally about numbers. Nevertheless, let’s start by thinking about arithmetic, since we can all agree that it’s an example of computation. Suppose you ask someone: “What’s seven plus three?” If they reply “ten,” then they just performed a computation. So we can provisionally think of computation as a kind of question‐answering.” (“What is Computation?”)

    I would like to use this example to briefly discuss Pylyshyn’s view that computation is equivalent to cognition (computationalism and the strong equivalence thesis). I’ll use an analogy to explain why I have a problem with Pylyshyn’s view.

    If I talk to Siri on my iPhone and ask “What’s seven plus three?”, she will probably answer “ten”. But does Siri really know that 7+3 is 10? I think this is highly unlikely. What’s more likely is that she has been programmed to give that reply to a series of inputs (seven, plus, three), which each have symbolic meaning. If this is an accurate example of computation (I’m aware that it may not be fully representative of computation because this example fits with the functional model and not the imperative model), there is an important difference between computation and cognition. The difference is that unlike computation, cognition seems to carry with it an understanding of what we are doing. We are not always mindlessly or automatically replying when we are asked a question. We understand the meaning of the question holistically and do not look at it as merely a series of inputs (though the inputs are obviously important). For this reason, does it make sense to say that while cognition surely involves computation, they are not equivalent?

    Pylyshyn seems to anticipate this point and notes, “it could turn out that consciousness is not something that can be given a computation account. Similarly certain kinds of statistical learning, aspects of ontogenetic development, the effect of moods and emotions, and many other important and interesting phenomena could simply end up not being amenable to a computational account.” (Pylyshyn, 1989). However, all examples Pylyshyn mentions are examples of what goes on inside an organism that allows it to do what it needs to do (our definition of cognition - furthermore they all feel like something). As such they belong in the realm of cognitive science. The strong equivalence thesis seems incomplete.

    Replies
    1. The problem with computation as the mechanism of cognition is not that it is "programmed" but that the symbols are not connected to their referents: they need a thinking mind to make the connection. Therefore computation cannot be the only thing going on in the brain of a thinking mind. But this (the symbol grounding problem) is not the same thing as the problem of consciousness (which is the problem of explaining how and why we feel rather than just act). Computation would be ungrounded even if we were all unconscious robots. A robot is already not just a computer, implementing a computation. It is a dynamical system, sensing and moving. That's not computation....

  7. It seems kind of hard for me to believe that cognition is computation, given that we haven't found a clear definition of computation either and that there are different views; it looks more like computation has expanded and gotten more complex in order to model and explain cognition instead.

    From the invention of Turing machines to the computers that we have today, it's obvious that the field of computer science is getting more and more advanced, and with the field of computational neuroscience, we are able to uncover more and more about the brain than we would ever be able to without the existence of computer simulations.

    The idea of behavioral equivalence, that there exists a computer capable of modeling the human mind, means that there also exists a human mind that's a Universal Machine, capable of modeling any computer and computations; in that case, one can also argue that there exists a brain capable of simulating almost every individual's brain mechanisms in the world.

    If even we don't understand our own brains as well as we think, how can we believe that there will one day exist a computer that is already predisposed to all there is to know about the human mind?

    I believe that as we started using the notion of computation to help explain cognition, both computer and human understanding of the mind have been growing and learning at the same time; as we uncover more and more information, we try to put labels and models on what we have learned, to have a better explanation and understanding of the brain and human mind. And the reason we use computers is that both are physical symbol systems that are incredibly similar to each other, and they are the main tool we have right now to extend our knowledge.

    Or I could just be totally off and have gotten the completely wrong idea from the articles.

    Replies
    1. No, the problem is not with the definition of computation, which is fine, and pretty much agreed upon. The problem is that computation isn't enough.

  8. "A physical symbol system has the necessary and sufficient means for general intelligent action"

    Two questions come to mind regarding the claim that intelligent systems manipulate physical symbols to produce new expressions (actions/thoughts/internal states...)

    1. How can we account for reflexive processes like a knee jerk or eye saccades? Considering reflexive, automatic actions have no conscious thought/preparation, is there "time" for real symbol manipulation? Or do these actions simply reflect an immediate reaction to an external stimulus...

    2. How does symbol manipulation reflect differences between experts and novices? Say an alien were to look at the symbol manipulation history used by an Olympic athlete completing a pole vault jump versus a layman attempting the same jump, would the alien be able to differentiate the symbol sequence used to complete the jump in the expert versus the novice? I understand that the "symbol level of abstraction" describes an AI machine at the level of its symbolic reasoning, but does it use a criteria common amongst, or distinguishable within experts?

    Replies
    1. The hypothesis is not that intelligent systems manipulate symbols. (Computers manipulate symbols.) The hypothesis is that the manipulation of symbols generates intelligence, meaning the capacity to do what thinking organisms, cognizers, can do.

      Reflexes are not the problem, because they are automatic and hence not considered “intelligent.” They can be accomplished by hard-wiring inputs to outputs.

      We’re far from the expert/novice distinction yet because (until further notice) even novice capacity cannot yet be generated by computation alone for most human cognitive skills. But if computation alone could indeed generate novice capacity, then I guess expert capacity would be where the novice computational system reached after it had received a lot more training…

  9. Quite some interesting papers; however, I am a little confused when it comes to grasping the concept of computation as a whole.

    The impression I got from Pylyshyn (who is no doubt a computationalist) was that he takes cognition to be entirely computational. If I understood this correctly, he argues that we manipulate symbols based on their rules and principles. He then argues that “the symbolic expressions have a semantics, that is, they are codes for something, or they mean something” (p. 56). I am just a little confused as to where the semantics of the symbols is coming from? Do the symbols come loaded with meaning? I was under the impression that the symbols were actually meaningless (in a sense)? Or is he just saying that meaning arises somehow & somewhere in the manipulation of the symbol and its rules and principles – along the lines of an epiphenomenon?

    I may be mistaken, but I am under the impression that Pylyshyn never actually mentions how symbols come to have their meaning? So in the end, can all of cognition be computation? Does meaning arise from something other than computation?
    I guess what I am trying to get at is that if all of cognition is in fact computation, how do we come to give meaning to these arbitrary symbols? How do we come to link the symbol level with the knowledge level?

    I hope my message is decipherable – I apologize if I am completely off base!

    Replies
    1. Hi Cait, you’ve asked exactly the questions that need to be asked about computationalism and Pylyshyn: How do symbols get their meanings?

      To even begin to answer that question you have to first ask what meaning means.

      At the very least, meaning has to include the connection between symbols and their referents: the connection between “apples” and apples.

      That’s the symbol grounding problem, and it’s not clear how computation alone can solve it. Computation can just connect symbols to symbols.

      But reference isn’t all there is to meaning. A robot that could call apples “apples” when it “saw” them (i.e., it got optical input from them) would still not know the meaning of apples, the way we do. Not even if it could pass the Turing Test (TT) and do with apples (and everything else that words and sentences refer to) exactly the same things we can do. The Turing-Test passing robot would be grounded — but would it have meaning?

      When I say a word, you don’t just know how to pick out what it refers to, do what needs to be done with it, and talk about it. It also feels like something to know what a word means. That’s the difference between hearing words in a language that you understand, compared to a language that you don’t understand. A TT-passing robot could be naming and manipulating apples just fine, but if it does not feel like something to be that robot, and to mean apples when it says apples, then the robot has grounding but not meaning.

      I don’t expect all of this to be crystal clear the first time round. It contains material from about half the course! But that’s where we’re heading.

  10. “Since we know that computers can’t solve certain problems (the uncomputable problems), that must mean that brains can’t solve the uncomputable problems either. In other words, there are fundamental limits to the possibility of human knowledge.”

    There could be multiple reasons why a computer that simulates the human brain would not be able to solve a certain problem. First, people program computers, so if the programmer cannot solve a problem using his or her brain, then they cannot program a computer to solve it, because they would not know how. Second, if the human brain could solve a certain problem, but the computer could not, then we may not understand the human brain enough to program a functionally correct computer. Both cases lead to the same conclusion that if this were the case, it would be due to the fact that we do not know enough about the human brain to simulate a functionally correct computer. Since one of the largest models we have, with 2.5 million neurons, still can only perform eight tasks, I think that we are still far from understanding the brain enough to fully simulate it. Therefore, our limit would be how well we know how the brain works. I don’t understand why, with time, it would not be possible to understand the brain enough to be able to simulate a computer that can solve all problems.

    Replies
    1. If a computational problem is provably unsolvable by computation then it is unsolvable by any human either.

  11. "Often, we can ignore which representation is used and just think about the procedure as manipulating the “information” in the input... However, as this example shows, the choice of representation can affect our choice of algorithm" (pg 5)

    "The halting problem has also been used to argue that the brain must not be a computer because people can tell whether programs will halt and computers can’t...people are actually just like computers on the halting problem: we can solve it a lot of the time, but we also get it wrong sometimes." (pg 15)



    It is obvious that Horswill is emphasizing the idea that computation is a complex term, “computation is a term in flux”, whose definition is currently undergoing serious revision. He presents several seemingly contradictory statements, and then follows up with evidence to support both “sides”. For example, he mentions that representation can be discounted if we focus on procedures, but then goes on to say that computer scientists study the properties of data structures. He then suggests that computation can be defined several ways; algorithms, information processing, something that computers do, or as something to do with behavioural equivalence. All this is to say that we have yet to fully wrap our heads around the concept of computation. What Horswill does not address (and what is important for me to understand) is how we might proceed to reconcile these apparent contradictions to formulate a working definition of computation for the purpose of discussion (and ultimately, this class) especially in the way that computation relates to cognition.

    “Intelligence is ultimately a behavioural phenomenon” (pg 15)

    Small additional point: in the context of the quotation above, what does Horswill mean by “behavioural”?

    Replies
    1. Let me give you a definition of computation that will never lead you astray:

      Computation is what a Turing Machine does.

      It manipulates symbols (arbitrary shapes) according to rules that are based on the symbols' (arbitrary) shapes, not their meanings:

      "If you see a 0 erase it, write 1 and stop." That's about it.

      But if the computation was worth doing, it has an interpretation, for example, the answer to a mathematical question. But that interpretation is not in the computation. It's in the heads of the users of the computation.
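      For concreteness, here is that kind of rule written out as a minimal sketch of my own (not a quotation from anyone): a rule table keyed on shapes, with nothing in it that depends on what the shapes mean.

        # Hypothetical Turing machine sketch. Rules are keyed on (state, shape read)
        # and give (shape to write, head move, next state). Shapes, never meanings.
        RULES = {
            ("start", "0"): ("1", 0, "halt"),    # "if you see a 0, erase it, write 1 and stop"
            ("start", "1"): ("1", +1, "start"),  # skip over 1s and keep scanning right
        }

        def run(tape, state="start", head=0):
            tape = list(tape)
            while state != "halt":
                write, move, state = RULES[(state, tape[head])]
                tape[head] = write
                head += move
            return "".join(tape)

        print(run("110"))   # '111': scans past the 1s, rewrites the 0, and halts

      Whether the 1s are interpreted as tallies, as truth values, or as nothing at all makes no difference to what the machine does.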

      The computation is also physically implemented by some hardware, but the physical details of the hardware are irrelevant to the computation. The computation is purely formal (symbolic).

      None of this has anything to do with cognition (so far). Computation was defined in order to describe what mathematicians are doing when they "compute." Of course mathematicians think (cognize), but computation was not defined in order to capture how they think -- just to describe what they do when they compute.

      Computationalists began to use computation to generate behavioral capacity (artificially) that is normally only generated by the brains of thinking organisms. And then they got the idea that maybe computation is what is going on in our brains to generate all the things we can do.

      Part of the reason computationalists thought this was the power of computation. (Computation covers anything a mathematician does; that's the weak Church-Turing Thesis; and computation can also simulate just about anything -- the strong Church-Turing Thesis.)

      But the hypothesis that cognition = computation (computationalism) goes beyond all that. The first part of the course will be about whether or not computationalism is right.

      The answer will be that it is probably only partly right: It's not true that cognition is all computation. But it's also not true that cognition is not computation at all.

      Cognition is probably hybrid computational/dynamic, with the meanings of the symbols of our language grounded in a dynamic capacity: the capacity to connect symbols to their referents through sensorimotor categorization.

  12. I am inclined to echo some of the comments here in saying that I am also not really sold on the idea that cognition = computation. However, I do find it interesting that parallels are easily drawn in Horswill's explanations between concepts such as meta-programming (“the same commands used to manipulate data can be used to manipulate the procedures themselves”) and things like human introspection. Maybe that wasn't the intention, but it does affirm my feeling that computation is helpful for understanding and modelling cognition, but that treating them as the same thing misses much of the story. Computationalists seem to be in favour of reducing the brain to inputs and outputs, and computer scientists seek to create languages of high abstraction for the sake of convenience, interestingly making humans more like computers and computers more like humans. This shouldn't be surprising, since simplifying (via categorization etc.) is how humans survive, but the convergence of the two deserves attention in future theories.

    There were a few (many) points amongst the three articles that left me confused. If neurons are theorized to be the low-abstraction language (the binary code of the brain?), then what are the possible candidates for the highest level of abstraction? Horswill said it was still an “open question”, but are phenomena like memory, emotion, intention, and attention candidates, or is this way off the mark? I noticed that Pylyshyn's papers often referred to intents and goals and included them in a computational model.

    In addition, I feel like there is a bit of a gap in the translation of Newell and Simon’s hypothesis into the two points explored in the Poole and Mackworth article. If any intelligent agent is a physical symbol system, does that mean that an intelligent agent is ONLY a physical symbol system, or does that leave room for another accompanying system to create cognition? And if all that is needed for intelligent action is a physical symbol system, would a computationalist argue that intelligence is all that is needed for cognition? Is this a valid question, or is it not relevant to cognition at all? If intelligence is “the ability to acquire knowledge and skills” and cognition is how an organism does what it needs to do in the world, then is treating intelligence and cognition as one and the same correct from a computationalist perspective?

    Also, the statement that intelligence is a behavioural phenomenon, and that one is intelligent if you show intelligence through behaviour, seems to me to be a rule with a lot of exceptions and inconsistencies, in both humans and machines. At this point these three readings have me doing semantic gymnastics trying to relate and reconcile definitions. The above definition of intelligence is maybe too generous for artificial intelligence as of 2016. Perhaps there is more than one definition of intelligence and this is just me referring to the wrong one (human vs. artificial), but if computation = cognition then I see no reason why computationalists should split intelligence into diverging definitions for humans and computers. ...Obviously a running theme in this week's reading was wondering whether my query was relevant or whether I was just barking up the wrong semantic tree. ...I suspect learning more about the symbol grounding problem in future classes will also complicate this.

    Replies
    1. Is meta-programming like introspection? No, because programs are just symbols, and symbol-manipulation, even if the symbols are themselves programs. All of them are ungrounded, at all "levels": It's just squiggles and squoggles. Introspection is thinking about thinking. Thinking is both grounded (because we know what things words refer to, and what they mean) and conscious (it feels like something to think). Neither of those is true of computations.

      "If neurons are theorized to be the low abstraction language (the binary code of the brain?) then what are the possible candidates for the highest level of abstraction?"

      Until further notice, neurons, or action potentials, are the hardware (dynamics) of the brain. Whether what they're doing is just implementing computation (software) is what's on trial here (computationalism). But don't get lost in the hermeneutic hall of mirrors. Symbols and symbol manipulations are just meaningless squiggles and squoggles. We are the ones interpreting them, and our interpretations are in our heads. This has nothing to do with levels of abstraction, nor with the distinction between lower- and higher-level programming languages. It's all just squiggles from head to toe...

      "If any intelligent agent is a physical symbol system, does that mean that an intelligent agent is ONLY a physical symbol system or does that leave room for another accompanying system to create cognition?"

      That's certainly the right question to ask. Now, what's the answer?

      "if all that is needed for intelligent action is a physical symbol system then would a computationalist argue that intelligence is all that is needed for cognition?"

      "Cognition" is the very same thing as "intelligence" (just like "consciousness" is the very same thing as "feeling"). Multiplying synonyms and treating them as if they meant something different or something more does not get us ahead! If computationalism is right, then cognition = intelligence = computation. If not, then not.

      "that intelligence is a behavioural phenomenon, and one is intelligent if you show intelligence through behaviour seems to me to be a rule with a lot of exceptions"

      Cognition (intelligence) is not a behavioral phenomenon, i.e., it's not behavior. It is behavioral capacity -- or rather the mechanism that generates the behavioral capacity.

  13. I’m a little confused about this and not sure if I am understanding properly: In the introductory lecture, Prof Harnad, when discussing how reverse engineering applies to cognitive science, says we should “come up with a model that can do what we can do and that will be an explanation of how we do it”. I assume this means for example to build a robot with all of the abilities of a human being. But I don’t understand how this would give us any real insight into how WE cognize. This robot might be behaviorally equivalent to us but how it does all the things that we do might be (likely would be) completely different from how our brains handle it. It might give us one possible way that cognition happens in us but we couldn’t know with any great certainty if that is how it actually works. Am I missing something here?

    Replies
    1. Underdetermination, explaining the mind, and the "other-minds" problem

      It is quite natural to say "I don't want to know how a robot does it: I want to know how we do it."

      But the assumption behind that question is that there are many different mechanisms that could all do what we can do.

      That assumption is right for tiny little fragments out of everything that we can do -- move, play jeopardy, play chess, describe simple scenes, etc. Let's call these fragments "toy capacities." There are many different ways to generate a toy capacity, and probably none of them have anything to do with the way our brains do it.

      But as you scale up to a mechanism that can do more and more of what we can do -- the same mechanism, small enough to fit into one head -- the "degrees of freedom" shrink. That means that the number of different ways to do it all is much smaller than the number of ways to do tiny bits of it all. (One strong hint of this is the fact that it's easy to design many different ways to generate a toy capacity but no one has come close to designing even one way to do it all -- even though we call this the "easy" problem!)

      There is something in "scientific" explanation called "underdetermination." It is true for every causal explanation that there's no way to know for sure that it's the "right" explanation -- the one that the gods actually used to design the universe. Maybe it just works, and fits all the evidence, but it's the wrong explanation. That's just one of those unresolvable uncertainties that Descartes' method of doubt reminded us that we could never reduce to zero. All scientific theories (not just cognitive scientific theories of how the brain works) are underdetermined; we can never be sure they are right, even when they account for all the data. This is "normal" underdetermination.

      The underdetermination of theories by data is also similar to the "other-minds problem," and this is worth thinking about: That's why we have Renuka in our course: She was built at MIT in 1992, but does it make any difference whether she was built the "right" way? Suppose Riona was built at CalTech around the same time, and works in a completely different way: The mechanism can't be "right" for both of them -- and maybe neither mechanism would be the same as the one in our brains. But the first question to ask yourself about Renuka and Riona is: If you know they don't have brains like ours, is it alright to kick Renuka and Riona? If not, why not?

      Read this week's readings about the "Turing hierarchy" -- from t1 to T5 -- and then let's talk about this again.

    2. "I don't want to know how a robot does it: I want to know how we do it."

      Hi Prof Harnad,

      In the introductory lecture we came to an agreement that cognition is "whatever is going on inside an organism that allows it to do what it does." In this reply, and in the paper annotating Turing's Computing Machinery And Intelligence, it seems that you're suggesting (and please correct me if I'm wrong) that if we build a machine that could pass the T3 level of the Turing Test, we will have built a cognizing machine because it has the software and hardware to do as we do. I can concede that this is cognition, but I'm not thoroughly convinced by your 'degrees of freedom' argument.

      I agree that as we scale up the complexity of what we ask such a machine to do, there are fewer and fewer mechanisms that would allow it to do those things, but are there really so few that we can be sure that building a T3 machine will tell us anything about ourselves? In our definition of cognition, we don't limit our definition to humans, but discuss organisms in general. There are many animals with nervous systems and brains, which by our definition are "cognizing" but their mechanisms for doing so are wildly different. Even when behaviours are comparable across species, for example vocal learning in zebra finches and humans, their underlying neural mechanisms are provably very different. Since different animals have developed different mechanisms for cognizing, I don't think it's so difficult to believe that a T3 level machine could have an entirely alternative means of cognizing. To take this one step further, if we imagine an individual who has suffered from a small stroke, but has been rehabilitated and recovered fully to the extent that the effects of the stroke are indiscernible by an outside observer, this person can "do what [we] do" but most definitely has some alternative neural mechanism underlying their cognitive capacity. I believe the same can be said of a T3 machine.

      You go on to ask whether it "make[s] any difference whether [a machine] was built the right way," and I would assert that it matters very much. While I believe that a T3 machine would provide a great deal of insight into the 'how' of our cognition, I don't think that it would solve the easy problem of cognition/consciousness. And this, I don't believe is a matter of underdetermination, but the fact that I don't think a T3 machine could necessarily provide a convincing answer to the easy problem.

      You go on to ask whether "it is alright to kick [such a machine]", and my intuition would say that you shouldn't, just in case this cognizing machine is also a feeling machine, but I am also skeptical about this. Being "built the right way" is important in this case as well because if, as I mentioned, the machine cognizes using different mechanisms, how should we know that it feels as we do? I suppose it is possible that I'm using the other-minds problem as a crutch in this instance, but I feel that our natural inference that other people (and organisms) feel is more sound than inferring the feelings of a machine because I have good reason to believe that these other biological systems cognize and therefore feel as I do...

      Or maybe I don't know what I'm talking about, and this is just an underdeveloped form of machine speciesism.

    3. Making Up Minds

      Good points, Chris!

      I am guessing that you would prefer T4 (indistinguishable internal neural function + T3) to just T3. You could be right. That T3 is the right level is just "Stevan Says."

      But let me suggest some other things to think about in relation to your dissatisfactions with "just" T3:

      (1) Seems premature to say T3 wouldn't be good enough because there are so many ways to pass T3, when we're nowhere near passing it even once...

      (2) Yes, animal T3s would be easier, but we know our total human capacities pretty well, from inside and out, whereas we don't know the total capacities of other species; they are behind the other-minds barrier. Also our "mind-reading" skills work best with our own species. We could probably make a T3 spider that passed the spider TT for us, but would it really have all the capacities of a spider? (Forget about asking a spider: All you have to do is make it smell like a spider and it will try to mate with it, or eat it...).

      (3) Yes, the brain has plasticity, and the internal functioning of a person after a stroke has changed. But those are changes in the fine-tuning. Chances are they are both cognizing in roughly the same way. (Perhaps all mammals are, but we can do much more, because we have language. We'll return to that later in the course.)

      (4) All scientific explanations are underdetermined. It's very possible that at the end of the day, there will be not one but several different "Grand Unified Theories of Everything" in physics, all of them Turing Indistinguishable, because they all predict all possible data, and they all causally explain it all, but in a different way, and we will never know which (if any) is right. (This is called "normal underdetermination." There's no way around it.)

      (5) In cogsci reverse engineering we could insist on T4 rather than just T3, because what the brain does is data too. But there is such a thing as overdetermination too: You are worried that we might miss the crucial feature of brain function for cognition with T3; but there also may be a lot of details of brain function that are irrelevant to being able to cognize (like vegetative function). If you insist on T4 (which is even harder to reach than T3) you may be insisting on a lot more work for nothing (for all you know), because, in the end, T3 is the only test for whether a feature of T4 is or is not necessary for being able to do anything and everything we can do. (Also, brain data do not explain for themselves how and why they generate behavioral capacities. We'll be discussing that in week 4.)

      (6) There is no "easy problem of cognition/consciousness (= feeling)," just an easy problem of doing-capacity. But we know cognition is more than just doing-capacity: it's also feeling. It feels like something to be able to do all the things we can do (though introspection does not tell us how we do it). And there's no way to know whether anyone other than you feels. What the brain does is just doing too. It does not solve the other-minds problem. And that's not even the "hard" problem. The hard problem is explaining how and why organisms feel. T3 does not solve it. But neither does T4.

      Now let me ask you: if Renuka or Riona really had been made in MIT, and this course were on explaining the mechanism that generates their capacity to do everything we can do, are you really saying you would not be satisfied that you had learned what cognition was, because maybe there's another, radically different way to do it? Yet you wouldn't feel comfortable about kicking them? Sounds like conflicting intuitions. "Stevan Says" you're probably better off trusting your mind-reading/Turing-testing skills. I'm only a pygmy, so there's no need to believe me. But Turing says the same thing too...

    4. “I agree that as we scale up the complexity of what we ask such a machine to do, there are fewer and fewer mechanisms that would allow it to do those things, but are there really so few that we can be sure that building a T3 machine will tell us anything about ourselves?”

      I have two comments in response to this skywriting:

      First, can you (either Steven or Chris) explain the logic behind the idea that as you increase the complexity of what we are expecting a machine to do, there will be fewer mechanisms that allow it to do those things?
      In addition, what defines “complexity”? The number of different things? The novelty of the responses/actions that you are expecting? My intuition is that as something gets more complex there are more possible strategies/mechanisms rather than fewer. For example, tying a knot in a string can only be done in one way, in comparison to tying shoelaces with a double knot. For the latter, some will tie a knot and then use the “two-bunny ears” approach, while others will use a “single ear loop around” approach. As the task becomes more “complex”, the possibilities for the method seem to increase.

      Additionally, I have to agree with/add to Professor Harnad’s response by saying that producing a machine that cognizes (possibly in a different way to us) has taught us something about cognition. I think the most important thing to take away from this situation would be that we have learnt a possible mechanism for cognition, and we can hypothetically test these mechanisms experimentally to infer if humans employ the same ones, or even if humans recovering from stroke employ the same ones.

    5. " First, can you (either Steven or Chris) explain the logic behind the idea that as you increase complexity of what we are expecting a machine to do there will be fewer mechanisms that allow it do to those things?"

      I'll take a shot at explaining the logic there, but I could be totally off base. Let's take for example the way physics explains the dynamics of the universe and everything within it. If we consider the small scale stuff - dynamics, electromagnetism, gravity - we recognize that these are physical phenomena embedded in a larger universe (picture toy robots emulating individual capacities of a true cognizing T3 robot) for which there exists a huge array of causal equations (many ways of generating toy capacities, many degrees of freedom). As soon as we scale up these small phenomena and ask physicists to give a grand unified theory of everything (GUTE) which encompasses all of these phenomena, there are suddenly fewer ways to do so (fewer degrees of freedom). So it is with a T3 robot. Once we try to create a robot that can do all these toy capacities AND everything else we can do, the potential computation/dynamic combinations that we use to reverse engineer cognition shrink significantly.

      As far as complexity, I would define that as the difference between toy capacities and a T3 robot. Forms which are complex subsume the performance capacities of multiple forms which are inherently less complex. I think this sort of definition is part of the hierarchical nature of the Turing Test.

  14. Harnad stated in the lecture “it is not that cognition is not computation at all, rather, cognition is not all computation.” According to my understanding of computationalism, cognition equates to computation, which in turn equates to intelligence, and intelligence is a behavioural phenomenon. So, in this view, cognition exists without interpretation and meaning. However, this is not a strong argument, as intuitively we know this is not the case: one of the core features of humans is to communicate through the shared meaning of verbal/non-verbal symbols and the specific things they refer to. Do all symbols hold meaning, and if so, how does our brain acquire and encode these meanings? As I write this, it becomes clear that stating “that intuitively this is the case” is a circular argument and that “how” we (i.e. the mind) do anything is increasingly tough to deduce.

    These two distinctions/concepts not entirely clear to me:
    1) The halting problem – I know it was mentioned above that we need not worry about the details; however, in relation to the strong Church-Turing Hypothesis and that “some problems [are] provably un-computable,” (Horswill) does that mean that whatever cannot be simulated by computation cannot be solved by humans and vice versa?
    2) What are the differences between noncognitive vs. cognitive vs. subcognitive? It says that “Zenon relegated everything that was noncomputational to the ‘noncognitive’ [and] it occurred below the level of the architecture of the virtual machine” – does this refer to the physiological level of neurons?

    ReplyDelete
    Replies
    1. Thoughts About Thinking

      Cognition is thinking. It feels like something to think, but introspecting about what we are thinking does not explain how we think, nor how we can do all the other things we can do. (Doing is behavioural.) The causal explanation of how we can do all the things we can do will also be the explanation of what thinking (cognition) is. (We're waiting for cognitive science to show us.)

      Intelligence is cognitive capacity. Symbols are just arbitrary shapes and computation is just manipulating those arbitrary shapes according to rules that operate only on the symbols' shapes, not their meanings. The results of the computation (and some of the symbols and manipulations) are interpretable to us, hence meaningful. But the interpretation comes from us, not the computation.

      Language is not computation, because we manipulate words based on not just their shapes but also their meanings. How words (verbal symbols) get their meanings is the symbol grounding problem.

      Whatever cannot be computed cannot be computed. Formal computation by mathematicians (and computers) is what the weak Church/Turing Thesis is about.

      The strong Church/Turing Thesis is about simulating (discretely, approximately) any physical process at all with computation. Computer simulation is in some ways similar to verbal description: I can describe just about anything in words, in as minute detail as I wish. But the verbal description is not the same thing as the thing it is describing.
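      As a toy illustration (my own sketch, not from the readings) of what such a discrete, approximate simulation looks like: the short program below steps through Newton's law of cooling for a furnace. It generates numbers that we can interpret as temperatures, but nothing in the hardware gets any warmer -- just as a verbal description of a furnace heats nothing.

```python
# Toy sketch (assumed constants): a discrete, approximate simulation of a
# furnace cooling by Newton's law of cooling, dT/dt = -k * (T - T_ambient).
# The program manipulates symbols we *interpret* as temperatures;
# it does not heat or cool anything.

def simulate_cooling(t_start=400.0, t_ambient=20.0, k=0.1, dt=0.5, steps=10):
    temps = [t_start]
    t = t_start
    for _ in range(steps):
        t = t + dt * (-k * (t - t_ambient))   # Euler step: discrete approximation
        temps.append(t)
    return temps

print(simulate_cooling())
```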

      Analog processes (physical, dynamical, continuous) like a sun-dial can be used to generate results (like the time of day) that are useful to us and that we also interpret, but that is not what Turing means by computation. It's possible that some results that cannot be generated by Turing computation can be found using dynamical systems, but that does not make them computable (in the formal Turing sense). (There is really no such thing as analog "computation," even though people sometimes describe dynamical processes that way if we "use" them to generate results.)

      We don't think of "vegetative" processes in organisms (like breathing, balance, temperature) as cognitive, so they are non-cognitive. Motor skills (like walking, swimming, playing tennis) are probably also not just cognitive.

      But the cognitive/"subcognitive" distinction is just a computationalist's fantasy. If cognition were just computation (it's not), then the dynamical hardware (physiology) implementing the cognition/computation would be "subcognitive." The dynamical details of the hardware specifics are irrelevant to computation (though hardware is of course necessary to run the software).

      But ("Stevan Says") cognition is not just computational. It's hybrid: dynamic/computational. So the dynamics are not just the hardware, running the cognitive software from below; they are part of cognition itself. In fact, the dynamics ground the verbal symbols of cognition, by connecting words with their referents: i.e., with the things the words refer to, through our sensorimotor categorization capacity.

      (Zenon says that whatever brain activity occurs "below the level of the architecture of the virtual machine” is noncognitive, but the notion of the "virtual cognitive machine" is itself homuncular: Who's "using" that "virtual" machine? Zenon is thinking of the hierarchy of programming languages, from bottom-level binary 0/1 code up to higher-level programming languages, closer to English. But those are all just doing squiggle-squoggling. The interpretation comes from us.)

      Here,
      Even though a computationalist thinks otherwise. is what a
      Level
      Language really is: it's
      Only an acrostic. Nobody home. Ditto for the "virtual architecture"...

      Delete
  15. Reading the paper by Ian Horswill, I was struck by how using basic computational principles one could make the argument in many cases that computation is a really good model for human cognition.
    Universal machines: they cannot necessarily compute anything at all; rather, they can compute anything that another machine can compute.
    “The amazing thing is that nearly all digital computers are Turing complete. In fact, some stunningly simple mechanisms are Turing complete.”
    It seems to me that if you were to do a morally questionable experiment where you took an infant at birth, you could “compute” it in a B.F. Skinner-type fashion to be able to do mental processes or have certain intelligences, provided it had the proper teachers and resources. In this example, the child is like a Turing-complete machine, and the other people teaching the child are like other programs/machines that are being mimicked. This is oversimplifying learning and human behavior of course, but it is a really interesting parallel to me that you could (given the proper, admittedly impossible circumstances) teach any person any skill in the same way you could program a Universal Turing machine to run any program that any other computer could run.

    I am interested in the way some commenters have spoken about simulation and personal identity as well as the “Who am I?” blurb Horswill writes on page 17. It reminds me of the classic identity problems in an introductory philosophy class, for example. If you downloaded all parts of my brain onto a program and ran the program so that the more input it received the more output it could construct from my memories and so on, while it seems intuitive to say it is not “Julia” in the proper sense, if we accept behavioral equivalency it seems that you have cloned me.
    In fact, if we accept the fact that “there are fundamental limits to the possibility of human knowledge (17)” it also seems like at a certain point this “Julia” program would be able to respond in a way that is more true to me than I would be able to, because of different heuristics and human error. Would a theoretical machine with my brain be subject to the same biases and illusions I would be? Let us say that at least at the very moment the program is created, there is a phenomenological copy of me who has the same lower and higher level abstractions it uses to navigate the world. I understand that (as mentioned above in the Renuka/Riona comments) the moment I see there is this program we are different, but it seems really odd to me that for a single second or however long it is, there is an exact clone (of at least my physical symbol systems, I believe?) because we represent the world with the same abstractions at this point. I suppose you could argue that maybe the program only has my “knowledge level” of abstraction, but if it were possible to also give it my “symbol level” of abstraction, wouldn’t we stay exactly the same mentally from that point on (because we would respond internally the same way to situations)?

    ReplyDelete
    Replies
    1. Computation is rule-based symbol manipulation. You can't "compute" infants -- though of course, since symbol shapes are arbitrary, you can use babies for your symbols instead of 0's and 1's...

      A child, growing, learning and interacting with other people is only computing according to computationalism. If computationalism is wrong, it's doing other things too.

      And teaching is definitely not the same thing as computation -- except when all you are teaching someone to do is to manipulate arbitrary symbols (meaningless to them). In that case you are just making them into the hardware that implements a computation. (This is rather like what happens in Searle's "Chinese Room" [week 3].) It's also probably what it was like when you were first taught the recipe for factoring quadratic equations "aX**2 + bX + c = 0" using the rule "x = (-b +/- SQRT(b**2 - 4ac)) / 2a" without knowing what it really all meant... (Maybe some of you still don't know!)
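      To make that concrete, here is a toy sketch (my own illustration, not from the reply) of that recipe carried out purely mechanically. Whoever -- or whatever -- executes these steps need not know what a root, or even a number, means:

```python
import math

# Toy sketch: the quadratic recipe applied as pure, mechanical rule-following.
# For a*x**2 + b*x + c = 0, the rule is x = (-b +/- sqrt(b**2 - 4*a*c)) / (2*a).

def solve_quadratic(a, b, c):
    disc = b**2 - 4*a*c            # step 1: compute the discriminant
    if disc < 0:
        return None                # no real roots; the recipe stops here
    root = math.sqrt(disc)         # step 2: take its square root
    return ((-b + root) / (2*a),   # step 3: plug into the formula
            (-b - root) / (2*a))

print(solve_quadratic(1, -3, 2))   # (2.0, 1.0): x**2 - 3x + 2 = (x - 1)(x - 2)
```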

      You can't "download" Julia unless computationalism is true... Real cloning is just transferring the genetic code for protein synthesis (which is also not just formal computation), not the transfer of software from one hardware to another. And Renuka and Riona are robots, doing dynamics, not just computers, manipulating symbols.

      Delete
  16. Despite the many issues that arise in the attempt to equate cognition to computation, I think it is a useful tool in terms of expanding the ideas we have about cognition. For example, in computers we talk about primitive operations which the hardware can execute directly. This can be paralleled with many human functions such as breathing, heart rate, etc., but also less obvious ones such as language. Undoubtedly, babies cannot speak as soon as they come out of the womb, but is the ability to acquire language a primitive operation?

    In the article "What is Computation?", it states that Alan Turing argued that 'intelligence is ultimately a behavioural phenomenon. That is, an entity if intelligent because it behaves intelligently, not because it's composed of some special substance such as living tissue or a soul. Moreover, intelligence is a computational phenomenon, amenable to computational analysis." This seems to be assuming that humans always behave intelligently. And now of course you run into the task of defining intelligence. I suppose in this instance, Turing is suggesting that intelligence is cognition (and vice versa). And perhaps cognition involves intelligence a good amount of the time, but it's not all that cognition is. Cognition can cause erratic and illogical behaviour due to the emotions experienced by human beings. So I suppose my main problem with the idea that cognition is computation is that I don't believe you could program a machine to experience the same complex emotions as a human being. Perhaps you can program a computer to become depressed if it experiences x amount of unhappy experiences. But will it truly feel sad? I guess this comes back to what you were saying it class, that machines can't feel. What about love? How could a machine possibly experience that in the same way as a human being? There is first the problem of what program to write in order for the computer to fall in love (say, if x amount of pleasant things happen because of x person), but more importantly what would that then result in? Say, whenever x person is around, act happy. This all of course is a gross oversimplification of love. And perhaps I am just not enough of an expert of computation, but I don't believe you could compute love. It's a subjective experience and something we feel internally and upon introspection.

    ReplyDelete
    Replies
    1. A "machine" is just a causal system. A vacuum cleaner or a computer are human-made machines, but organs and organsims are also machines. So the answer to your question of whether a machine can feel is of course Yes. But the real question is: what kind of machine can feel? Some kinds of organisms can feel. Probably not plants, bacteria or one-celled animals, but surely mammals, birds, fish, other vertebrates, invertebrates (all machines).

      Can a computer feel? Extremely unlikely (and "Stevan Says" no), no matter what program they are running. But reasons are needed to support this. Searle gives one; the symbol grounding problem is another. Probably there are more...

      Delete
  17. While reading the Horswill explanation of computational neuroscience I found myself questioning the way that emotions are able to be simulated by a computer (Turing machine), since emotions seem to have a unique process of interaction with the environment outside of the brain in a number of ways (computations like solving an addition problem or thinking "I am Alba" don't seem to have the same bottom-up manipulation as emotions). How could an emotion, say the feeling of happy, be an output of a Turing machine if there is an infinite number of factors at play with a human emotion of "happy" (from environmental stimuli, physical stimuli, pre-conceived ideas, memories, personality traits, mind-altering substances, etc.), while for a Turing machine there has to be a finite set of instructions for it to work in the first place? Furthermore, isn't the output of "happy" (or any emotion) a very subjective and therefore NOT universal concept? It is obviously manifested in many different ways. That being said, if a computer can simulate the human brain, Horswill wonders (as do I), "Is the particular substance of our construction – biological tissue – somehow crucial to our identity, or could we, in principle, map out all the neurons, synapses, and connection strengths in our brains, and make simulations of ourselves?" It seems that our brains undergo much more manipulation by way of interacting with external things rather than being simply an enclosed computation device. (I'm even thinking about how brain plasticity would make sense here?) Furthermore, if "computation is the process of producing some desired behavior without prejudice as to whether it is implemented through silicon, neurons, or clockwork," and if, in the case of a human brain, it comes down to that specific brain, in that specific human, at that specific time and place in order for the exact feeling of "happy" to be a possible output... does cognition really equal computation in the way that it can be simulated by a computer? Could a computer have an input that replicates that situation exactly?

    ReplyDelete
    Replies
    1. Alba, you don't need to go all the way to emotions to be sceptical about whether computers feel: Any feeling will do, including the feeling of touching a smooth surface, seeing red, or hearing an oboe. You can attach sensory-motor peripherals to a computer, but then that's not a computer any more, it's a robot. We are robots too, bio-robots, made of certain kinds of tissue. But the basis for being sceptical about whether a robot is made out of the "right stuff" to be able to feel is not the same as the basis for doubting that a computer (hence computationalism) can feel, because a computer does not have any "stuff" to feel with. (And the peripherals are just feeding the computer 0's and 1's; you could have done that without the peripherals!)

      For computation, it is true that the hardware details are irrelevant. If computationalism (which is also called "cognitivism") were true, and cognition were just computation, then the hardware details would be irrelevant for cognition too.

      But is computationalism right?

      Simulating the human brain computationally is just computation too. A simulated brain is not thinking any more than a simulated furnace is heating. For much the same reason, a simulated brain is not feeling either.

      Delete
  18. These readings definitely helped me wrap my head around the different ways of looking at computation but in the end left me a little puzzled. Horswill emphasized that, “Computation is an idea in flux; our culture is in the process of renegotiating what it means by the term.” This leads me to question how we can continue to argue whether cognition is computation when the very definition of computation is not unanimously agreed upon. Perhaps sensorimotor pathways can be seen as computational instead of dynamic?

    Anyways, I would like to argue that computational neuroscience seems like a very promising approach to understanding cognition in an eventual effort to generate cognition rather than simulate it. After all, we have agreed that simulation is not cognition and how can we reverse-engineer any kind of cognitive ability if we still do not understand how we do what we do? In this sense, I feel that it is most obvious to look at brain-imaging studies to see how our “software” is running and to study neural representations as we process information. One area of great interest in brain-imaging studies nowadays is the resting-state of our brain – where spontaneous neuronal activity occurs. fMRI has already shown a correlation of neural activation between brain regions that are said to comprise different functional connections and networks. In particular, the default mode network (DMN), which is active during wakeful rest, is said to be involved with self-referential processing, mind-wandering, visualizing the future, etc. This could be a very exciting path into further understanding how “thinking” occurs especially since functional connectivity is different in people with Alzheimer’s, schizophrenia and autism while at rest.

    In my lab working with MEG, I am able to examine the coupling between the phase of slow oscillations and the amplitude of high-frequency oscillations, that is resolved in milliseconds. This kind of “Phase Amplitude Coupling” (PAC) is a mechanism that is responsible for long-range neural communication within the brain and could help to answer the “how” component of cognition.
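    For concreteness, here is a minimal sketch (my own, with assumed variable names and band choices) of one common PAC measure, the mean-vector-length modulation index, computed from a slow and a fast signal via the Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

# Toy sketch of a mean-vector-length PAC index (assumed, simplified inputs):
# 'slow' and 'fast' are taken to be already band-passed signals (e.g. theta, gamma).
def modulation_index(slow, fast):
    phase = np.angle(hilbert(slow))   # instantaneous phase of the slow oscillation
    amp = np.abs(hilbert(fast))       # amplitude envelope of the fast oscillation
    # Large values mean the fast amplitude is systematically tied to the slow phase.
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic example: 60 Hz amplitude locked to 6 Hz phase.
t = np.arange(0, 10, 0.001)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t)
print(modulation_index(theta, gamma))
```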

    All in all, yes, cognition may in fact be a dynamic information-processing system rather than strictly computational. Even today with our supercomputers we are unable to simulate brain activity at a reasonable rate (it took the K computer in Japan about 40 minutes to simulate 1 second of brain activity http://www.telegraph.co.uk/technology/10567942/Supercomputer-models-one-second-of-human-brain-activity.html). There must be some other kind of mechanisms involved to facilitate information transfer – whether it is computational or not we still do not know. Perhaps PAC can serve as one of the mechanisms at play here. When it comes down to attempting to create a kind of “sentient” robotic machine, neural networks within the brain must not be overlooked.

    ReplyDelete
    Replies
    1. As you'll see once you read Turing's article and about what a Turing machine is, the notion of what computation is is not in flux. (It's pygmy minds -- like Horswill's, if he thinks the notion of computation is changing -- that are in flux!) The Weak Church/Turing Thesis, according to which every example of computation to date, and every other formalization of computation to date, is Turing computation, has so far never been wrong. And Turing computation is hardware-independent.

      If you don't believe computationalism, then what you are seeing when you look at brain activity is not how our software is running!

      When you do your neural imagery, ask yourself sometimes: am I getting closer to figuring out how the brain generates my capacity to do what I can do? It's easy to find brain activity correlated with "self-referential processing, mind-wandering, visualizing the future" but quite another matter to explain how the brain generates those capacities.

      "the coupling between the phase of slow oscillations and the amplitude of high-frequency oscillations,... “Phase Amplitude Coupling” (PAC) is a mechanism that is responsible for long-range neural communication within the brain and could help to answer the “how” component of cognition."

      Perhaps, but long-range neural communication in the brain is not part of T3 (our brain's external doing-capacity: sensorimotor + verbal) but of T4 (which is T3 + our brain's internal neural doings, like "long-range neural communication within the brain"). There's still not much known about how the internal neural doings generate the external doing-capacity.

      T3 is the name of the game. T4 is welcome to help. But it's only helping if it helps generate (and hence explain) T3, not if it changes the subject...

      Delete
  19. (From What is Computation?)

    “Another possible answer is that computation is “information processing”, although that then begs the question of what we mean by information. A pragmatic definition would be that computation is what (modern, digital) computers do, so a system is “computational” if ideas from computers are useful for understanding it. This is probably the closest definition to how real people use the term in practice. As we’ve seen, there are a lot of different kinds of systems that are computational under this view. But that isn’t a very satisfying definition. Another view would be that computation is wrapped up with the study of behavioral equivalence. Under this view, computation is the process of producing some desired behavior without prejudice as to whether it is implemented through silicon, neurons, or clockwork.”

    While I found this entire article interesting, I chose this section specifically to comment on since it contains most of the running definition variants used for the term computation. My issue with the author's point about ‘what is the best way to define computation’ is that it seems like further defining the word is unnecessary. That is not to say that the study of how computers and brains compare is unnecessary, but rather it seems that there is a scramble to stretch the word out to the limits of its definition. The field is trying to determine what fits under the umbrella of computation. The author points out that the layman's definition of computation is ‘any activity that is best explained in terms of what we know a computer can do’. It seems as though the initial term was well defined through the Turing machine, and we are now trying to put too many eggs in the same basket, metaphorically speaking of course. When looking at the origins of the word, computation was the human ability to do mental mathematics. By extension, the computer was so named because it was simulating this capacity to its maximum. As we expand the capacity of the computer, it would be much more logical to define new terms to identify neurological or computer functions. Well-defined variables are key to new discovery and understanding. Ultimately, the question of ‘what is computation’ is a linguistic particularity that distracts away from potential discovery and, in my opinion, is the reason there is still so much debate around the theory of computationalism. We have yet to decide on a definition for computation, and so continue to argue over whether cognition is computation. This leads many scientists, especially computationalists, to look inside the box of computation to explain everything to do with cognition, rather than look outside the box altogether to come up with new theories. It's like trying to find one specific puzzle piece in a set of 1000 – it can almost definitely not be done, especially since the piece hasn't been defined. There is too much focus on figuring out what the ‘computation’ puzzle piece looks like, and not enough focus on defining and building the rest of the puzzle. If we do the latter first, then ‘computation’ will be the last piece left.

    (from Artificial Intelligence)

    The physical symbol system hypothesis seems to be a more abstract version of the computationalism theory, but instead of strictly considering input/output computations, it considers general manipulation of symbols through whatever the symbol system (or, in terms of computationalism, dynamics of computation) can do. I may be trying to draw too many connections here, but this hypothesis seems to be a more general reiteration of computationalism. This goes back to my comments on the previous article – that perhaps the debate over computationalism is more a debate of linguistic particularities than anything else, as we attempt to overstretch the definition of the term computation, or limit our exploration of cognition to something that already conveniently fits into an explainable model.

    ReplyDelete
  21. This comment has been removed by the author.

    ReplyDelete
  22.  “Moreover, intelligence is a computational phenomenon, amenable to computational analysis” (Horswill, 2008).

    I am interested in how considering intelligence as a strictly computational phenomenon applies to different types of intelligence. It is quite straightforward when thinking about intelligence in Turing's terms, namely “that if a computer could fool humans into thinking it was human, then it would have to be considered to be intelligent even though it wasn’t actually human.” But what if we related this to social intelligence or emotional intelligence? I would think an intelligent computer would not be able to fully implement social intelligence, as it requires the ability to judge expressions, qualities, actions, etc. that are strictly human. In addition, social and emotional intelligence require judgments based on one’s intuition, an attribute that no intelligent computer possesses.

    With regards to emotional intelligence, to possess such a quality necessitates that the entity in question has emotions. Intelligent computers don’t possess emotions, because emotions describe how one feels, and if a computer were able to feel we would have to say it is conscious, which is not the case. If we are calling something intelligent, shouldn’t it fulfill all aspects of intelligence? And wouldn’t this then make intelligence a human phenomenon and not a computational one?

    ReplyDelete
  23. The Artificial Intelligence article emphasizes using models with multiple levels of abstraction. Although this article was very insightful, I have a few questions in regards to the multiple levels of abstraction. When using multiple levels of abstraction, how is one able to clearly distinguish between the higher and lower levels if they are qualitative and not quantitative? If there are no concrete criteria for what constitutes a higher and a lower level, and it is all subjective, how can we be sure we are using both kinds of levels? Since they do not have to be quantitative, it is much harder to distinguish between them. Therefore, I believe that the levels of abstraction are too arbitrary. In my lab at the MNI, we assess Alzheimer patients based on qualitative criteria. Then, we manipulate the qualitative data into a quantitative form for further analysis. This is much more useful and helps provide a concrete basis for comparison and analysis. The article features an example that models an environment with high levels of abstraction such as rooms, hallways and doors. Then, lower levels of abstraction are described as ‘details’. However, I believe it is too arbitrary to describe doors as higher level because they could also be portrayed as a detailed lower level of abstraction.

    ReplyDelete
  24. "When one neuron stimulates another neuron, it predictably increases or decreases the rate or likelihood of the second neuron stimulating other
    neurons. If it’s predictable, then it should be possible to write a computer program that predicts and simulates it, and indeed there are many computational
    models of different kinds of neurons.  But if each individual neuron can be simulated computationally, then it should be possible in principle to simulate the
    whole brain by simulating the individual neurons and connecting the simulations together. While there’s a difference between “should be possible” and “is actually possible”

    As is mentioned in the piece, if we assume that computers can simulate a brain, then we acknowledge that human knowledge will always be unable to solve uncomputable problems, just like computers. But what if we assume that computers are not able to simulate brains? That would mean that there is something more than computation that our brains are doing. My question is: does this force us to adopt a dualist point of view?
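    To make the quoted claim concrete, here is a minimal sketch (my own toy example, with assumed constants) of the kind of neuron model the passage has in mind -- a leaky integrate-and-fire unit whose predictable rule is "integrate input; spike and reset when a threshold is crossed":

```python
# Toy sketch (assumed constants) of a simulated neuron: leaky integrate-and-fire.
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # integrate the input, with leak
        if v >= threshold:        # predictable rule: crossing threshold -> spike
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))   # [0, 0, 1, 0, 0]
```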

    ReplyDelete
  25. Since the first lecture, I have been struggling with refuting computationalism, the idea that cognition is computation, on the basis of “feelings.” This theory proposes that human cognition, which is everything that goes on inside the brain to do whatever it is that we do, can be represented in terms of what computers do. Refuting computationalism would be denying any possibility of programming or designing a machine to do what humans do.

    After reading Horswill’s “What is computation?” I was left leaning more towards computationalism. Hypothetically, we should be able to design a complex system in which each neuron of the brain is represented computationally, and this may lead to a set of outputs that correspond to its inputs. For example, when I see a bright light (the input), the photons will hit my retina and through many processes that include many parts of the brain, my pupils will constrict (the output). If we were to design a machine that responds to photons by sending out an output signal to “constrict its pupils,” the output of this machine and my brain are behaviorally equivalent: “if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used” (Horswill, 2007).
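    As a toy illustration of that input/output equivalence (my own sketch, with hypothetical function names): the two procedures below map a light level to a pupil response by entirely different internal routes, yet are behaviourally equivalent in Horswill's sense:

```python
# Toy sketch (hypothetical names): different internals, same input/output behaviour.

def pupil_lookup(light_level):
    # Implementation 1: a stored table.
    table = {0: "dilated", 1: "neutral", 2: "constricted"}
    return table[light_level]

def pupil_rules(light_level):
    # Implementation 2: explicit if/else rules.
    if light_level <= 0:
        return "dilated"
    elif light_level == 1:
        return "neutral"
    return "constricted"

# Behaviourally equivalent over the shared range of inputs:
print(all(pupil_lookup(x) == pupil_rules(x) for x in (0, 1, 2)))   # True
```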

    Of course, there is the problem of feelings: it can be argued that since machines do not have “feelings,” they are but a simulation of the brain, so the machines perform a simulation of cognition, rather than the real thing. However, if we were to design a computational network that performed identically to a brain, a question arises whether the idea of “feelings” or a “soul” will be present in the machine as a by-product of the internal processes. Although to say that the idea of “feelings” is exclusive to cognitive organisms seems romantic, if we were to assume that the machine is incapable of “feeling” like we do, then we run into the other-minds problem: if the outputs of the machine are behaviourally equivalent to ours when presented with an input, how can we know for certain that it does not have feelings?

    ReplyDelete
    Replies
    1. “Refuting computationalism would be denying any possibility of programming or designing a machine to do what humans do.”

      I don’t think that refuting computationalism denies any possibility of designing a machine or robot that does what humans do. As Professor Harnad mentioned in class, computation can definitely be considered as a part of cognition but it isn’t all there is to cognition.

      Also as a side note, I think that your example of pupillary constriction would be an example of a vegetative rather than a cognitive process!

      Delete
  26.  "But if each individual neuron can be simulated computationally, then it should be possible in principle to simulate the whole brain by simulating the individual neurons and connecting the simulations together."

    This seems to be a pretty fundamental issue that comes up when we try to define the concept of computationalism. Basically, if we can simulate the single neuron, can we simulate the entire brain? I have some issues with this question that I would like to talk about.

    So, the idea that we can simulate the neuron comes from the initial debate on what it actually means to do computation. There have been lots of approaches to find a singular definition, but Alan Turing was able to devise an incredibly simple machine that could perform the computations of any computing device. Some people proposed that many functions of the human brain can be represented by the Turing Machine. The paper above discusses how the parts of the neuron can be represented by the parts of the Turing Machine. So if we can simulate what seems to be the basic unit, can we go on to simulate the entire brain?

    Can we simulate the neuron with a Turing Machine? My issue lies with the idea that the program is supposed to be represented by DNA. In other words, the set of stored procedures that instruct the neuron is supposed to be DNA. I can see how this would work in some examples (for instance, the procedure for establishing gender of the individual). But I think it is a weak statement to contend that DNA is the set of instructions for everything the brain does (for instance, saying something ironic). The computational model of the neuron seems applicable to biology, but not to psychology. That is, the model of computation can cover the vegetative functions of an organism, but I do not think it works for the cognitive functions.

    Furthermore, I think I am starting to see what might be an important distinction between computation and cognition. In computation, there are meta-representations, in which all other representations can be stored. This is because everything uses the same format for coding either data or procedures. So this is how a computer can program another computer, because everything uses the same underlying units (kind of like legos). Something similar to this in humans is chunking. This is when several different representations are "chunked" together to form a unified representation. This is how humans can actually remember so much information. But humans are certainly not perfect. The article above actually mentions how computers are much better than humans at remembering details. To me this makes sense, because chunking is not the same thing as forming a meta-representation. A meta-representation relies upon the fact that the units are always the same thing(binary text strings). There is a conversion going on, but no units are lost. All the 'stuff' is still there, simply arranged differently according to the program. But this is not the case in humans. Chunking fundamentally changes the nature of the representations being chunked. For example, a person might chunk these letters: ACLUFSUMIT into ACLU FSU and MIT. The important aspect to me is that units have changed. We have gone from letters to the names for universities. What the letters and universities have in common is that they are both made from characters from the Roman Alphabet. In a computer, there is no difference; units are units. Binary code is binary code. But in humans, the changing of units is what makes chunking so prevalent. It is hard to remember that many letters, but easy to remember just three names of universities. So the three names can be easily recalled, but the letters seem to be almost forgotten. The human remembers the names, and then computes(??) the letters by spelling out the names. The difference that I am trying to make is that meta-representations in humans do not actually contain all the information that they are supposed to cover, but in computers there is never a loss of information.
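    Here is a toy sketch (my own, deliberately oversimplified) of the contrast being drawn: the computer's re-encoding is lossless, whereas chunking replaces the original units with new ones, and the letters are only recovered by "spelling out" the chunks:

```python
# Toy sketch of lossless re-encoding vs. chunking (oversimplified example).
letters = "ACLUFSUMIT"

# Computer-style re-encoding: bytes in, bytes out -- no information is lost.
encoded = letters.encode("utf-8")
assert encoded.decode("utf-8") == letters

# Human-style chunking: what is stored are *names*, not letters; the letters
# come back only by spelling the chunks out again.
chunks = ["ACLU", "FSU", "MIT"]
recalled = "".join(chunks)
print(recalled == letters)   # True here, but only because the chunks cover every letter
```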

    ReplyDelete
    Replies
    1. “The computational model of the neuron seems applicable to biology, but not to psychology” – I am not quite sure what you mean by this. If we are able to computationally replicate a neuron, then it will effectively be able to do everything a neuron can do. Why are you limiting it only to the vegetative functions and not the cognitive ones? I understand the DNA argument that you bring up, but say we manage to computationally replicate/simulate the DNA as well; then won’t we have a model that can do everything the brain can? I know that we are nowhere close in science to making this possible, but theoretically, if computation can model biological structures, it will be able to replicate cognition as well.

      I agree with your last paragraph and like the ideas you put forth. To add to your ideas, I would like to emphasize the “worldly knowledge” that humans possess due to experience, which an AI system lacks. A smart AI system would have a memory-storing process that takes up the least memory and is quick to retrieve. On the other hand, humans can have so many different techniques for memorizing things. It shows a clear distinction between computation and cognition.

      From here I’d like to raise the question: is computation an extension of cognition? If so, I find it interesting how we (humans) are trying to use an extension of our cognition (computation) to model cognition.

      Delete
  27. On January 10th, you answered Anastasia Semikhnenko’s question with the following comment.

    "To pass the Turing Test is not to have the capacity to simulate any physical system, it's to have the capacity to do anything and everything a human being can do. And simulating thinking (cognition) would no more be thinking than simulating a heater would be heating.”

    I am confused by this. If a computer is able to do anything and everything that a human can, would that mean that the computer is imitating a human? What is the difference between simulating everything and anything a human can do versus imitating everything and anything a human can do?

    ReplyDelete
  28. “A physical symbol system has the necessary and sufficient means for general intelligent action. [...] It means that any intelligent agent is necessarily a physical symbol system. It also means that a physical symbol system is all that is needed for intelligent action; there is no magic or an as-yet-to-be-discovered quantum phenomenon required. It does not imply that a physical symbol system does not need a body to sense and act in the world.”
    Even if a general intelligent system needed a physical symbol system, I don’t think it would be sufficient. I find it strange that you could believe it requires no physical body and no relation to the world. So far, the only things to our knowledge that possess some form of intelligence actually have this physical relation to the world. Maybe it depends on what they mean by general intelligent action; I would need clarification on that point. Even if I believe a physical symbol system could perform some task, it doesn’t mean it is aware of what it is doing: it simply follows the rules implemented in its particular system.
    “An agent can use physical symbol systems to model the world. A model of a world is a representation of the specifics of what is true in the world or of the dynamic of the world.”
    An agent can use a physical symbol system to model the world, but does it have to? Would there be another way by which an agent could form a representation of the world? Also, would the physical symbol system be the same for each one of us, or is it intrinsically different based on our composition? Some would certainly argue that the physical agent (hardware implementation) doesn’t matter, but there must (or at least could) be a physical implication for the difference between me being me and you not being like me with regard to reasoning and thinking.
    “Although no level of description is more important than any other, we conjecture that you do not have to emulate every level of a human to build an AI agent but rather you can emulate the higher levels and build them on the foundation of modern computers. This conjecture is part of what AI studies.”
    I would certainly like to take a step into the future to know whether or not AI will actually prove this to be right. The conjecture is that by simulating only the higher levels, AI will reach (or maybe surpass) the level of human capacities. Why I think it might be proven wrong is that neurons, through their biological and chemical operations (lower levels), also do some kind of learning. For example, some genes can be turned off or on depending on the activity level of a neuron, which will in turn modulate the behavior of that neuron. By reproducing only the higher levels of abstraction, AI would miss this important aspect and might not be able to achieve consciousness.

    ReplyDelete
  29. "There’s an old party game called “the imitation game” in which two people, classically a man and a woman, hide in different rooms while the other guests try to guess which is which by submitting written questions and receiving written answers.  The idea is to see whether the hiders can fool the rest of the guests.  In the terminology we’ve used above, the people in the
    rooms are trying to act behaviorally equivalent to one another.   Obviously, this doesn’t mean that if the impersonators win they’ve actually changed identities or sexes"

    Wouldn't the same go for computers and cognizing, in that just because a computer has successfully convinced us it is cognizing by behaving as if it is, that doesn't mean it is necessarily really happening?

    ReplyDelete
    Replies
    1. Yeah, that’s exactly it! The whole idea is that it’s not because a computer can successfully fool a human into thinking it too is a human that the computer is actually cognizing. I.e., a computer’s (hypothetical) ability to do just as a human would do is not evidence that on the inside there is cognizing/feeling happening. Therefore the argument is that doing isn’t enough to prove cognizing, because in principle the doing could happen without any cognizing. So there’s no way of proving cognizing for sure, just like there’s no way of knowing who’s the man or the woman in the party trick.

      Delete
  30. What does it even mean to claim that an organic system functions computationally?

    ReplyDelete
    Replies
    1. Hi Naima,

      For an organic or any dynamical system to function computationally, it must carry out rule-based symbol manipulation based purely on the shapes of the symbols. This definition seems very vague, but that is only because computation is so flexible. Anything that can carry out this sort of symbol manipulation is just as much a computer as a PC is.

      Let’s assume that the computer in question is a Turing machine. All that a Turing machine needs to be able to do is to read symbols which are in a linear order, alter the symbols by erasing them and printing new symbols, and move among the symbols. The part of the Turing machine that does the reading is always in one of a finite set of discrete internal states. Based purely on the state it is in and what symbol it reads at a given time, its programming will determine exactly what it does, from among the above operations. Anything that can carry these operations out in this manner is a Turing machine, by definition. The symbols do not need to be letters written on a piece of paper. They have no meaning for the purposes of computation. All that matters is what the computer’s program tells it to do when it encounters that symbol. As such, they can be anything which the reader is capable of identifying as distinct from the other symbols. They could be boxes of different colours lined up on a shelf or lines in the sand; it does not matter, as long as the part of the computer that does the reading can identify them based on their form and manipulate them in the way that the program requires. For instance, regardless of whether or not cognition is computation, a person can act as a Turing machine and thus as a computer, as long as they are manipulating symbols in the manner described above.
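      As a concrete (toy) illustration of those operations -- read a symbol, consult the current state, write, move, change state -- here is a minimal Turing-machine sketch (my own, with an assumed rule table) that just flips 0s to 1s and halts at a blank:

```python
# Minimal Turing-machine sketch (toy example, assumed rule table).
# Rules map (state, symbol read) -> (symbol to write, head move, next state).
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # "_" stands for a blank square
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]   # purely shape-based lookup
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run("0101"))   # -> "1111"
```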

      Not all computers work exactly like Turing machines, but they all perform formal, rule-based symbol manipulation. So to claim that an organic system functions computationally is simply to claim that it recognizes certain entities—whether they are letters written on a piece of paper, or depolarizations of a given voltage at a given synapse—as symbols based on how they appear rather than what they mean. Based on what symbols it recognizes, it will do what its set of rules tells it to do. It might produce more symbols or move to look at a different symbol, but it will only ever deal with symbols, and only according to a set of rules. If a system does anything beyond this sort of formal symbol manipulation, it is not purely computational, but as long as some of the things that it does conform to these constraints, it is partly computational. If there is something going on in our heads that follows these rules of computation, we are partly computational systems.

      I’m afraid that I’ve done little other than to restate the definition of a computer, but I could only speculate as to exactly how organic systems actually carry out computations. The important thing is that ‘how’ is not the key question, since computation can be implemented in many different ways.

      Delete
  31. Computation is often linked to mathematics. It is implementation-independent, which means that the hardware (the physical way in which an algorithm is executed) doesn’t matter, as long as the result is the same (aka behavioral equivalence). The functional model defines computation as the working of functions using inputs, yielding outputs, and following given procedures (science studies procedural knowledge). The inputs and outputs need to have specific representations, which are processed as information. These representations lead to data structures. There can be many different levels of abstraction in computation, as all intricate operations can be decomposed into the processing of 0s and 1s. Another model is the imperative model. It takes into account the fact that a computation does not necessarily need to yield an output in order to be considered computation. Its only requirements are that it must be able to be followed mechanically, be useful, and use the same techniques as “normal” procedures. Procedures are thus commands that manipulate representations. Computation must also be interpretable, i.e. semantically relatable. However, a computation can also fail to halt, for example when an infinite loop is created. The halting problem is determining whether or not, given a certain input, the program would ever stop running. Computation is also used a lot in neuroscience, e.g. in order to model neurons firing, and in attempts to simulate the brain.
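    On the halting point, here is a sketch (pseudo-code in Python syntax, my own) of the classic argument for why no infallible, fully general halting tester can exist -- assume, hypothetically, that halts(prog, inp) always answered correctly:

```python
# Sketch of the diagonal argument against a general halting tester.
# 'halts' is a hypothetical oracle; the point of the argument is that no
# real program can play its role for every possible program and input.

def halts(prog, inp):
    raise NotImplementedError("hypothetical oracle -- cannot actually exist")

def paradox(prog):
    if halts(prog, prog):     # ask the oracle about prog run on its own text
        while True:           # if the oracle says "halts", loop forever
            pass
    else:
        return                # if the oracle says "loops forever", halt at once

# Now ask whether paradox(paradox) halts: whatever the oracle answers is wrong,
# so no such infallible halts() can be written (though many particular cases
# can still be decided, as discussed earlier in this thread).
```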

    ReplyDelete
  32. The authors state, “For the entire system to run, it has to be realized in some physical form. The structure and the principles by which the physical object functions correspond to the physical or biological level” (p. 57)

    “Regardless of whether one takes the high road or the low road, in cognitive science, one is ultimately interested in where the computational mode is empirically valid – whether it corresponds to human cognitive processes” (p. 79)

    I am having difficulty grasping this concept. It seems that, according to brain imaging, various regions of the brain are activated by different cognitive tasks. Wouldn’t it therefore be a bit much to have a purely computational theory of mind? Wouldn’t a theory that deals with how stuff works inside our heads have to account for the nervous system as well? Simply said, the CTM alone seems like a shortcut for explaining cognition. If physiology contributes significantly to this process of cognition, then perhaps the CTM should explain this as well.

    ReplyDelete
    Replies
    1. Searle and the symbol grounding problem show that pure computationalism is wrong: Cognition is not just computation. But it can be part computation, and that part could be distributed in the brain, and brain imagery could tell you where it happens -- though it does not tell you what is happening there, because computation is implementation-independent. You have to figure out what algorithm is being executed there some other way, such as modelling it, as in T3.

      Delete
  33. One point I would like to raise was when Pylyshyn mentions that computers have had a significant influence on the study of cognition. If cognitive science was not studied before computers were available, would cognitive scientists still believe that the mind is made of a symbol system?

    Moreover, there was one passage of the article that I am still having some difficulty with.
    The article states, “cognitive algorithms, the central concept in computational psychology, are understood to be executed by the cognitive architecture. According to the strong realism view that many of us have advocated, a valid cognitive model must execute the same algorithm as that carried out by subjects. But now it turns out that which algorithms can be carried out in a direct way depends on the functional architecture of the device. Devices with different functional architectures cannot in general directly execute the same algorithms. But typical commercially available computers are likely to have a function architecture that differs significantly in detail from that of brains. Hence we would expect that in constructing a computer model of the mental architecture will first have to be emulated (that is, itself modeled) before the mental algorithm can be implanted.”

    Firstly, I want to say that I agree with the first portion of the paragraph and how the same functional architectures are needed to produce the same algorithms (where there is a minimum of one functional design that corresponds to a specific algorithm).

    However, there are certain issues that arise with the mentioned paragraph. It seems like there is a lack of empirical evidence supporting it, and it seems too simplistic to say that cognitive abilities can be reduced to algorithms. Is there a bigger picture that we are missing? It seems like, within the population, there are too many variations in behaviour and psychology for everyone to have the same capacity for mental algorithms. What about people who have cognitive deficits from certain brain disorders? How are we supposed to test these slight variations in cognitive capacity and thus algorithms? It seems like there is still a lot of research that needs to be done in this field, and how do we know when to draw the line with the differences in cognitive capacities? It seems like everyone has their own unique cognitive capacity; so how are we supposed to draw the similarities?


    ReplyDelete
    Replies
    1. 1. Computation (the Turing Machine) was formalized by Turing before computers were invented. It can be understood without computers. It's what mathematicians do.

      2. Pylyshyn is a computationalist. He thinks cognition is just computation. But he also believes in "strong equivalence," which means you have to find an algorithm that not only gives the same output for every input as the brain does ("weak equivalence") but it has to be the same algorithm that the brain is executing.

      (He is not saying that every individual person must be running all identical algorithms; there can be individual differences too.)

      You don't have to worry about "functional architecture" either: that is probably homuncular and goes against the implementation-independence of computation. The Turing Test only calls for weak -- input/output -- equivalence.
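      A toy illustration of that weak/strong distinction (my own example): the two functions below are weakly equivalent -- the same output for every input -- but they are not strongly equivalent, because they are different algorithms:

```python
# Toy sketch: weak vs. strong equivalence.
# Same input/output behaviour (weak equivalence), different algorithms
# (so not strongly equivalent).

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

print(all(factorial_iterative(n) == factorial_recursive(n) for n in range(10)))   # True
```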

      Delete