Saturday 2 January 2016

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

57 comments:

  1. On reading this paper, it is interesting to consider how artificial intelligence would be regarded, even though it is not directly mentioned in the paper. As has been mentioned, "The gist of the Turing thesis is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition." This passage seems to be describing the advent of artificial intelligence.
    With this statement, it is interesting to consider the point of view of Zenon, who "pointed out that neural nets were (1) uninteresting if they were just hardware for implementing a symbol (computational) system, (2) irrelevant (like other dynamical systems) if they could be simulated computationally, and (3) subcognitive if they could be 'trained' into becoming a symbol system (which then goes on to do the real work of cognition)."
    According to these two authors, if artificial intelligence is properly developed as Turing described, then neural nets will be uninteresting because they will meet the three conditions Zenon lists. They will become uninteresting not because of any intrinsic change but because advances in programming will let programs produce the same outcomes that the neural nets produce. However, will there be some other component that distinguishes these artificially intelligent machines from humans?
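
    To make Zenon's second point concrete, here is a minimal sketch (my own illustration, not from the paper or the comment above) of a tiny neural net being simulated purely computationally: every "unit" and "activation" is just a number manipulated by explicit rules, which is why a simulated net would add nothing over and above the computation simulating it.

```python
# A toy feedforward "neural net" simulated as ordinary symbol manipulation.
# (Hypothetical illustration: the weights below are hand-picked, not learned.)

def weighted_sum(weights, inputs):
    # Pure arithmetic on symbols (numbers): nothing analog is happening here.
    return sum(w * x for w, x in zip(weights, inputs))

def step(v):
    # Threshold "activation": a discrete, rule-governed operation.
    return 1 if v > 0 else 0

def two_layer_net(inputs, hidden_weights, output_weights):
    hidden = [step(weighted_sum(w, inputs)) for w in hidden_weights]
    hidden.append(1)  # bias "unit" for the output layer
    return step(weighted_sum(output_weights, hidden))

# Hand-wired weights that make the simulated net compute XOR.
# Each input vector is (x1, x2, 1), the trailing 1 being a bias input.
hidden_weights = [[1, 1, -0.5],   # hidden unit A: fires if x1 OR x2
                  [1, 1, -1.5]]   # hidden unit B: fires if x1 AND x2
output_weights = [1, -2, -0.5]    # output: A AND NOT B

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, "XOR", x2, "=", two_layer_net([x1, x2, 1], hidden_weights, output_weights))
```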

    Replies
    1. “However, will there be some other component that will distinguish these artificial intelligent machines from humans?”

      We know from the What is a Physical Symbol System? reading that “a physical symbol system has the necessary and sufficient means for general intelligent action” (para. 4), so it can be reasoned that these AI machines are computationally equivalent to humans. However, we also know that Stevan says “computation is not cognition” (introductory lecture), so they are not completely equivalent to humans. There is another component that distinguishes AI machines from humans, and that component is cognition. But how does cognition work? And can it be simulated with computers? If we are able to simulate it, and can successfully interface it with the AI machines, can they now be considered human?

      Ultimately, this article did not completely answer the questions raised in the 1a readings. What can be considered cognitive? How does cognition work? Is cognition uniquely human, or can it be recreated? And maybe that’s because we just haven’t discovered these answers yet. Throughout his article, Dr. Harnad is constantly stating what cognition is not and how cognition does not work. Yet he concludes the article in a way that leaves these ideas totally open-ended: “As to which components of its internal structures and process we will choose to call ‘cognitive’: Does it really matter? And can’t we wait till we get there to decide?” (page 9). So the way I see it, we have reached definitive conclusions about what cognitive processes are not, but we are still searching for a concrete answer on what they actually are and how they actually function. And because of that, we know that there is indeed some component that distinguishes these AI machines from humans, we just do not know how to implement it or if it can be implemented at all.

    2. Cognition is as cognition does (i.e., what thinking organisms can do).

      And the explanation of cognition will be a mechanism that does as cognition does. There's no point trying to define the mechanism that can do it until you have built one and shown that it can.

      Computation was a candidate for being that mechanism, but we can already tell without trying that computation alone cannot do all (or even most or many) of the things thinking organisms can do. Computation, for example, cannot think or move. You need dynamic systems for that. And so computation cannot make the connection between its symbols and the things that those symbols stand for, their meaning.

      But we are not trying to "simulate" cognition: we're trying to generate it, which means generating the capacity to do the doing: the capacity to actually do what thinking organisms can do.

  2. “There is still scope for a full functional explanation of cognition, just not a purely computational one. As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions such as internal analogs of spatial or other sensorimotor dynamics: not propositions describing them nor computations simulating them, but the dynamic processes themselves, as in internal analog rotation; perhaps also real parallel distributed neural nets rather than just symbolic simulations of them.”

    I have two things I’m trying to understand from this paragraph. In theory I understand all of those words separately but I’m unsure of concrete examples of what internal analogs or other dynamic functions are and how they assist their systems. I was similarly confused in our last class discussion when the concepts of dynamics were brought up.

    Secondly, I'm having a hard time understanding how, even with this other dynamic function, the two can work in tandem with a computational system. When trying to imagine how this works, I do not understand why, if the two systems work together, the whole system doesn't get defined as computation. I'm assuming this is the kind of system the paragraph is describing when it calls cognition not "purely computational." An analogy I'm trying to work out: one could say the brain is purely computational, yet there are clearly many processes in the human body that are dynamic and not computational, and as such the human body as a whole is not explained purely by computation. I still get lost in the fine details of what the dynamic functions are and how they aid the computational processes to create the full product.

    Replies
    1. Google "mental rotation" and try the task. An internal analog rotation would be dynamic rather than computational.

  3. Are you suggesting building a hybrid symbolic-subsymbolic system? Like a combination of a rule-based system with a cascade-correlation neural network, for example?
    How would one go about creating a Turing machine of that kind if one does not know how to map which part of the system is cognitive and which part is dynamic?
    We would be faced with a symbol-grounding problem ourselves, would we not? Implementing a computer that does what it does because of how it is, creating a machine that would explain to us that in which introspection or computation alone does not work. Maybe I'm completely off the mark though.

    Also, "Your brain has to do a computation, a computation that is invisible and impenetrable to introspection," by invisible, are you suggesting the part of the brain that operates unconsciously?

    Replies
    1. Yes, a grounded hybrid symbolic/dynamic system. Only the computational part is a Turing Machine. The grounded robot is hybrid. (I didn't understand your 3rd paragraph.) We are conscious of almost nothing the brain does. The fact that we are conscious while it does something does not mean we are conscious of what the brain is doing. And that's one of the reasons introspection is not a way to figure out how the brain works.

  4. "The root of the problem is the symbol-grounding problem: How can the symbols in a symbol system be connected to the things in the world that they are ever-so systematically interpretable as being about: connected directly and autonomously, without begging the question by having the connection mediated by that very human mind whose capacities and functioning we are trying to explain!” (Harnad 2005).

    The symbol grounding problem ultimately shows why the idea that cognition is computation will fail: computation only connects symbols with symbols, devoid of any connection between a symbol and what it actually represents in the real world, which is what allows for understanding. There was a somewhat similar example of how computation fails to produce our cognitive abilities in one of my past classes, focusing on the idea of common sense and world knowledge. Take, for example, the sentence:
    "The committee denied the group a parade permit because they advocated violence." If you were to give this to a computer in the Turing Test and ask who behaved unethically, I believe the computer would be unable to answer. This is because, as cognizing individuals, we have common sense and know, through the world knowledge we have gained since birth, that committees in general have regulatory authority, while computers don't have access to this. In an unrestricted Turing Test, it would be extremely difficult for a computational program to have ready, for any question someone might ask, the nearly infinite connections we make through common sense. If something is to pass the Turing Test and demonstrate some mode of cognition, then it will have to have common sense.

    Replies
    1. But how do you know common sense is not computation too?

  5. Regarding Searle's Chinese room argument, I have trouble accepting the fact that Searle truly has no understanding of Chinese. By being exposed (inside the room) to a database of Chinese symbols and the program rules for manipulation, and considering that he is memorizing and executing these symbol manipulation rules, how can he claim to not understand? What is his standard of understanding?

    When learning a new skill, such as a new language or new sport, if I have all the necessary tools, instructions, and am executing the behaviour properly, what I am lacking is not an understanding of the skill perhaps, but rather the self-confidence to admit that I am understanding.

    So, I am hesitant to accept that Searle truly does not understand, and am not entirely convinced that cognition cannot be adequately represented by the Turing Test...

    Replies
    1. Hexadecimal Tic-Tac-Toe

      It's a little early for Searle's argument here. Kid-sib hasn't done the readings for Week 3 yet. But we have no secrets, so let me know what you think of this:

      You know the game of tic-tac-toe. There are rules for playing it. Suppose we translate the rules into a hexadecimal code that is not spatial but merely strings of symbols. You don't see any X's or O's or diagonals, just strings of symbols, and you know the rules for manipulating them. If Searle were taught those symbol-manipulation rules (and not told it was tic-tac-toe), would he know he was playing tic-tac-toe? Would he be understanding tic-tac-toe? Yet he could be playing against you, and for you the strings could be transformed into spatial tic-tac-toe, and you'd be sure Searle was not just playing the game but understanding what he was doing.

      Ditto for Chinese and its symbol manipulation rules.
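
      To make the point concrete, here is a toy sketch (my own, purely illustrative): the "insider" only ever rewrites meaningless 9-character strings according to formal rules over string positions, while the "outsider" decodes those same strings into a spatial grid and sees a perfectly sensible game of tic-tac-toe being played.

```python
# Toy illustration: "tic-tac-toe" as pure symbol manipulation. The insider
# sees only 9-character strings over 'A', 'B', '0' and rewrites them by
# formal rules over string positions; only the outsider decodes the strings
# into a spatial grid and sees a game being played.

INDEX_TRIPLES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # to the outsider: rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

def formal_move(s, mark):
    """Insider's rule: rewrite one '0'. Prefer an index that completes a
    triple of your own mark, else one that blocks the other mark, else the
    first free index. Stated entirely over strings, never over a grid."""
    other = 'B' if mark == 'A' else 'A'
    for target in (mark, other):
        for triple in INDEX_TRIPLES:
            values = [s[i] for i in triple]
            if values.count(target) == 2 and values.count('0') == 1:
                i = triple[values.index('0')]
                return s[:i] + mark + s[i + 1:]
    i = s.index('0')
    return s[:i] + mark + s[i + 1:]

def interpret(s):
    """Outsider's decoding: the same string, rendered as a spatial game."""
    shown = s.replace('A', 'X').replace('B', 'O').replace('0', '.')
    return '\n'.join(shown[i:i + 3] for i in (0, 3, 6))

board = '000000000'
for turn in range(5):
    board = formal_move(board, 'A' if turn % 2 == 0 else 'B')
print(interpret(board))   # looks like a game of tic-tac-toe -- to the outsider
```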

  6. Definitely making a note to read all the articles before beginning commentary on the first one, because Harnad's article made some of Pylyshyn's arguments a little less fuzzy. One of the major points that stuck out to me was the argument that the physical properties of the hardware, while the software is not independent of hardware altogether, are irrelevant to the software itself, because the software should theoretically run on many types of hardware. To me this argument is mostly feasible when it comes to computers.

    There are a few questions I have about this claim when applied to humans. Firstly, does this not assert that (if cognition = computation, and hardware = the brain's physical processes, such as action potentials) human software is not unique to the individual? The way I am interpreting software here is that this software is cognition, in that it IS how we do the things that we need to do. If I download the Chrome software, I can run it on both Mac and PC hardware (hopefully this is an adequate example; I am not particularly computer-savvy yet). So does this mean that I could take anyone's "software" and "run" it on another brain (whose hardware will also differ physically, as brains do) and it would run the same, with the same outputs? I feel like humans must have highly variable cognitive pathways paired with equally variable hardware. If Jack's software is loaded onto Joe's hardware, how do we know that Joe's hardware is capable of running this program? If my brain is installed with the software of a person with a severe mental illness, will I run their program the same? I guess this is getting into the other-minds problem, because we can't prove either way that our cognition/software is distinct from anyone else's.

    If computationalists are correct, and we could extract our software and additionally reverse-engineer a computer that functioned exactly like our specific brain on which to run that software, there are still interactions between software and hardware that I have never seen performed by a computer. Cognitive patterns modulate the brain, which results in both large- and small-scale physical changes: whole areas grow and shrink. While I understand that meta-programming is software changing software, I have yet to see a computer physically expand its parts because of software. Although... I can't say for certain that it's impossible.

    I’m not sure if these are valid arguments against viewing the brain as hardware and software but I suppose it just struck me that there is much more to consider besides the fact that both computers and brains take input and make output.

    Replies
    1. Doppelgänger

    1. If computationalism is true, and cognition is just running the right software (on hardware that can run it -- obviously not on hardware that cannot run it), then, yes, if you erased Renuka's software and replaced it with Riona's software, Renuka would become Riona (at least until she opened her eyes and saw that she no longer looked the way she had looked until that moment).

      But computationalism is almost certainly wrong, so these sci-fi fantasies are almost certainly impossible. Both Renuka and Riona think and understand (and feel) not just because of the computations (software) being run by their hardware but because of the dynamics of their "hardware" -- and their hardware is not just a Turing Machine for running the computations. They are robots, with sensorimotor dynamics (at the very least) and probably other dynamics that have nothing to do with running software.

      Other bits of sci-fi fantasy: If you cloned Renuka at birth, you wouldn't get two Renukas, you'd get identical twins, diverging from the moment they started to grow or move.

      I won't bother with 3-D printing of Renuka today, but if it were possible, the two Renukas would look at one another (the original and the 3-D copy) and from that instant on they would become different Renukas.

  7. Already today, we have telephone robots that understand full sentences and can speak back to you (try calling the Apple store). Granted, they don't have an entire vocabulary, but the idea is the same. But they are clearly just doing what they are programmed to do, not cognizing. They are hearing words and following the command "if you hear this string of words, say this next". They are even programmed to know when you are frustrated: http://www.dailymail.co.uk/sciencetech/article-2236832/Apples-robot-operator-knows-pushed-far-Swearing-gets-connected-real-live-human.html. And they use human-sounding language that is built into them, so that if their sentences were being typed, like in the TT, one might easily think it's a human being. Siri is another good example. Searle is right that even if a computer could pass the full TT and converse in such a manner that it would be mistaken for a human, this doesn't mean it is cognizing - language seems so easily programmable. Using Searle's example, then, of him memorizing Chinese symbols without understanding their meaning, would we not consider memorization to be cognition? If I were a poor student and, asked a question on an exam, I regurgitated an answer I had memorized the night before, I'm still cognizing, am I not?
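
    For illustration, here is a toy version (my own sketch, not Apple's actual system) of the "if you hear this string of words, say this next" behaviour described above: canned pattern-to-response lookup plus a crude frustration rule. It is pure string matching; nothing in it understands anything, and it collapses as soon as the input leaves the script, which is exactly where the lifelong, anything-goes demands of T2 begin.

```python
# A toy "telephone robot": canned pattern-to-response lookup plus a crude
# frustration rule. Pure string matching; nothing here understands.

RULES = [
    ("track my order",  "Your order is on its way."),
    ("store hours",     "The store is open 9 to 5, Monday to Saturday."),
    ("talk to a human", "Transferring you to an agent."),
]
FRUSTRATION_WORDS = {"damn", "hell"}   # placeholder list

def respond(utterance):
    text = utterance.lower()
    if any(word in text for word in FRUSTRATION_WORDS):
        return "Transferring you to an agent."      # the "knows you're frustrated" rule
    for pattern, reply in RULES:
        if pattern in text:
            return reply
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Can I track my order please?"))
print(respond("What the hell is going on?"))
print(respond("Why do I feel sad today?"))          # anything off-script falls through
```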

    Replies
    1. 1. But passing T2 is not just being able to do what Siri does: it is being able to do what Renuka does, below, and for anything said to her, and for a lifetime.

      2. Memorizing is not cognition, and computers (and tape recorders) do it all the time. But remembering is cognition -- and it feels like something to remember. Computers don't feel anything.

  8. “and if, as both brain imaging data (Kosslyn 1994) and considerations of functional optimality subsequently suggested, dynamical analog rotational processes in the brain really do occur, then there are certainly no grounds for denying them the status of being “cognitive" " (pg 6)

    My understanding so far is that explaining cognition requires an understanding of how the mind functions in order to do the things that it does. The reason mental imagery theory was dismissed as an armchair theory of cognition is that although it explains recall using mental images of (for example) people, it fails to explain how these images are conjured, how they are identified, who identifies them, etc. The quotation above seems to contradict this definition of cognition – specifically the stipulation that cognition must explain the "how" of a specific process.
    The argument seems to be, if dynamical processes are occurring in the brain (analog rotation) then they could possibly be cognitive. Although the explanation for mental rotation tasks seems less elusive than the 3rd grade teacher recall example because it involves specific correlations involving degree of rotation, I am still left wondering, what is being rotated? Is it a mental image (that is being rotated)? How/who “knows” when the image has been rotated enough? These seem to be unanswered “how” questions, leaving us in a similar predicament as the one used to justify the dismissal of the mental images theory.

    Replies
    1. The passage about imagery was referring to Shepard's mental rotation task, and an internal analog rotation of visual projections of shapes on the retina would be able to do that (toy) task. So analog imagery and rotation could be part of the architecture of the mechanism that generates cognition. It need not all be just computation. (And when it comes to robotic capacity, it can't be all just computation.)
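
      For contrast, here is a small sketch (my own, not from the reply above) of what the purely computational alternative to analog rotation would look like: rotating a shape by discrete arithmetic on Cartesian coordinates. (In Shepard's experiments, response time grows roughly linearly with the angle of rotation, which is what suggested a continuous analog process rather than discrete computation like this.)

```python
import math

# The *computational* (Cartesian-coordinate) way to rotate a shape:
# discrete arithmetic applied to symbols. An internal analog rotation would
# instead be a continuous physical process, not steps of rule-following.

def rotate_point(x, y, degrees):
    """Standard rotation about the origin."""
    r = math.radians(degrees)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

def rotate_shape(points, degrees):
    return [rotate_point(x, y, degrees) for x, y in points]

# An asymmetric figure, so different orientations are distinguishable.
L_shape = [(0, 0), (0, 2), (1, 2), (1, 1), (2, 1), (2, 0)]
for angle in (0, 45, 90):   # discrete jumps, not a continuum
    rotated = rotate_shape(L_shape, angle)
    print(angle, [(round(x, 2), round(y, 2)) for x, y in rotated])
```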

  9. “The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities. Not just the email version of the TT, based on computation alone, which has been shown to be insufficient by Searle, but the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”

    In thinking about the T3 robot that is reverse-engineered: although it passes the Turing Test and is able to ground symbols in sensorimotor behaviour, could this robot merely be an unconscious "zombie" (I use this term as Ramachandran did in his book 'Phantoms in the Brain', to refer to a theoretical being that unconsciously does everything in exactly the same manner as a conscious being, but without the experience of feeling)? It seems that this line of thinking cannot possibly capture the entirety of human cognition. It would be an illusion of how the human mind works – very different cognitions may be going on inside the "black box" of the mind (black box in the sense that we still do not know what is going on). Although the robot goes one step further in being able to ground its symbols in the world we live in, that still is not enough to understand the human mind. I agree with Harnad that although computationalism attempts to answer the tough "how" questions, it does not hold all the answers. In this regard, I do not think the dynamics of the brain can be ignored if we are to truly understand how we cognize, think, feel, etc.

    So assuming that we can resolve all the so-called "easy" problems (such as passing the Turing Test) of explaining HOW we are able to do what we do, will this accumulation of data about behaviour (and brain processes, if we take Chalmers' definition of third-person data as objective) eventually lead to understanding of the "hard" problems (such as how we feel, or how we came up with our third-grade teacher's name)? If we are able to do this, then we can get rid of the fallacy of the little homunculus inside our brain as outlined in the paper. In this case, I feel that, similar to the idea of phenomenal consciousness ("P-conscious states are experiential, that is, a state is P-conscious if it has experiential properties. The totality of the experiential properties of a state are 'what it is like' to have it. Moving from synonyms to examples, we have P-conscious states when we see, hear, smell, taste and have pains" (Block 1995: 230)) being irreducible to specific brain functions, the "software" (computation) is irreducible to the "hardware" (brain functions/physiology). However, this raises the question of whether it is even possible to design a test to figure out if a robot cognizes the way that we as humans do.

    Replies
    1. Block Buster

      Yes, the T3 robot could be a Zombie, but because of the Other-Minds Problem we can never know whether or not it's a Zombie -- just as we can never know with one another. And that's because Renuka and Riona are completely indistinguishable (behaviorally -- not physiologically) from any of us. And that's the real power of the Turing Test. You can't ask for more (unless you insist on T4, but then you have to explain why and how T4 is relevant to not being a Zombie!).

      But robots aren't just computers either.

      (And "Stevan-Says" there are no Zombies, and anyone that passes T3 feels. But because of the "hard problem," no one can explain causally how or why T3 feels.)

      There is only one kind of consciousness and that is feeling. There is no P-feeling vs A-feeling. And what is unfelt is unconscious. P is about feeling and A is about access to data (which can be either felt access or unfelt access). (Even though I published Block's PC/AC paper way back when, I always thought it was bollocks!)

    2. Thank you for posting this article critiquing the A-consciousness / P-consciousness distinction! I remember learning about it in another class and finding the distinction a bit troubling.

  10. ''Vocabulary learning – learning to call things by their names – already exceeds the scope of behaviorism, because naming is not mere rote association: things are not stimuli, they are categories. Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli, as in paired associate learning. To learn to name kinds you first need to learn to identify them, to categorize them (Harnad 1996; 2005).''

    The ability to recognise and place objects under specific, arbitrary linguistic labels is something that particularly fascinates me about the human brain. Categorisation is an extremely important way of bringing meaning to the world and allows us to make sense of the huge amount of input we are exposed to. However, I feel that simulating how humans learn to categorise with a computer could be extremely difficult. It is not as simple as receiving clear input with clear labels. Instead, it seems to involve a huge amount of inference and interpretation of non-linguistic cues.

    When exposing children to new objects, parents often use a range of labels to describe what is in front of them. For example, using a category name ("dog"), an assigned name ("Bob"), a descriptive adjective ("big", "black"), a verb ("bark"), or even a term perceived to be "child-friendly" ("woof-woof"). How are children able to (a) infer what part of the scene in front of them is being described, (b) learn the meaning of the labels and (c) learn to extend the labels to other items?

    I would speculate that perceiving non-verbal cues (such as pointing, eye-gaze direction, tone of voice) is very important for imagining what the parent is trying to convey. I'm sure that with many, many tries it would be possible for a computer program to eventually learn to categorise objects in the right way. But perhaps the reason a child can learn so much faster is that they are somehow able to "sense" what is being communicated to them? And maybe a desire to understand and be understood somehow accelerates the learning process? Which may intrinsically link the computational aspect with feeling? These are all very abstract ideas, but I would be interested to see what people think!
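
    As a point of comparison, here is a deliberately crude sketch (my own, hypothetical) of what "learning to name kinds from labelled examples" looks like when reduced to computation: each thing is a feature vector, each category is learned as the average of its labelled examples, and new things get the name of the nearest prototype. Real categorisation, with all the inference from non-verbal cues described above, is vastly harder than this.

```python
from collections import defaultdict

# Toy category learner: a "kind" is learned as the average of the labelled
# examples that carried its name; new items get the name of the nearest
# prototype. The feature vectors below are invented for illustration.

def learn_prototypes(labelled_examples):
    """labelled_examples: list of (feature_vector, label) pairs."""
    grouped = defaultdict(list)
    for features, label in labelled_examples:
        grouped[label].append(features)
    return {label: [sum(dim) / len(vectors) for dim in zip(*vectors)]
            for label, vectors in grouped.items()}

def categorize(features, prototypes):
    def squared_distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda label: squared_distance(features, prototypes[label]))

# Made-up features: (size, furriness, barks)
examples = [((0.6, 0.9, 1.0), "dog"), ((0.8, 0.8, 1.0), "dog"),
            ((0.3, 0.9, 0.0), "cat"), ((0.4, 0.8, 0.0), "cat")]
prototypes = learn_prototypes(examples)
print(categorize((0.7, 0.85, 1.0), prototypes))   # -> dog
```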

    Replies
    1. These are interesting questions! Perhaps valuable insight can be gained from studying the communication abilities of human infants through the lens of cognitive science? More specifically, studying the dynamic functions of cognition that enable infants to adapt, analyze, and adopt human communication abilities in the first days and weeks of their lives. Ideally, cognitive scientists would "find" (or, in other words, discern and explicitly define in symbols we understand) the mechanism of the mind that gives infants the capacity to understand and communicate successfully with almost zero preexisting experience or knowledge.
       
      I don't know very much about this at all, but it seems to me to be another "poverty of the stimulus" situation (Chomsky). Similar to the case of Universal Grammar, the cognitive mechanisms that allow human infants to communicate effectively at such a young age cannot simply be extracted from the human mind and explained.

      This post led me to think about a lecture from a previous course, where we were taught about a study showing that infants of only two days old could mimic basic facial expressions. A similar study by Meltzoff and Moore (1983), titled "Newborn Infants Imitate Adult Facial Gestures", showed that infants ranging from 0.7 to 72 hours old can imitate two adult facial gestures (mouth opening and tongue protrusion). The authors point out that stimulus-response linkages learned early in life cannot account for infant imitation, because "young infants can also copy behaviours that have not been part of any previous adult–infant interactions" (Meltzoff & Moore, 1983). Good read if anyone is interested!

      Meltzoff, A. N., & Moore, M. K. (1983). Newborn infants imitate adult facial gestures. Child Development, 54(3), 702. doi:10.1111/1467-8624.ep8598223

    2. We can all agree with Rose that the human capacity for categorization is extremely fascinating. Replicating categorization through computation seems to be an extremely complicated task (I am not sure whether it is possible or not). There are some categories which have set rules for membership (e.g. objects larger than a breadbox), whereas there are other categories that keep changing over time or don't have strict definitions. For computation to replicate these categories would be difficult, because it would take a lot of "worldly knowledge" and inference from existing elements to do so.

      I do not particularly agree with the argument linking feeling, categorization and learning. I don't think a child learns faster than a computer because of sensing. In fact, a computer could categorize a lot more if it spent the same amount of time as a child. I think a child's "feeling" would not cause it to learn faster or better, but it gives a sense of motivation or desire to learn. That's where it may differ from a computer. Additionally, there are many arguments that rely on intuition and an innate capacity for learning. Since not much is known, we can only assume that it may be one factor which distinguishes our categorization from a computer's.

      Clarice, thanks for the paper! It was a good perspective into learning at a growing stage.

      I guess based on this, I believe that humans and computers can categorize but have different means of doing so. They can achieve a similar end result. There is so much we do not know about the human brain, and hence replicating all of that through computation is even harder. Maybe trying different techniques through AI could help us understand our cognitive capabilities.

  11. I really enjoyed the way the article starts by talking about the distinction between memorization and computation, and how behaviorism attempted to account for how we have the capacity for so much thought and spontaneous cognition.
    “Beware of the easy answers: rote memorization and association…. Surely we have not pre-memorized every possible sum, product and difference? (2).”

    This seems like the inverse of the way that animal behavior is explained. There is a tendency to anthropomorphize the behavior of animals, and the stories told about their behavior sometimes end up attributing more cognition than might actually be present - in some cases, what appears to be cognition is actually rote memorization and association, whereas in this example the opposite is true, in that something that appears basic is more complex than we realize.

    I am slightly confused about the difference outlined between mental states and computational states, with regard to how looking at them in this way was supposed to solve the mind/body problem.

    “If the mind turns out to be computational, then not only do we explain how the mind works but [we also explain dualism]… with understanding how the mental states can be physical states: It turns out they are not physical states! They are computational states.”

    I understand that this analogy is flawed, but it seems to me that a physical state and a computational state really might not be that different if we take the hardware/software analogy to be serious (couldn’t a physical state just be one type of hardware that allowed for one type of software like the Mac/PC example from the other reading?)

    Also, why does Searle think the TT is insufficient? I understand why it can be used as evidence that cognition is not the same as computation and that the semantics of a message aren't the same as its syntax. I find it harder to see how it shows that the TT is insufficient for testing cognition at all, or at least for gauging how close an imitator of human cognition a computer can be.

    “...symbolic capacities are grounded in sensorimotor capacities and the robot itself can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”

    This is scary to me, I cannot imagine at all what a Turing test would look like for these robots.

    Replies
    1. Hi Julia, I read the line about mental states not being physical states but instead being computational states to mean that the mind, if taken as a computer (like a Turing machine), has the potential of being universally implemented. In the way that a Turing machine is universal, and therefore can be simulated by any computer / is able to compute any known computation, there could in theory be a direct simulation of any brain REGARDLESS of the substance it's made up of. Anyway, this is the way I saw that distinction, because then what matters is that the mental states are computational, not that they have certain physical features (this is how the physical nature is deemed "irrelevant"). As for a further question I have on this topic, do you or Professor Harnad think this flexibility in physical state has a limit? Does all of this have to mean that anything could house the computational ability of the mind, so long as it meets the criterion of being able to simulate it? Furthermore, I'm not sure how to put this succinctly without sounding confusing, but would this simulation have to be broken down into second-by-second "snapshots" of the mind? Because we know that the mind develops as we undergo life's experiences... would the simulating computer then also develop, or would there be many separate 'computers' that make up the function of one's mind?

    2. This is really late to the game, but I was wondering about that after reading 1a: if we had a human-like computer or a brain-like machine (one that was physically different from a brain, as you say), would it be able to change, adapt, and be "plastic" in the same way a human brain is? I have no idea, but my intuition is that if a machine were programmed correctly it could "learn", though maybe not in the exact same way we do (because of heuristics and other behavioral things).

    3. Hey Julia, I think you're right. They would definitely not learn in the same way we do (via the strengthening of synapses), but it's reasonable that computers could "learn." Although I have no idea exactly how it would be done, it makes sense that they could be programmed to incorporate past experiences into their decision-making process. Maybe if you had a button that told it whether its behavioral output for a certain input was right or wrong, and some sort of counter system to keep track of the responses. So when it receives a certain type of input, it defaults to the program corresponding to the behavioral output with the highest number of "correct" responses. In other words, the program that is most likely to produce the desired behavior. If you tell it that it's wrong, it simply moves on to the next-highest program associated with the input, and repeats that process until it produces the behavior you are looking for.
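
      Here is a toy rendering (my own sketch, not a claim about how real systems are built) of exactly that counter scheme: for each input, the machine counts how often each response was accepted, defaults to the best-scoring one, and moves on when told it is wrong. It is feedback-driven trial and error, not understanding.

```python
from collections import defaultdict

# Feedback learner: per-input counters of accepted responses; default to the
# best-scoring response, fall back to the next one when told it is wrong.

class FeedbackLearner:
    def __init__(self, possible_responses):
        self.possible_responses = possible_responses
        # scores[stimulus][response] = times that response was accepted
        self.scores = defaultdict(lambda: defaultdict(int))

    def respond(self, stimulus, rejected=()):
        candidates = [r for r in self.possible_responses if r not in rejected]
        # default to the response with the highest "correct" count so far
        return max(candidates, key=lambda r: self.scores[stimulus][r])

    def reward(self, stimulus, response):
        self.scores[stimulus][response] += 1

learner = FeedbackLearner(["wave", "speak", "ignore"])
desired = {"hello": "speak", "stranger approaches": "wave"}

for _ in range(3):                               # a short training run
    for stimulus, correct in desired.items():
        rejected = []
        while True:
            response = learner.respond(stimulus, rejected)
            if response == correct:              # the "button" saying it was right
                learner.reward(stimulus, response)
                break
            rejected.append(response)            # wrong: try the next-best response

print(learner.respond("hello"))                  # -> speak
print(learner.respond("stranger approaches"))    # -> wave
```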

  12. This article was a great way to understand why computation cannot be the only answer to fully explain cognition and that rather there is a hybrid system at play.

    In particular, I agree with Professor Harnad in believing that the Turing Test needs to cover all of a human's behavioural capacities in order to explain cognition. Like Searle, I do not accept that if a computer successfully passes the classical TT, it can cognize. If the purpose of the TT is for the computer to fool humans into believing that it is human and not a machine, then is the test not just plain trickery of the human mind? Is the TT not just a measure of a human's gullibility in perceiving sensible sequences of phrases as being generated by a human?

    Take the software program Eugene Goostman, for example. It was developed to imitate a thirteen-year-old boy from Ukraine and ended up convincing about 30% of judges that it was human back in 2012. The media exploded shortly after, claiming that Goostman had passed the Turing Test. In that case, it was said to be linguistically indistinguishable from real humans, but does that truly make it an intelligent machine? Turing did not specify (scientifically) his definition of intelligence, so what are we really measuring in this test? The ability to imitate human conversation using these kinds of chatbots is not intelligence and, in my opinion, is setting AI back. There are no "thinking processes" going on, but rather pre-programmed responses (or responses learned from human input, as with Cleverbot).

    To summarize, a robotic version of the TT, as Harnad proposes, must be the next step to take if we still want to tackle the question of cognition and not just simple replication of sensible human dialogue.

  13. At this point, I'm fairly convinced that cognition is not computation alone, but also requires dynamics to generate sensorimotor capacities and ground internal symbols in their real-world physical referents. I'm convinced, yes, but I'd like to play devil's advocate for a moment, in defense of computationalism. Imagine we took a living human being and had technology sufficiently advanced to remove his entire nervous system from his body (which would presumably look like a tadpole) and place it in a vat where it would be kept alive. Then, using this miracle technology, we stimulate the cells in his nervous system so as to exactly recreate his internal AND sensory reality prior to being de-brained. There is no doubt that the nervous system is functioning as it was before... minus the dynamics.

    So, according to our current model, cognition must be part computational and part dynamic, so the brain-tadpole in question cannot be cognizing. But why?

    I suppose it's possible that I'm hung up on our definition of cognition, which implies that "cognition is as cognition does"... that the brain-tadpole is not cognizing because it can't walk around and do all the things we do...
    And one might argue that the use of external stimulation means that the brain-tadpole's assignment of internal symbols to (what it perceives as) external things is "mediated by the minds of external interpreters"...
    And I'm sure you're thinking that a simulation of reality is not the same as reality...
    But I can't shake the feeling that the brain-tadpole would be cognizing and feeling as it was before. For the brain-tadpole, the world is as it was. There are physical things associated with internal symbols. Why is our "reality" more grounded than theirs, such that we would call their external referents mere internal symbols? From the perspective of the brain-tadpole, nothing has changed, yet its cognition is deemed incomplete. How can we know that it is not thinking and feeling?

    By extension, if we could build a machine like this which would pass the T2 level of the Turing test, and we could simulate a reality for the machine by "stimulating" it appropriately, why wouldn't this machine also be cognizing?

    It's entirely possible that I'm grasping at straws here, and that I'm presupposing too much knowledge about the inner workings of a brain-tadpole, but I'd love to hear some other thoughts!

    Replies
    1. This is a very interesting thought experiment! I would think that if it were to "pass the T2 level of the Turing test" (on more than just a one-time basis, so that it continually passed on every try), then it would indeed be deemed to be cognizing. Perhaps one could ask instead whether, if we could simulate a reality for the machine by "stimulating" it appropriately, this could conceivably be sufficient to pass the T2 level of the Turing test. So, assuming as you have previously, the nervous system, devoid of the rest of its body, which mediated its contact with the external world and external stimuli in general, would now receive its information about 'what's going on out there' from a stimulation process. But what this nervous system/tadpole in a vat would do with this information would be the same kind of dynamic 'stuff' as what a nervous system in a body would do, and as such, the nervous system/tadpole in a vat would be cognizing as well! Or at least I should think so...

  14. If it is true that the only way to successfully have a robot cognize the way a human mind does is for cognitive science to raise the threshold of the test by changing "its...methods on scaling up to the Turing Test, for all of our behavioral capacities," wouldn't that also mean we have to take into account the unique development of the human brain throughout childhood? Would a computer mind then need to 'develop' in the same way a human brain does in order to allow for the exact external-internal interactions to take place as if it were housed in a human body? Or would all of this be programmable? This seems like the only way that a computer brain could truly simulate a human brain... I know this is an abstract thought, but it's what came to mind at the end of the paper.

    Replies
    1. Great point!! I find this very curious as well! I think perhaps Harnad points to a distinction when he states the need for "the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters."
      Though I also still have difficulty determining whether this means that the robot is deemed to have some sort of independent volition that arose from the original program it was computed with.

    2. On second read, Harnad does clarify that part of the internal structures and processes of the robot would need to be dynamic, thus it would definitely not all be programmable! As to how/if one would need to 'teach' the robot for it to 'develop', I am rather stumped by that too..

    3. Though perhaps that is beside the point when it comes to questioning whether or not this part-computational, part-dynamic robot would be TT-passing!

  15. In the previous reading on the Turing machine, it was explained that the Universal Turing Machine could run any program, as long as that program was rewritten in terms of the symbols of the universal machine (i.e. adapted to the dynamics of the universal machine). If this stance is considered for computationalism versus the theory proposed in this article (i.e. some combination of dynamics and computation), why can't it be true that humans simply haven't yet figured out how to "rewrite" the symbol language of computation in terms of the dynamics (hardware) of the human brain, and as such cannot reproduce its functions? I am confused about whether the article is suggesting there is more to the hardware dynamics of the brain that we do not yet understand, or whether it is suggesting that there are dynamics to the system that do not exist solely as hardware to run software, but instead interact with the hardware/software computation to affect its outcomes (resulting in phenomena such as feeling and consciousness that we discussed in class). Would that be correct? I.e. there exists an 'additional' dynamic. This dynamic may itself be controlled by or triggered by computational processes of the brain, and may go back to influence the computational hardware/software (almost like self-regulation in cellular processes, to borrow from physiology), but is not itself classifiable as hardware or software in terms of the Turing-machine understanding of computation.
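
    To make the first point concrete, here is a minimal sketch (mine, not from either reading) of what "rewriting a program in the symbols of the universal machine" amounts to: a single general-purpose simulator that will execute any machine whatsoever, provided that machine is handed to it as a transition table over symbols the simulator can manipulate.

```python
# A minimal general-purpose simulator: it executes *any* machine handed to it,
# provided the machine has been rewritten as a transition table over symbols
# the simulator manipulates. The example table is a trivial "machine" that
# flips a string of 0s and 1s.

def simulate(transition_table, tape, state="start", blank="_", max_steps=1000):
    """Run a machine given as {(state, symbol): (new_state, write, move)}."""
    cells = dict(enumerate(tape))   # tape as a sparse dict: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transition_table:
            break                   # halt: no rule applies
        state, write, move = transition_table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# One particular "program", rewritten as symbols the simulator understands:
bit_flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(simulate(bit_flipper, "010011"))   # -> 101100
```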

  17. This article definitely helped me to understand better the argument that computation is not cognition, but some points remain unclear to me regarding cognition and the attempts to explain the mechanisms behind it.

    When we consider computational neuroscience, in the sense that mental processes occur through a certain pattern of neuronal firing, what are we missing in explaining cognition? For example, if we could predict the neuronal firing pattern of all human processes, would we have explained "how we do what we do"? If not, what else is left to explain in order to fully unearth what cognition is? Is it the fact that we cannot predict the neuronal firing pattern of all processes, or is there something fundamentally missing?

    Secondly, when explaining cognition, the word “dynamics” is thrown around a lot, but I am still unsure of what this “dynamics” actually refers to. I saw that a similar question was asked earlier and they were referred to the mental rotation task, which does clear the picture but I am still lacking a complete understanding. I understand that part of the explanation has to do with something that is continuous rather than formed into discrete steps, but I feel that this definition is missing something in the context of cognition.

  18. I too was slightly confused by the use of the word "dynamical". In the context of the "dynamical physical level", I understand it to refer to the hardware of a machine, but when it is used in the context of "the relation between computational and dynamical processes in cognition", I'm not 100% clear. I take it to mean processes which can occur in any machine/person (hence the term dynamical), regardless of the physical differences between them.

    Of course this whole paper brings up many important points of discussion, however what caught my eye was the footnote at the very end:
    "One could ask whether grounded cognition (“sticky” cognition, in which symbols are connected to their referents, possibly along the lines of Pylyshyn’s (1994) indexes or FINSTs) would still be computation at all: After all, the hallmark of classical computation (and of language itself) is that symbols are arbitrary, and that computation is purely syntactic, with the symbols being related on the basis of their own arbitrary shapes, not the shapes of their referents."
    So in grounded cognition, a symbol acquires its meaning based on other symbols. But, going back to the symbol grounding problem, where do these other symbols get their meaning? Perhaps cognition does contain primitive symbols, like in computation, which are present at birth. So in this sense, grounded cognition can still fit the description of classical cognition in some sense: if you were to get down to the primitive symbols, they would behave arbitrarily - although I suppose this arbitrary behaviour could only extend so far, there would likely be certain biological predispositions.

  19. I agree that the mind as mere symbol manipulation is not enough for understanding or cognition, but the idea that somehow adding sensory instruments, in order to allow the same symbol-manipulation device to interact with the world, gives it cognition doesn't make sense to me. There is nothing about a sensory instrument that adds understanding. You may say that because this machine can look at things it "knows" what things are. You say this because this is how you explain how you work: you know that the word "cat" means the image of a cat. But there is nothing here that has added cognition; all we have done is make the machine more human-like, yet these images are still processed through symbols, and the manipulation of those symbols. And again, this is the same thing that Searle showed to involve no understanding.

  20. This article was very insightful because it shed light on more recent explanations of cognition. "Searle thought the culprit was not only the insufficiency of computation, but the insufficiency of the Turing Test itself; he thought the only way out was to abandon both and turn instead to studying the dynamics of the brain".

    Firstly, it would be interesting to see Searle's studies regarding the dynamic aspects of the brain and whether his results further support his hypothesis. Secondly, I fully agree with Professor Harnad's belief that the above statement is a bit far-fetched. Although the computational explanation for cognition is partially correct, I believe that it does not fully encapsulate cognition as a whole. Therefore, the TT needs to involve all of a human's behavioural capacities. However, how do we know that this addition will then fully encompass cognition? Also, how do we know when we have included 'all of the behavioural capacities' known to mankind - it seems that they may appear in varying degrees, can be arbitrarily classified, or can be a sub-behaviour of another behaviour. Ultimately, with the exponentially growing field of research, there are numerous advancements in the field of cognition; I don't think that there could ever be one full explanation, because there is always so much that is unknown and needs further research.

  21. "As we have seen, there are other candidate autonomous, non-homuncular functions in addition to computation, namely, dynamical functions" (pg 8)

    What do dynamical functions and processes mean in relation to the hardware? It is unclear to me how we can try to explain a physical system of neural networks with theories that don’t include the hardware. I am curious to see an explanation of cognition using dynamics of the brain.

  22. I had similar concerns to Alba (who commented above) at the end of this paper about the notion of a Turing Test encapsulating all human behaviours. Would this robot be programmed to develop in a manner analogous to a human, thus developing behaviours and gaining knowledge in a characteristically human way? Or would this robot be programmed as a fully fledged, developed human, and if so, would it really be ethical to create a robot that is behaviourally equivalent to an average human but has been programmed with a false history of memories, views, behaviours, etc. that are biased by the programmer (at least they would be at first, perhaps, until the robot began to further interact with its environment)?

    Furthermore, when development is brought into play, I find the notion of hardware-independence loses its appeal, since there are many intricate biological processes (neural growth/pruning, chemical and synaptic changes, and changes in DNA transcription and methylation) that appear to play reasonably large roles in our own development and aging. The "degrees of freedom" (which Professor Harnad has spoken of in class) for hardware that would be capable of instantiating such cognitive software seem to me to be very slim in light of this.

  23. Something strikes me as off about the last paragraph:

    "We cannot prejudge what proportion of the TT-passing robot’s internal structures and processes will be computational and what proportion dynamic. We can just be sure that they cannot all be computational, all the way down."

    Isn't the whole point of the TT to recreate human behaviour so that we can understand it at a fundamental level? How are we to come up with a causal explanation of cognition if we don't even know at a high level whether or not a certain process is a dynamic or computational one?

    Replies
    1. "The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities."

      I do not think that this paper addresses all the behavioral capacities that humans can do. T5 is the full robotic version of the Turing Test, in which meaning is grounded in the robot itself. I think it is incomplete to stop there. You can take apart and put back together a robot, and there will be no difference in function. But if you take apart a human, you now have a dead human. Dying is unique to the meaning-making entity. I propose that death should be T6 (or perhaps replace T5). A robot that could die would be able to create meaning.

      By the way, time only exists for things that die, so is meaning time?

      I cannot answer HOW we create the connection between representation and physical thing. But I think I know WHY. Meaning is what gives us life. Meaning is life and survival. Feeling is life. Death is no feeling. Feeling hungry makes me eat food, which makes me live. Why feeling? Because life.

      So anything alive feels. Be nice to animals.

  24. Response to the YouTube video 'Categorization' (02/13):

    'Stevan says' hardware is a dynamical system. When software (a computer program) is run on hardware, it becomes a dynamical system.

    Then 'Stevan says' that Pylyshyn's theory is that the brain is performing a computation. What the brain is performing is not dynamic, not analog, but digital and computational.

    I am confused by this theory. The brain, like hardware, is a physical object. (Ex: it is known that when certain brain regions become activated, neurons in other regions in close proximity to the activated region can become activated as well.)

    Why does Pylyshyn's theory state that the brain is not dynamical when it 'runs' computational thoughts the way hardware runs a computer program? Why is it that when a computer program is run by hardware it is considered to be dynamical, but when a brain cognizes, it is considered to be not dynamical, but instead digital and computational?

    Replies
    1. Hi Lucy,

      "Why does Pylyshyn's theory state that the brain is not dynamical when it 'runs' computational thoughts the way hardware runs a computer program? Why is it that when a computer program is run by hardware it is considered to be dynamical, but when a brain cognizes, it is considered to be not dynamical, but instead digital and computational?"

      I think that you are misunderstanding the relationship between dynamical and computational systems. There is no contradiction between being a computational system and being a dynamical system. Computational systems are implemented on dynamical systems. The software is the computational program that is executed on the dynamical hardware. So a physically implemented computational system is both dynamical and computational. Clearly, as you mention, the brain is a dynamical system. The question is whether or not it is also a computational system.

      The distinction between digital and analogue is a different distinction. Computational systems are both digital, in that they operate by moving between discrete states, and dynamical, in that they also change physical states over time. Purely analogue systems are not computational in that they cannot be described in terms of formal symbol manipulation. So the conflict is between analogue operations which cannot be expressed computationally and digital operations which can. All of these are dynamical when implemented.

      Delete
  25. I just want to clarify something. Pylyshyn believes that cognition is computation, but he doesn’t believe that all computation is cognition. Does Pylyshyn believe that cognition is exclusively computation?

    ReplyDelete
    Replies
    1. I don’t think he believes cognition is exclusively computation. The way I understood it is that although there is definitely computation involved in cognition, there is also something else going on that is not computation. The whole problem is pinpointing what that something else is. But it wouldn’t make sense for cognition to be exclusively computation, because if that were the case it would be possible to program a computer to cognize, which Searle’s Chinese Room argument shows is not the case.

      Delete
  26. This paper notes that words and propositions were more explanatory and freer of homuncularity than images, and were therefore closer to computation: more explicit and more easily testable. This is counter-argued by pointing out that Shepard’s mental rotation task could easily be computed using Cartesian coordinates. While mental analogue rotation would proceed much like a computer system rotating the object, humans also have an intuition about the “right” orientation, whether or not we have seen that object before. It is unclear whether a machine could even have a sense of the “right” orientation. This raises the issue of felt thinking: can a machine ever come to the conclusion, as humans often do, that something feels right?
    We can link why we “feel” that something we have never seen before looks “right” (much as a mathematician can “feel” that something may be true despite being unable to prove it) to the internal process of cognition that we have such difficulty grasping. To call this “feeling” intuition seems appropriate, and I suggest that the feeling of intuition is the “cognitive blind spot,” the “internal process” where the difference between the first response to 7-2 and to 7+2 is felt. Both are felt phenomena, and their underlying processes are largely mysterious to us. Perhaps further research on human intuition could illuminate more pieces of the puzzle underlying cognition.
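
    Returning to the rotation point above, here is a toy Python sketch (my own illustration, not from the paper): a one-shot rotation matrix costs the same regardless of the angle, while rotating in small increments takes a number of steps proportional to the angle, which is the signature Shepard and Metzler actually found in human reaction times.

    import math

    def rotate_once(points, angle_deg):
        # Apply a single rotation matrix: cost is independent of the angle.
        a = math.radians(angle_deg)
        return [(x * math.cos(a) - y * math.sin(a),
                 x * math.sin(a) + y * math.cos(a)) for x, y in points]

    def rotate_incrementally(points, angle_deg, step_deg=1.0):
        # Rotate in small steps: the number of steps grows with the angle,
        # like the analogue rotation people seem to perform.
        steps = int(angle_deg / step_deg)
        for _ in range(steps):
            points = rotate_once(points, step_deg)
        return points, steps

    shape = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
    _, steps = rotate_incrementally(shape, 120)
    print(steps)  # 120 small steps for 120 degrees; rotate_once needs only one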

    ReplyDelete
  27. "Is Computation the Answer? Computation already rears its head, but here too, beware of the easy answers: I may do long-division in my head the same way I do long-division on paper, by repeatedly applying a memorized set of symbolmanipulation rules – and that is already a big step past behaviorism – but what about the things I can do for which I do not know the computational rule? Don’t know it consciously, that is. For introspection can only reveal how I do things when I know, explicitly, how I do them, as in mental long-division”

    I understand the argument being made against introspection, but I don’t see how it settles whether computation is the answer. The fact that I do not (consciously) know how to carry out the computation on paper does not seem to show that computation is nonetheless how it is carried out in my head.
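
    For concreteness, here is the paper’s long-division example in miniature (a toy Python sketch of my own, not code from the paper): the routine below manipulates the dividend’s digits one at a time, exactly as the memorized paper-and-pencil rules prescribe, without the program in any sense knowing what numbers are.

    def long_division(dividend: str, divisor: int):
        # Divide a decimal digit string by a small integer, digit by digit,
        # applying the grade-school rules as pure symbol manipulation.
        quotient_digits = []
        remainder = 0
        for d in dividend:                       # scan the symbols left to right
            remainder = remainder * 10 + int(d)  # "bring down" the next digit
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor
        return "".join(quotient_digits).lstrip("0") or "0", remainder

    print(long_division("7482", 3))  # ('2494', 0)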

    ReplyDelete
  28. I really like the third-grade-teacher example; it clarifies the issues with introspection and the homunculus, and why they cannot explain the easy problem. It is not good enough to say the answer simply came to us when we are recalling our teacher’s name. How did it come to us? Introspection and the idea of a homunculus really just leave us with more questions about the unconscious process.

    Behaviourists like Skinner advocate a simple explanation for why we do what we do: reward and punishment. However, Chomsky’s poverty-of-the-stimulus argument undermines the behaviourist point of view when it comes to child development: children do not receive enough reinforcement from stimuli to explain the jump from grammatical unawareness to grammatical competence. This leads to the idea of innate internal knowledge structures. Evolutionarily, the more we can learn, the better, so in that respect the least innate knowledge needed to survive is ideal, and what we can learn from external stimuli is important. That said, certain things, like grammatical structure and vocabulary learning, seem to be among the few that need a kick start. This also makes me think of facial recognition, which is cross-cultural and most likely innate, so that we can tell when a stranger may be angry and about to attack.

    Computation is a very reasonable alternative to the above attempts at explaining cognition. However, the idea that a separate semantic interpreter is needed seems far-fetched: we do not react to things based on their shape, we react based on their meaning.

    ReplyDelete
  29. Introspection cannot answer questions about memory recall or about any other task that we don’t explicitly know how we do, step by step. As far as our ‘conscious’ minds are aware, the answer simply comes to us; we simply produce the necessary behaviors.

    Even if we were satisfied with Skinner’s type of explanation for our learned language capacity (recall: we aren’t; it circularly assumes that we learn without explaining how), Chomsky’s poverty-of-the-stimulus argument simply debunks its feasibility. The claim is that in the period from when a child is non-verbal to when they are grammatically fluent enough to construct their own sentences, they are not exposed to enough language stimuli to learn by reinforcement. Therefore, there must be an internal knowledge structure that enables the child to extrapolate grammatical rules without reward-punishment inputs.

    Vocabulary learning, too, must require additional internal knowledge structures. Learning to call Tom the parakeet “bird” and then being trained to also call Bridget the hawk “bird” does not explain how we learn to call a previously unseen finch “bird” too. Here too, our explanation of the task — which we call categorization — relies on an innate ability: we pick out the relevant features of what makes a bird and ignore the others.
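
    As a toy illustration of that kind of feature-based generalization (the features and numbers below are invented for the sketch, and no claim is made that this is how people actually do it), a simple prototype learner trained only on Tom and Bridget will also call a never-before-seen finch “bird”:

    def prototype(examples):
        # Average each feature over the training examples.
        keys = examples[0].keys()
        return {k: sum(e[k] for e in examples) / len(examples) for k in keys}

    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

    tom_parakeet = {"has_feathers": 1, "flies": 1, "size": 0.1}
    bridget_hawk = {"has_feathers": 1, "flies": 1, "size": 0.6}
    bird_prototype = prototype([tom_parakeet, bridget_hawk])

    new_finch = {"has_feathers": 1, "flies": 1, "size": 0.05}
    goldfish = {"has_feathers": 0, "flies": 0, "size": 0.05}

    for name, thing in [("finch", new_finch), ("goldfish", goldfish)]:
        label = "bird" if distance(thing, bird_prototype) < 0.5 else "not a bird"
        print(name, "->", label)  # finch -> bird, goldfish -> not a bird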

    Just because we cannot explicitly state the relevant features for inclusion in a given category, that does not mean that those features don’t exist. Rather, that is an argument for why introspection ought not be a primary tool for cognitive science.

    Along similar fault lines as verbal behavior, mental imagery theory of memory recall fails to address the real questions by begging them. The story goes that when I am asked the name of my 3rd grade teacher, I recall an image of her in my mind, then I name the image just as I would if it were really Mrs. Schulman in front of me. But the obvious flaw is the vacuity of the answer. What allows me to recall the correct image? What allows me to retrieve the correct name associated with the image? This amounts to no more than a tiny person who lives in my head who runs all the “executive function” of my mind, the homunculus. This explanation quite obviously does nothing to explain how we remember Mrs. Schulman’s name.

    Pylyshyn and computationalism helped to dispense with some errors of imagery theory. Understanding cognition ought to hinge on the mind’s functions rather than some higher-level explanation. Given the input “3rd grade teacher,” how do we “search” our memory and conjure the appropriate information? The answer lies not in introspection but in computation, according to Pylyshyn. Where Pylyshyn went too far was in denying that any non-computational account of a mental function could count as an explanation of a cognitive phenomenon.

    One point of contention is the Shepard rotation task. All the evidence suggests that our minds engage in something like a dynamical rotation of the object in question, not the computationally optimal Cartesian transformation of three points. On what grounds should we then consider this non-cognitive? Seemingly, only a dogmatic adherence to radical computationalism.

    Computation is a powerful resource, but at some point it needs semantic interpretation to be at all worthwhile. This is because computation, by definition, is based only on the shapes of symbols, not their meanings. For example, would long division be of any use at all if you didn’t know what numbers were? So if we intend to describe our cognition in terms of computations on syntactic structures, at what point do we stop dealing in symbols and start understanding these representations as real-world objects? This is similar to the explanatory problem of the homunculus.

    ReplyDelete
  30. (These thoughts are somewhat disorganized.)
    It seems that computation alone is unable to fully account for the complexity and realization of the human mind. Searle questions whether algorithms can generate any kind of feeling at all (he argues that they cannot). On the computationalist theory, it does not matter how the computation is physically realized, only that the result is the same; the hardware (electronic circuitry or a brain, for example) should not matter for generating cognition. However, no computer program has yet been able to pass as thinking, and even if one did, Searle argues that this would still not guarantee thinking (he develops this in his Chinese Room argument). Therefore the Turing Test does not adequately capture human cognition.

    ReplyDelete
  31. “What makes us able to do what we can do? The answer to this question has to be cognitive;
    it has to look into the black box and explain how it works. But not necessarily in the physiological sense.”

    What other possible sense are they referring to here? I believe that looking at physiology gives us a good idea about what is happening in terms of neural activity and molecular interaction. It is how we interpret those physical processes that causes problems. We might have to add computation (or even something like mind-matter, who knows) to fully understand how cognition works.

    “For introspection can only reveal how I do things when I know, explicitly, how I do them, as in mental long-division. But can introspection tell me how I recognize a bird or a chair as a bird or a chair?”

    Introspection tells you the flow of your thoughts. How your thought looks to you at one particular moment is the only thing accessible. It doesn’t tell you by which mechanism a bird is recognized and then associated with the feeling of knowing it is a bird. If introspection were the answer to the causal explanation of cognition, it would probably have made some progress by now.

    This article has left me pensive about the role of computation in cognition. I believe biology possesses tremendous power in terms of molecular plasticity: it allows neural tissue to be reorganized and learning to occur. But is that all? How is computation introduced into this process? If computation has the power to do just about everything and is independent of its physical implementation, then any brain could be a potential computational system. Computation would have the strength to carry out cognition, so mental states would in fact be computational states!

    ReplyDelete
  32. The paper ends with the conclusion that
    1) computationalism is disproven by the symbol grounding problem and Searle's CRA
    2) TT must scale up to what Harnad calls T3 in order to bypass the SG problem

    Maybe this question is naive, but won’t the programming of the T3 robot by the programmer have an impact on the way the robot grounds its symbolic capacities in sensorimotor capacities? In the second-to-last paragraph, Harnad argues that cognitive science needs to scale up "for all of our behavioural capacities"... "the full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters." I understand that a T3 robot can be programmed to learn through trial and error, for example, but isn’t the robot’s programmer (or programmers) a source of external interpretation?

    We have clearly established that "we are unaware of our cognitive blind spots - and we are mostly cognitively blind." Perhaps this is a question that does not concern the ontology of the TT, but wouldn’t the capacities and limitations of the programmer(s) limit the T3 robot’s ability to "ground" symbols?

    ReplyDelete
  33. After reading this article and the comments posted, I see that the central question is whether ‘cognition can just be computation’.

    My opinion on this is that cognition is much more than computation. Although computation encompasses part of cognition, there is much more to it: components such as sensorimotor dynamics contribute significantly to cognition, and computation does not account for them. Given the big role that physiology and human behaviour play, it seems quite complex to develop a machine that could pass the Turing Test.

    In this article, Harnad discusses the “mediated” symbol grounding by which objects described by the symbols of language are connected to their physical instantiations. Thus, the brain can connect a symbol to its respective referent.

    These lines made me further explore how the brain is able to connect sounds, symbols, and their referents. What about when sounds vary and are not the same every time, as with an accent? Can computation account for this? Perhaps the parameters of the computation need to be widened to compensate for the differences in each sound. But where sounds are ambiguous and differ, how are we able to determine which word a given sound is a variant of? How do we know the original word it is being compared to?
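
    One toy way to picture "widening the parameters" (my own sketch, with invented spellings standing in for variable pronunciations) is to map each heard form onto the closest stored word form, so that different accents still land on the same lexical entry:

    def edit_distance(a: str, b: str) -> int:
        # Classic Levenshtein distance, computed row by row.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    lexicon = ["tomato", "potato", "banana"]
    for heard in ["tomahto", "tomayto", "potaytoe"]:
        word = min(lexicon, key=lambda w: edit_distance(heard, w))
        print(heard, "->", word)

    Of course, this only pushes the question back to how the stored forms and the tolerance itself are acquired, which is the grounding problem again.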


    ReplyDelete