Saturday, 2 January 2016

10c. Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer.

Harnad, S. & Scherzer, P. (2008) Spielberg's AI: Another Cuddly No-Brainer. Artificial Intelligence in Medicine 44(2): 83-89.

Consciousness is feeling, and the problem of consciousness is the problem of explaining how and why some of the functions underlying some of our performance capacities are felt rather than just “functed.” But unless we are prepared to assign to feeling a telekinetic power (which all evidence contradicts), feeling cannot be assigned any causal power at all. We cannot explain how or why we feel. Hence the empirical target of cognitive science can only be to scale up to the robotic Turing Test, which is to explain all of our performance capacity, but without explaining consciousness or incorporating it in any way in our functional explanation.


  1. I find that this article captures the main gist of the course well. It describes how feelings, something intrinsic to living organisms, cannot be modelled by computer or robot engineering. With the technology present now, and probably for the next few decades, it is impossible to make robots that really have feelings. These robots can simulate having feelings and even answer positively when asked whether they have feelings. However, it is impossible (at least for now, anyway) to make these machines have real feelings the way living beings do.
    The author, Stevan Harnad, questions whether giving robots feelings would serve some purpose in improving the performance of machines. The answer is yes; however, he hypothesizes that this would not give these machines greatly improved performance capacity, since it is possible to implement feeling-less mechanisms that make machines perform certain actions and avoid certain situations.
    Since feelings do not seem to be a strong component in making people more productive or in making them avoid dangerous situations or locations, could feelings be a crucial component of social life, or of life when interacting with other species? Since robots are not yet designed to work with each other, and do not seem to be penalized by not having feelings, one might think that feelings are not necessary for the things that robots do. Still, it is most likely that feelings occupy some useful role, as evolution would not have made feelings develop in as many living organisms as it has.

    1. "It is most likely that feelings occupy some useful role, as evolution would not have made feelings develop in as many living organisms as it has."

      So one would think. But the hard part is explaining how and why...

  2. I find myself in a similar place as I was after reading the last assigned article. I am at odds with your answer that feeling is not something whose inner workings we can figure out. I find that even you are not sure of the matter at all points in your writing. For the sake of critique, there are many ways you answer the question "is it possible to find the causal mechanism for feeling?", and the different answers have differing degrees of severity.

    “…is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: We can't, and hence we shouldn't even bother to try.”

    A strict no here.

    “But that is precisely what makes the mind/matter problem such a hard (probably insoluble) problem”

    But we get a “probably” here (if my understanding that the mind/matter problem is equivalent to finding a causal mechanism for feeling is correct).

    Now I don’t mean to nitpick about phrasing, but it is an important distinction. While you have been very adamant in denying that we will be able to find a causal mechanism for feeling, I find very little evidence to support this claim in your works. My understanding from the readings and discussions thus far is that the bulk of the “evidence” you put forward is either anecdotal or simply that we haven’t yet figured out a good approach to solving the problem. The bulk of this article itself seems to rely on past failed attempts and current methodologies that are likely to prove fruitless. What I find so unsettling about your claim isn’t that you believe the question will never be solved, but that you advocate that “we shouldn't even bother to try”. Of course, being from the opposing camp, I would greatly disagree with this; but trying to look at the problem through an outside lens, I just don’t think the evidence warrants such a strong conclusion, and we owe it to the scientific method to keep trying until such a conclusion is warranted. The harsh denial that we should even try carries even less force because you yourself later say that the question is probably insoluble, which should certainly mean we should keep looking for a definite answer!

    Despite everything I have said above, I do sympathize with the point that even if we were to find a causal mechanism, what good would that do us? I agree with all your points about AI at the end of the paper, so clearly at this point there’s no reason to suggest that “feeling” AI will perform any better, nor should it help us improve upon AI in the first place. Would it help with medicine? Maybe, but I can’t see the connection just yet. So if anyone has an opinion on what good finding a causal mechanism would actually do for us, I would be interested to hear your thoughts.


    1. Hey Jordan,

      I share in a lot of your frustrations about the nature of the insoluble problem argument, namely that there doesn't seem to be any concrete evidence beyond stating that we've failed thus far, which really nags at my inner science student conscience. But I think the way to overcome this nagging is to consider that the issue is that there is no way to 'pay our dues to the scientific method' as you've suggested, because as Harnad notes, feeling is by definition unobservable and non-confirmable (except in oneself).

    2. Jordan: You're right that the claim that the hard problem is insoluble is just "Stevan Says."

      But that the hard problem is hard is not.

      So have a go. Take a stab at it. Suppose the easy problem were solved: What do you think a causal how/why explanation of feeling would look like, one in which feeling does not keep turning out to be superfluous?

      (When I say "don't try" I mean don't try to build feeling in when you're still only trying to solve the easy problem, because you end up fooling yourself: You put in a functional component that, on the face of it, simply does X, but you add to it that it is a feeling. Now you have the illusion of having made a little progress in both directions, easy and hard. But in reality, all you've done is smuggled in feeling, inexplicably -- and, as always, superfluously.)

      Yes, feeling is a biological trait, not a supernatural one. Surely it must have a biological function...

      Adrienne: "feeling is by definition unobservable and non-confirmable (except in oneself)" -- and, until someone comes up with a bright idea, causally superfluous.

    3. "The second reason comes from the neural correlates of voluntary action: If the neural correlates of felt intention were simultaneous with the functional triggering of voluntary movement in the brain, that would be bad enough (for, as noted, there would be no explanation at all for why intention was felt rather than just functed). But the situation may be even worse: The research of Libet (1985) and others on the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move. So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects"

      I'm glad you mentioned Libet here, for I was under the impression that some use Libet as an illustration of the 'epiphenomenality' of consciousness! Essentially, doesn't saying "The best AI can do is to try to scale up to full Turing-Scale robotic performance capacity and to 'hope' that the conscious correlates will be there too" amount to chalking up consciousness to an epiphenomenon, or evolutionary spandrel?

  4. At the beginning of the paper it is also mentioned that "to be conscious of something means 'to be aware' of something, which in turn means to feel something." So until we are able to scale up to the full Turing Test, there will be no 'understanding, knowing or believing'; and once we have, we will be (according to Professor Harnad) obliged to assume that it is felt, despite the margin of uncertainty?
      It does indeed seem inconceivable to understand, believe or know something in a purely 'functed' as opposed to felt way, but that is because we have no negative example of feeling; therefore we can't imagine what it is like not to feel... Again though, having to assume that this Turing-Test-passing robot has feeling (because of the other-minds problem) makes me unable to shake the feeling that feeling is somewhat epiphenomenal!

      Anyone with input please comment!!

    5. Hey Naima!

      By saying that "feeling is somewhat epiphenomenal", are you trying to say that it is the parallel by-product of something else? I can see what you mean by this; it seems like there has to be some sort of switch that happens when you're born that gives you the feeling you're capable of having, but then you get into a crazy complicated situation of figuring out when or what that initial causation is.

      I think a really interesting quote for you to consider is the following passage from the paper: "If the clenching occurs while I'm in dreamless sleep or a coma, then it is unconscious, just as it would be in any of today's robots. The only difference would be that I can eventually wake up, or recover from the coma, and feel again, and even feel what it's like to hear and believe that I had been in a coma and that my fist had been clenched while I was in the coma (so I am told, or so the video shows me). No such possibility for today's robots. They don't feel a thing (Harnad 1995)." I personally paused at the line "then it is unconscious, just as it would be in any of today's robots" because it makes it seem like there are times when we're being equated to robots that don't feel. I understand that we're not, because then we wake up, etc., but could our sleep state be a window into what a scaled-up TT might reveal?

      Also, Prof Harnad (or anyone), why is feeling "causally superfluous" if it is intrinsically linked to all of our conscious (meaningful) capacities?

  3. “The second reason comes from the neural correlates of voluntary action: If the neural correlates of felt intention were simultaneous with the functional triggering of voluntary movement in the brain, that would be bad enough (for, as noted, there would be no explanation at all for why intention was felt rather than just functed). But the situation may be even worse: The research of Libet (1985) and others on the "readiness potential," a brain process that precedes voluntary movement, suggests that that process begins before the subject feels the intention to move. So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects.”

    Could feeling be related to our sense of self that persists across time? Here’s how I’m thinking about it. Feeling is continuous (or it feels like it is, anyway). But thoughts and emotions aren’t – they arise and pass away. Modern interpretations of Buddhist philosophy argue that our minds are analogous to a movie reel. A movie reel is made up of a number of discrete stills. When the stills are played one after the next in a sufficiently rapid sequence, moviegoers are presented with the illusion of motion and continuity. In the same way, our minds feel continuous but are actually made up of discrete intentional states. Buddhist philosophers use this (and a number of other arguments) to demonstrate that the sense of a self that persists across time is an illusion.

    If feeling is continuous, it could be an adaptive mechanism that helps create a sense of self. Belief in enduring selfhood could provide purpose for actions and future actions.

    Of course, I don’t see how this could be empirically demonstrated or how it could circumvent the other-minds problem. So it’s not great science – just food (speculation) for thought. I’d be curious to hear other people’s thoughts on this if anyone else is interested in cross-cultural philosophy.

    1. Lots of good reasons for being able to remember past events and data, to be able to integrate them, to be able to analyze them as a continuous series. But why should any of that be felt, rather than just done ("functed")?

      Ditto for the self: Good to distinguish one's own body from the rest of the world. To remember. To think you paid for the gas (even if you didn't). But those are all doings, outer and inner: How/why should any of them be felt?

      This is the "flavor" of the problem you come up against every time you try to give a causal/functional explanation of how/why something is felt rather than just done.

  4. Sorry, should we be reading "First, Scale Up to the Robotic Turing Test, Then Worry About Feeling" or "Spielberg's AI: Another Cuddly No-Brainer"? The link sends me to the former, not the latter.

  5. “So the problem is not with uncertainty about the reality of feeling: the problem is with the causal role of feeling in generating (and hence in explaining) performance, and performance capacity.”

    I think I must have understood something wrong, but I don’t see why this is an issue. I thought that the whole issue with feeling was not “the causal role of feeling in performance”, but rather how performance ends up being felt and why would that even be the case at all?

    Later it says that “feeling is not a fifth causal force” (which is repeated again in the Telekinesis segment), which had me completely lost.
    So are we asking whether feeling has a part to play as a cause of performance, or as a result?

    1. Hey Hernan, I don’t think you are misunderstanding anything. You seem to get both pieces just fine; now you simply need to connect the dots. “The causal role of feeling in performance” and “how performance ends up being felt and why that would even be the case at all” are two different ways of saying the same thing. Whatever the explanation is (if one exists at all, and that’s not looking too good), it will explain the causal mechanisms facilitating the feeling associated with performance: what makes that feeling happen, and why that feeling happens at all. To put it a bit more cleanly, the how and the why it feels like something to do something. Dr. Harnad includes a great analogy to help illustrate this point:

      So when we try to go on to explain the causal role of the fact that nociceptive performance capacity’s underlying function is a felt function, we cannot use nociception’s obvious functional benefits to explain (let alone give a causal role to) the fact that nociceptive function also happens to be felt: The question persists: how and why? (page 4)

      As to your second point, when he says “feeling is not a fifth causal force,” all he means is that feeling, in and of itself, cannot directly cause something to happen. It cannot change the motion of the physical world. The equation F = ma does not apply to feeling. And to your final question, is feeling a result of performance, I’m leaning towards yes. I’m not completely positive, but here are the two quotes that make me think that’s how it works:

      (1) So it is not only that the feeling of agency is just an inexplicable correlate rather than a cause of action, but it may come too late in time even to be a correlate of the cause, rather than just one of its aftereffects. (page 5)

      (2) Feeling itself is not performance capacity. It is a correlate of performance capacity. (page 5)

      It all depends on if the terms “aftereffects” and “correlate” imply the same causal relationship as “results” does, and I would say they are all close enough in meaning that your final conclusion is correct.

    2. Thanks Alex, spot-on. (But feeling as an effect is no more explanatory than feeling as a cause, alas: The hard problem is explaining how and why it's there at all!)

  6. First, I just want to comment on how understandable and to-the-point this article is. We have been working through some super dense material lately, and it’s nice to read a paper that focuses on exactly what this course is all about. It covered a lot of the topics we have been discussing all semester, and even clarified a few lingering issues I have had since the first few weeks of class. In fact, I think it would have been helpful to have this reading a little earlier in the course (when these topics are being introduced) rather than at the end (once we already have a decent handle on them). Maybe somewhere around week 4. Anyways…

    There are no theoretical foundations of machine consciousness. Until further notice, neither AI nor neuroscience nor any other empirical discipline can even begin to explain how or why we feel. Nor is there any sign that they ever will. (page 6)

    One issue that started to bother me last reading kept coming up throughout this reading as well: if the hard problem cannot and will not ever be solved, then what’s the point of it all? I guess some of Dr. Harnad’s pessimism is rubbing off on me. If there is no answer out there, what are we doing? And where do we go from here? Is this just game over? Dr. Harnad suggests we focus on passing T3 before even worrying about these issues:

    Think only of implementing what will generate more powerful performance capacity, and worry about consciousness only if and when you have generated our performance capacity, Turing scale. (page 6)

    But if the hard problem is actually insoluble, then we will never be able to generate our capacity to feel. Does that also mean we will never be able to create a T3-passing robot? Can we actually generate our entire performance capacity without also generating our capacity to feel? Or do the two go hand-in-hand? Can we actually have a T3-passing robot that does not feel, or does acting like a feeling being require there to be feeling going on?

    So I guess robotics and the Turing Test is where we go from here, but I still can’t shake the feeling of what’s the point of it all? If there is no solution to the hard problem, and that means there is no way of generating our capacity to feel, then creating a T3-passing robot should be impossible, right? We may be able to get extremely close, but it seems like we will never be able to hit that threshold. So if we are operating on the belief that the hard problem is insoluble, why bother trying at all? Why not call it quits now? Just in case we’re wrong and a solution happens to turn up?

    1. What's the Point?

      "Stevan Says": T3 will feel, but we will not be able to explain how or why.

      What is the point of seeking a how/why explanation of feeling?

      Because it is only because there is feeling that there is any point to anything at all. In an unfeeling universe, nothing matters. There is no wrong or right. No good nor bad. Just doing this or doing that.

    2. If T3 can feel, and we are able to successfully create a T3-passing robot, then won't that robot be the explanation in and of itself? Reverse engineering is the way to get a how/why explanation for feeling, so once we have the robot, we also have the explanation. So if you are right about both T3 feeling and us not being able to explain how or why, wouldn't that imply that we will never be able to pass T3, because to reach T3 in the first place you need to explain the how/why question?

      Wow, I think I'm finally starting to get why the hard problem is so hard.

    3. I have the same question as Alex. Since Stevan believes that zombies are impossible, implementing a T3 Turing-indistinguishable robot with feelings is impossible too, is it not? How can we design a robot based on performance capacity that gives rise to the correlated feeling capacity (if it is even a capacity) when, as the paper notes, there are no models of consciousness for AI?

    4. @Alex, I think that your comment just put the hard problem and the Turing hierarchy into focus for me! So thanks.
      Yes, it seems as though theoretically creating a T3 robot would solve the hard problem (which is why it would be superfluous to get to T4).
      But given that the hard problem is insoluble, according to *Stevan says*, we won't ever be able to engineer it.

      I have a question for Professor Harnad: given that you are vegan on the basis that animals have feeling and consciousness just like we do, and that for the same reason you wouldn't kick Riona or Renuka it is wrong to kill an animal for food, would you also be ethically opposed to eating a cloned animal, since it would likely pass T3 even though we cannot be sure it is conscious?

    5. Alex,

      Re: your second post in this thread, I completely agree. I would have thought that if, as Stevan says, the T3 robot would feel, then there is nothing else left to explain... I thought we had based our entire formulation of the TT and the easy/hard problem on the premise that reverse-engineering provides us with a causal mechanism. So how then would we not have explained the how/why if indeed our hypothetical T3-passing robot did feel? Does it all come back to the OMP yet again, and the fact that we wouldn't be able to know with certainty that our robot felt, and thus that our reverse-engineering was successful? And if indeed that's the case, then as you've said: why bother trying?

      Julia,
      I could be wrong but I think the answer to your question lies in the fact that cloning an animal is not the same as reverse-engineering a system. Our certainty (or lack thereof) regarding the cloned animal's consciousness would be by definition identical to our uncertainty about the originating animal's consciousness.

    6. Alex: T3 does not solve the hard problem. It solves the easy problem.

      Oliver: (1) "Stevan Says that P" does not mean "P is true." (I don't believe T3 can be passed by a zombie. I don't think zombies are possible. But even if T3 feels, that does not solve the hard problem, because it does not explain how and why T3 feels -- which is exactly the same problem as explaining how and why T3 cannot be a zombie!)

      Julia: See reply to Alex above!

      T3 is actually a human test. Our mind-reading powers are much more powerful with humans. But, no, I would not eat a robot animal that was indistinguishable to me from a real animal. (The other-minds problem is the cruelest problem, because it's a problem for the other mind, if you wrongly suppose it does not feel.)

      (Cloning, by the way, would be T5. Of course I would not eat a cloned animal (and of course it would feel). But cloning is not even a T-test because if you clone it, you still don't know how it works, because you did not build it.)

      Adrienne: You (too) are mixing up the easy problem (which T3 solves) with the hard problem (which it does not solve). And "Stevan Says" is neither here nor there...

    7. If we were to succeed in building a T3 robot, I feel as though there are two different solutions here that can’t both be true. In either case, we will have solved the easy problem.
      1) The robot feels, but we haven’t solved the hard problem because we didn’t reverse engineer feeling. Instead, we reverse engineered all of the things we can do, and feeling appeared in an emergent way. And in fact, we would actually never know whether or not this is true because of the other minds problem.
      2) The robot doesn’t feel

      #1 seems unsettling to me because I don’t like the idea of anything being emergent in this way. So I would lean toward accepting #2, that the robot doesn’t feel. But this is why the hard problem is so frustrating: Stevan says we would never know either way! I am curious to hear though, why Stevan thinks a T3 robot would feel, because I haven’t heard any convincing arguments for why as of yet.

    8. *Sorry, in the last paragraph I meant that it is the OMP that is frustrating, not "Stevan says".

  7. "The second reason comes from the neural correlates of voluntary action: If the neural correlates of felt intention were simultaneous with the functional triggering of voluntary movement in the brain, that would be bad enough (for, as noted, there would be no explanation at all for why intention was felt rather than just functed)"

    Feelings are easily remembered. You may not remember the details of something, but chances are you will remember the associated feeling (as proven by a visceral reaction to something when that something is named/brought up in conversation). This makes me wonder if the "felt"-ness of intention is perhaps a learned behaviour. If we can associate a feeling with an intention then we can remember more easily to have that intention again, if the intention yielded a beneficial behaviour. Perhaps this has SOMETHING to do with why it's adaptive.

    1. Remembering: useful. Intending: useful. But both are just doings. Why are they felt? And why should feeling make you remember better, rather than just remembering better to remember better? (Welcome to the hard problem!)

  8. - From your example with anesthesia, to what degree does the body, the hardware, play a role in feeling? Is consciousness part of the body? Is there reason to believe that consciousness manifests itself in our brain through specific brain patterns or specialized brain regions, seeing as there is correlation between feeling and function?

    - Isn’t part of the reason why the hard problem is insoluble that we’re feeling about feeling? Just as there is thinking about thinking, we’re thinking about feeling; but thinking is also a felt state, so it becomes trying to explain something with itself? Yet the search for a causal mechanism comes up empty: telekinesis isn’t supported by evidence, and the causal forces of physics don’t explain feeling, so the objective answer isn’t there either… Are there other approaches to studying consciousness, then? Or at least, what are the logical implications of never being able to solve the hard problem?

    1. Hi Oliver,
      The same questions have crossed my mind. If we cannot find a solution to cognition because we are indeed ourselves cognizers, then the hard problem will remain unsolvable. It might be the case that the solution is part of the question. I wonder how we could even comprehend cognition if part of that understanding resides inside the thought of it.

    2. The logical implication of the fact that we can't explain feeling is that there is something special about feeling (and/or something special about explaining feeling). I think ("Stevan Says") it's as much a problem of causal explanation as a problem about the nature of feeling. But I don't have the answer. I just know which answers (and kinds of answers) are non-starters.

      I don't think the problem arises because we are thinking about thinking, or because we ourselves are feelers -- though it's only because we are feelers, and think, that we even know there is a problem!

      In a T3 zombie world -- if there could be one -- it's a real puzzle what they would be talking about when they talked about feeling! Probably it would be about behavioral dispositions, and what they could detect of their internal (not mental: internal) correlates. Those would be the grounding of the words -- the same words as ours. But actually this is one of the reasons I don't believe there could be T3 zombies ("Stevan Says"). I think they are just as impossible as passing T2 through computation alone. But in that case, we have the need for symbol grounding as the explanation. There is no explanation for why there can't be T3 zombies (other than that maybe there can't be T3s at all, just biological organisms, for some unexplained hard reason...).

    3. Hey Oliver,

      To address your first question:
      "To what degree does the body, the hardware, play a role in feeling? Is consciousness part of the body?"

      I would argue that A) hardware includes both the body and the brain, but B) that the body is implicated in feeling insofar as it permits for a certain array of sensory experiences afforded by the nervous system which innervates it. I don't think there is anything innate about the body that the nervous system can't encapsulate when it comes to feeling.

      "Is there reason to believe that consciousness manifests itself in our brain through specific brain patterns or specialized brain regions seeing as there is correlation between feeling and function?"

      This is a tough question I think... On the one hand, there is no reason to believe that we could point to the brain somewhere and say, "Look! That area controls feeling!" Since we have no causal mechanism for this apparently superfluous phenomenon, how would you even identify or describe neural correlates of feeling? More often than not, we can point to neural correlates and suggest a certain sensory representation, but this has no bearing on feeling, as a zombie could easily have the neural correlates of a sensory experience, while responding appropriately, without any need for feeling.

      On the other hand, it appears that modules of feeling are everywhere in the brain! In a previous skywriting I discussed that removing the V4/V8 area of visual cortex would remove the capacity for colour perception, and the feeling of colour perception in the victim. This would then suggest that feeling capacity is distributed along with cognitive capacity. This of course gets us no closer to understanding how or why the brain generates feeling capacity, but it's interesting nonetheless.

  9. “Feeling Versus 'Functing': How and Why Do We Feel? In some sense. But that is precisely what makes the mind/matter problem such a hard (probably insoluble) problem: Because we cannot explain how feeling and its neural correlates are the same thing; and even less can we explain why adaptive functions are accompanied by feelings at all. Indeed, it is the "why" that is the real problem. The existence of feelings is not in doubt. The "identity" of feelings with their invariant neural correlates is also beyond doubt (though it is also beyond explanatory reach, hence beyond comprehension). We are as ready to accept that the brain correlates of feeling are the feelings, as we are that other people feel. But what we cannot explain is why: Why are some adaptive functions felt? And what is the causal role -- the adaptive, functional advantage -- of the fact that those functions are felt rather than just functed?”

    I think I misunderstood the root of the problem while reading the other two articles. The problem “why do we feel?” is not “What enables us to feel, and how?” but rather “What is the purpose of feeling, if we can accomplish the same functions without the ability to feel?” These are two very different questions, and as technology improves, the latter becomes harder and harder to answer. Why were we programmed to feel when we can program robots to do almost everything we can without feeling? This led me to ask whether there are any functions we perform that require feelings. One big answer came up: reproductive success. As far as I know, and I may be wrong, we have not programmed robots to reproduce and pass on their information. Even if robots could pass information to one another, through wifi let’s say, if one is physically destroyed, a human is necessary to recreate or fix it. At this point in technology we haven’t created robots to fix robots or to build new, more complex robots, or have we? In order for an organism, or a robot, to exist through generations, it would have to have the ability to create new robots more advanced than itself. Humans use feelings to do this, but could robots?

  10. “Programming. Now we come to the "programming." AI's robot-boy is billed as being "programmed" to love. Now exactly what does it mean to be "programmed" to love? I know what a computer program is. It is a code that, when it is run on a machine, makes the machine go into various states -- on/off, hot/cold, move/don't-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don't mean choosing to love one person rather than another (if we can "choose" such things at all, we get into the problem of "free will," which is a bigger question than what we are considering here): I mean choosing to be able to love -- or to feel anything at all: Is our species not "programmed" for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?”

    Could we ever program feelings? This part brings me back to the Other-Minds Problem. Even if we did program a robot to feel, how would we know for a fact that it is feeling? How would we know how it feels? I don’t think we could. Would we have to accept it as feeling just in case it does feel, the way we accept other living beings as feeling, even though we cannot know for sure?

  11. “For feeling is not a fifth causal force. It must be piggy-backing on the other four, somehow. It is just that in the voluntary case it feels as if the cause is me. But the other four forces are all unfelt forces”
    This is an interesting point that I have never considered before.
    At the beginning of the course we talked extensively about why cognition is not just computation. We said computation is rule based, symbol based, implementation independent and semantically interpretable. We said that cognition isn’t only all of these things, it must be causally explained another way. We discussed the idea that like the heart pumps blood as a dynamical system, the brain pumps behaviour as a dynamical system. AS a result we concluded that cognition is causally explained by some sort of dynamical system. This quotation explains the four dynamic forces in the universe: electromagnetism, gravitation, and the strong and weak subatomic forces and then proceeds to point out that “feeling” this unique capacity must be some combination of these existing 4 forces. This baffles me. How could something that we cannot explain how or why it exists be composed of 4 forces for which there is lots of evidence thus far? Of course apples could start to fall up, but until now they haven’t so we accept the force of gravity. In a similar respect, how is this “force of feeling” that we cannot be sure even exists in anyone but ourselves created by forces who’s action we see everyday as demonstrated by static hair in the winter time, or leaves falling to the ground in autumn?

    1. • “For the clenching of my fist to be caused by my willing it to be clenched, rather than by some combination of the usual four feelingless forces of nature, would require evidence of a fifth causal force – a telekinetic force. And there is no such evidence, hence no such fifth force.”
      I am still not clear on the “Granny objections” here: the move of explaining feeling by making up a telekinetic fifth force and saying that this must be what makes us feel.
      What is so special about feeling anyways? I am feeling (pun intended) exasperated at how much energy we spend trying to causally explain it when we can’t even be sure it’s happening in other people. There’s no way to know that a machine as simple as my computer doesn’t FEEL. Maybe my vacuum cleaner even feels sad because I haven’t used it in months. Maybe feeling is just a combination of the four forces mentioned above, considering they manifest in every other observable and non-observable thing happening in the universe. This article is an articulate summary of everything we have discussed thus far, but it still seems like a lot of “Stevan Says,” and I am still not totally convinced that consciousness and feeling are the same, or that feeling is the decisive factor in determining whether we have reverse-engineered cognition.

      (I am tacking this onto Renuka's comment because I quote the same passage and thought it might make more sense than posting it separately)

      Also, in response: I don't know why it is so baffling that something like cognition or feeling is inexplicable to us in terms of these four forces. To me, and maybe this is pessimistic or naive, it just seems that humans are too invested in our own feeling to understand it objectively through these forces, and instead like to point to it as something special (a Granny objection, I think).

  12. “What Is/Isn’t a Robot? So, what is a "robot," exactly? It's a man-made system that can move independently. So, is a human baby a robot? Let's say not, though it fits the definition so far! It's a robot only if it's not made in the "usual way" we make babies. So, is a test-tube fertilized baby, or a cloned one, a robot? No. Even one that grows entirely in an incubator? No, it's still growing from "naturally" man-made cells, or clones of them.”

    We, as human beings, put a lot of emphasis on the importance of feeling. As we have discussed, everything we are conscious of, we feel (there is no doubt that some feeling has adaptive advantages, but not all of it does, which raises the question of why we feel at all, at every conscious moment of our lives). The little clip of the movie highlights this fact about our perception of feeling: the crowd had no problem watching the destruction of ‘feeling-less robots’; however, once it was the little boy’s turn, who looked more like a human and reacted as a human would, they chose not to kill him because he might be human, i.e. he feels (the morality of the crowd starts to kick in). So, as Harnad mentions, the term ‘robot’ begins to seem arbitrary. If we would still accept people with synthetic transplants as feeling human beings, then it should follow that we should not kick or destroy a T3/T4 robot either. That being said, being able to feel is not solely a human capacity (vertebrate non-human animals feel, and other organisms may feel). We often lean on consciousness when defining what makes us unique, but that cannot be right if consciousness = feeling. As we’ve discussed in class, language is unique to us and is what makes human beings human. This brings me back to thinking that if animals feel, yet do not have language (which is an important part of human cognition), then I do agree that we should first try to reverse-engineer our performance capacity, i.e. a T3 robot that can do everything we can do. This answer is not entirely satisfying given that feeling is so omnipresent (there is such a strong urge to come up with a causal mechanism for feeling, but at this point there are no answers to the hard problem).

  13. “A counter intuition immediately suggests itself: “Surely I am conscious of things I don't feel!””
    How could that be so? I suppose it would be possible if you were only a “functing” machine instead of a feeling machine. But even then, how could you be conscious without experiencing feelings? From what I understand, robots acting in the world can have beliefs, since they can tell you about them and take actions congruent with those beliefs. But there is nothing of real interest in machines that are only doing; the real interest must lie with beings that feel. Feeling beings are the ones that can appreciate the beauty of the world. As Professor Harnad says in class, seeking the how/why explanation for feeling matters because if there are no feelings, nothing matters any more: no right or wrong, no good or bad.
    “Both robotics and cognitive science try to explain the causal basis for performance capacity, the functional mechanism that generates it: robotics, in order to get machines to do useful things for us, and cognitive science, in order to explain how we ourselves are able to do such things.”
    I agree that performance capacities are in the scope of cognitive science. But I’m wondering whether performance can tell us anything about feelings. T3, indistinguishable from us in terms of doing, wouldn’t give you the causal explanation of why there are feelings. It would basically tell you that this is one way to achieve such behavior, though not necessarily the brain’s way. Such reverse engineering still wouldn’t explain how and why feelings are correlated with neural activity.

    But why exactly do we feel? Why did evolution come up with this adaptive function? Feelings correlate with behaviors, but are they necessary for them? Could it be that my actions occur because my motivation comes from my feelings? Or could it be that we are “functing” machines that happen to have a correlated feeling function as well?

  14. I do not agree with the statement made in the ‘Correlation and Causation’ paragraph. The authors give the example that humans learn, through nociception, to avoid circumstances that cause injury (e.g., keeping our weight off an injured limb). The authors say that humans do not have to learn this way: "Everything just described can be accomplished, functionally, by merely detecting and responding to the injury-causing conditions, learning to avoid them, etc. All those functions can be accomplished without feeling a thing; indeed, robots can already do such things today, to a limited degree."

    There exists a condition called congenital insensitivity to pain (CIP), also known as congenital analgesia; people with this condition are unable to feel pain. People who suffer from this disorder (there are only about 40 documented cases) often injure themselves, often severely, because they are unable to feel pain. Since there are so few cases of this disorder, I think it is safe to assume that the ability to perceive pain is very important! So maybe, just as there is an evolutionary advantage to the ability to feel pain, there could be evolutionary advantages to other feelings as well, perhaps all feelings.

  15. “Are models of consciousness useful for AI? No. First, consciousness is feeling. Second, the only thing that can be 'modeled' is I/O performance capacity, and to model that is to design a system that can generate that performance capacity. Feeling itself is not performance capacity. It is a correlate of performance capacity.”

    I do side with “Stevan Says” that we cannot reverse-engineer feeling, but I find the abrupt “no” to the question above a bit unwarranted. Consciousness is feeling; thus there is a distinction between things processed in our brain that are felt and things processed in our brain that are not felt. In other words, some brain processes are accessible and can be felt, and some are not and are performed unconsciously.

    Cognitive science or AI may not be able to explain causally how/why some processes are felt and some are not, but it may be useful to analyze what is available to feeling and what is not in order to allocate processing time when designing a robot with performance capacity. Wouldn’t knowing what is filtered by consciousness be useful in understanding how to reverse engineer an effective robot? Feeling “is a correlate of performance capacity” thus it seems that defining what performance capacities are felt may be important when studying AI, while disregarding how/why these things are felt.

  16. Why does it feel like something to process information?

    Many times I've wondered, as many have, what our purpose on this planet is. But not often enough have I considered why it is that we are conscious beings. Why is it that we have feelings? The answer seems obvious to me. We feel because we have evolved to feel. My gut intuition is that we once began with basic feelings/emotions (pain, anger, joy, disgust, love, etc.) to aid in survival and reproduction, and that these same feelings have evolved to account for many of our "conscious states". I don't agree that as feelingless Darwinian survival machines we could have accomplished what we have to this day: forming a civilization, an intricate society, advances in medicine and technology, co-habitation, and now greater awareness of the environment as well as a passion for protecting human and animal rights. Even if we take a simple example such as Harnad proposes, of feeling pain from an injured limb, we can already begin to see how feeling would help us. Yes, I agree that perhaps feeling could be substituted by our nociceptors simply detecting damage and alerting our brain to act without the actual feeling of pain; however, the fact that we are ABLE to perceive this pain is what aids us in our timely response and teaches us to deal with specific situations. Our feelings drive our responses and actions and guide the way we choose to live our lives. We have advanced so much as a species that we empathize with one another and feel moral emotions as well; we do not just react involuntarily to stimuli. If we were all feelingless robots, just imagine what our society would look like today. Would we form social bonds just the same? Or look out for one another as we do? Care about anyone but ourselves and our in-group? Would we just be here to survive and reproduce? Maybe I'm not understanding the question of why we have feelings, so please correct me if I'm wrong.
I'm not sure I agree with the why/how problem being addressed in one go. I feel that the question of HOW we have these feelings is much more important to address than why (and the why we can make clear conjectures about, no?). Wouldn't intentionality simply be lost if we were not able to feel? Why do anything at all other than survive and reproduce?! Feeling places greater weight on possible courses of action in the decision trees of everyday human life! Could someone clarify the why question?

    1. 1. Remember that feelings are not just emotions. It also feels like something to see, hear, taste, smell, and touch; also to move, think, plan, wish, understand, and mean.

      2. Yes, feeling is a biological trait. The brain must generate it (somehow). And it must have evolved (somehow). The hard part is explaining how, and why.

      3. How and why does being able to feel something rather than just detect and act on it help us to act on it faster?

      4. If we were all T3 robots, and T3 robots were feelingless zombies, we would be able to do everything we can do now (that's what passing T3 requires), indistinguishably from the way we do it. You can say that you don't think it is possible to build a robot that can pass T3. Then that would be "Linda Says." But to explain why that's true, you'd have to have a hint of the solution to the hard problem, whose flip-side equivalent is "How and why can we not be zombies?"

      5. The how question is functional: How does the brain (or T4 or T3) generate feeling? The why question is: what for? what is the adaptive advantage?

  17. So, Stevan says the Hard Problem is insoluble. Computation is not all of cognition – we have feeling. Yet, we cannot explain how, or why, computation is not all of cognition.

    I wonder: would it not FEEL like something to have an answer to the Hard Problem (according to Stevan’s formulation)? If it were possible to have an answer for how/why we feel, would that answer still be a semantic interpretation of symbols? A live human meaning-maker would be communicating the answer to the Hard Problem, but it would still be interpreted by the people involved (syntax to semantics). Can we communicate the answer to the Hard Problem? Can we feel the answer to the Hard Problem?

    If Stevan says all of cognition is feeling, then what more can there be? What more can be said if we put everything we do under the umbrella of feeling, and then say it can’t be a causal explanation? If we feel everything, how can we have an answer to how/why feeling? Perhaps the formulation of the question is what makes it impossible to find a causal mechanism?

    1. 1. Please always distinguish "P is true" from "Stevan Says P is true."

      2. It feels like something to understand anything, including a how/why explanation of anything. (I think you're getting a little too complicated for kid-sib here...)

  18. "Everything pertaining to consciousness (feeling) will be merely a mentalistic interpretation of the functional mechanism of performance capacity (i.e., a hermeneutic exercise, rather than the causal, empirical explanation that is needed)” (Harnad & Scherzer 2007).

    I agree with the premise of this statement and all it entails, but I am optimistic that the future of cognitive science may be able to elucidate the hard problem. I have no doubt that feeling performs some critical adaptive role and that the answer to the “how” lies in the biology of organisms. From what I understand, all science thus far is based on observable things, so to speak. However, feeling itself is not observable; it is available only in the first person. Therefore, Stevan Says it is probably impossible for science to elucidate the actual causal mechanism for it, because any explanation we provide is an ad hoc explanation of how and why we have feeling, and there will be no way to test it. For example, a robot that passes the Turing Test will have all of our observable performance capacity, yet only it will know whether it has feelings, and we can only trust its word. I am in the camp that feeling is inextricably coupled with our performance capacity, because evolution would not have made feeling such a widespread property (we assume) if it had no adaptive function. It is unlikely to be like a vestigial organ either, because it is such a significant part of feeling organisms’ lives. I would be comfortable stopping here for now and waiting for something to pass the Turing Test before exploring the question of feeling. As Harnad says, feeling is not a fifth causal force, so it has to be part of the four causal forces, and the four causal forces thus far have been observable! Like Stevan, I distrust the correlational explanations we find in neural studies; thus we would need some sort of external behaviour that signifies feeling if we wish to observe how the mechanisms come together to form feeling. However, that is the paradox of feeling, and why the question should remain closed until a later date.

    1. I agree that feeling is a heritable, biological trait and as such it must have an adaptive value, over and above doing: but the hard part is saying what that adaptive value over and above doing is. But what do you mean by "some sort of external behaviour that signifies feeling" other than a correlate (which is not a causal explanation)?

  19. I understand that in class we keep the hard problem and the other-minds problem very distinct. However, when I read the article "First, Scale Up to the Robotic Turing Test, Then Worry About Feeling," the premise seems to me to rest on the fact that feeling isn't observable in the third person, and because science is based on observation through the four causal forces, it is impossible to render causal explanations for it, since it is available only in the first person. This seems like an implication of the other-minds problem: we can't be sure others have minds, and hence feelings, but because of the similarity between us and other organisms we guess that they do. The greater the difference, the less certain we are. While thinking, without the feeling aspect, isn't observable either, it is requisite for our performance capacity, and because performance capacity is observable we can reverse-engineer it to figure out how we do things. Feeling, however, is not necessary for this performance capacity and therefore is not observable in it, and this seems to echo the other-minds problem, in that we cannot know that anyone but ourselves has a mind. Could you clarify why you want us to see these two problems as so different, when I see them as hopelessly intertwined? Is it because, despite the similarities, the hard problem focuses on the how and why of feeling, while the other-minds problem just states that we cannot know whether others have minds? Is it valid to assume that the other-minds problem contributes to why the hard problem is so hard?

    1. No, I think that even if T3 feels, hence you can trust his introspective reports and his heterophenomenology, feeling still keeps dangling superfluously, unexplained, even though present. The other-minds problem is determining whether or not there is feeling. The hard problem is: if/when there is feeling, how and why? What does it add, causally, to the "easy solution" (T3/T4)?

  20. This comment has been removed by the author.

  21. “We are ourselves natural biochemical robots that were built by the Blind Watchmaker.”

    I had never heard the term “Blind Watchmaker” before, and it was mentioned more than once in the article, so I decided to look it up. For anyone else who didn’t know, here is what it refers to. The Blind Watchmaker is basically a figure for Darwinian evolution. “The Blind Watchmaker” is a book written by Richard Dawkins in 1986 (it is also the name of a computer model of artificial selection), in which he presents an explanation of and argument for the theory of evolution by natural selection. The watchmaker belongs to the 18th-century theologian William Paley, who argued that just as a watch is too complicated and functional to have sprung into existence by accident, so too must all living things, with their far greater complexity. Charles Darwin’s brilliant discovery of natural selection challenged the creationist’s (Paley’s) argument. Dawkins’s point is that natural selection, although it does the designing, has no purpose in mind. I hadn’t thought about this until now, but it’s true: natural selection is tied to survival of the fittest and concerns the physical attributes of humans, animals, and other species through evolution; it has nothing in particular to do with the mind.

    How and why do we feel?

    This is the hard problem that has been drilled into us and hurting our heads for the last few weeks, since there is still no answer. I took a step back from this reading and thought about it for a second. Why are we even asking this question? What will it solve? It’s a very philosophical question. Isn’t it similar to asking, “Why do we exist?”, which is almost like asking, “What is our purpose in the world?” or “What is my purpose in life?” Maybe, just maybe, it is a rhetorical question. Maybe it’s just one of those mysteries in life we are never really supposed to know the answer to, which makes life all the more interesting, because everything can be questioned. The point is that just as I am never going to know my exact purpose in life for sure (I may think I have figured it out at some point, but I will never really know if it’s true), we may never figure out the hard problem…

  22. Following the class discussion on Monday, I am still wondering whether the hard and easy problems can actually be solved separately. And if they can’t, and the hard problem is insoluble, will we ever be able to successfully model human cognition? I realize there is the possibility that we could have evolved to merely ‘do’ things without feeling them, and that there are now examples of Artificial Intelligence able to mimic some aspects of human behavior without feeling. Yet we seem a long way from creating a T3 robot that comes anywhere near full, human-like performance. And perhaps that is because ‘feeling’ is central to performance. Could our ‘doing’ capacities (and connecting all our ‘doing’ capacities in a coherent way) rely on ‘feeling’? Perhaps feeling is a ‘doing’ mechanism in its own right, and so is a central part of our ‘doing’ capacity and thus necessary even to construct a T3 robot.

    1. Hi Rose,
      It seems there will never be a way to successfully model human cognition. Hypothetically, if a T3 robot able to perform like a human were created, there would still be no way of knowing whether or not it is feeling, and thus conscious. Mimicking isn’t enough; a robot acting as if it’s conscious doesn’t mean it is conscious (the same way acting like you enjoy eating Brussels sprouts doesn’t mean you truly enjoy them, though to the outside world it would look like you do) (that is, assuming you don’t like Brussels sprouts, haha). Feeling is not central to performance; they are dissociated things, because performance alone cannot support a claim about feeling. Correlation doesn’t imply causation; doing something the right way doesn’t mean any feeling is going on. Therefore, even if an Artificial Intelligence were able to fool us into believing it feels, that would not mean it is actually feeling. It might just be doing what it’s programmed to do. So although we might be a long way from creating a “perfect” T3 robot, it’s not because doing is inherently tied to feeling, since doing, although complicated, is plausibly programmable. Rather, it’s because it’s implausible to program anything that can feel.

  23. Here is why I think the hard problem is insoluble:

    Say we are successful at building a sensorimotor T3 robot. How do I know why, how and IF it feels? I believe the other minds problem ultimately prevents us from ever building something that can feel because we can never truly KNOW, with confidence, that it feels. Perhaps it is just fooling us by mimicking, or imitating our own behaviour/the fact that we feel.

    Here's where I think we have a shot at getting closer to understanding how and why we feel:

    Instead of reverse engineering a T3 robot, I think efforts should be kept within the realm of physiology/biology. We have agreed on the fact that feeling is biological. By turning to external fields such as computer science and technology, I believe we ultimately hit a wall -- I don't believe we can ever get sufficiently close to our own complex biology and consciousness.

    I remember bringing this up in one of the first classes, and I still wonder about this. Have we ever created a system that is capable of our performance capacity and feeling? Yes. We give birth. I realize that Cognitive Science aims to reverse engineer an external explanation for everything that we do, but I do think that in a far away, (extremely unethical) world, we could knock out certain human biological functions until we find the magic "it factor" which is uniquely human -- that which is required for consciousness.

  24. The way I understood the paper, there is a clear difference between consciousness and performance capacity. “Consciousness is feeling,” as put in the paper, in that what can feel is conscious (and thus what cannot feel is not conscious). This is different from something that is able to imitate consciousness. Performing as if one were conscious does not mean one is conscious if there is no feeling (as in the identification example, in which identification is not conscious). Thus, regardless of how advanced a robot may be, it is unable to feel and thus unable to be conscious, because there is no way to “build” feelings into a robot. That is, if programmed to do so, the robot would act as if it had feelings without actually having any (even though there is no way to know for sure that anything but oneself is feeling, that isn’t the issue in the paper). As has been said many times, “correlation does not imply causation”: even if something is acting like it is feeling, that does not mean it actually is feeling. Therefore there is no way of knowing how and why feelings occur, as there are possible causes, but that does not mean those possible causes are what is causing the feelings.
    I quite liked the second question from the AAAI Symposium: “Are AI systems useful for understanding consciousness?” The answer was a flat “no,” because how could something that does not feel, and thus is not conscious, explain consciousness itself? Using machines to explain this would only yield correlational explanations, not causal ones. Therefore it is not realistic that any machine might ever be able to explain feeling and consciousness.

  25. Since this piece is framed as a sort of art criticism, I’ll give the same a try. Most works that try to be meaningful operate on the level of Spielberg’s “AI”: they take complex ideas pertinent to human experience and reduce them to the point where they merely toy with one or two basic emotions. Contrast Borges’ “Funes”: His literary portrayal of a man without the power to abstract cannot depict the full implications of a complete loss of abstraction, due to the abstract nature of writing itself, which requires him to paint Funes’ reality as a linguistic reality. Yet he nonetheless works within the limitations of his medium to deliver a compelling portrayal of the human implications of Funes' deficit, which maintains immeasurably more emotional nuance while demonstrating a firm grasp on the psychological importance of abstraction. In short, Borges work is bounded only by the limitations of his medium, while Spielberg's is bounded by the limitations of his understanding.

  26. Consciousness = awareness = feeling, so consciousness = feeling ( A = B = C, therefore A = C). OK.

    "So the problem is not with uncertainty about the reality of feeling: the problem is with the causal role of feeling in generating (and hence in explaining) performance, and performance capacity."
    But perhaps it does not have a causal role? Although, as mentioned further in the article, it almost certainly must have some sort of adaptive function.

    "The real question, then, for cognitive robotics (i.e., for that branch of robotics that is concerned with explaining how animals and humans can do what they can do, rather than just with creating devices that can do things we'd like to have done for us) is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: We can't, and hence we shouldn't even bother to try."

    This is sort of what I've been trying to get at with my comments on the last article: even if we say that cognition involves feeling (because cognition is what allows us to do what we do, and feeling is something we do), feeling doesn't necessarily matter for AI. Since we cannot (as of now) create an AI with feeling, AI can never tell us anything about that part of cognition. That being said, it doesn't mean that AI can't tell us about performance capacities, since AI (as of now) is simply an I/O performance-capacity model, and much of what we do is simply I/O.

    I found the idea that more complex robots are more likely to be conscious very interesting: "The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too..... Will systems that can perform more and better be more likely to feel? The answer to that might be a guarded yes, if we imagine systems that scale up from invertebrate, to vertebrate, to mammalian, to primate to human performance capacity, Turing-scale."

    This suggests that we shouldn't worry about feeling at all. Instead we should focus on making more refined and complex performance capacity models and perhaps feeling will follow (as it inevitably did at some point in evolution). I personally feel like this seems unlikely, but I suppose the point is that the task of creating feeling is so daunting and since we continue to make progress in the performance capacity dimension of AI, we should focus on that and if we manage to emulate feeling along the way, then fantastic.

  27. "The concept of force plays an essential explanatory role in current physical theory. Until/unless they are unified, there are four forces: electromagnetism, gravitation, and the strong and weak subatomic forces. There is no evidence of any further forces. Hence even when it feels as if I've just clenched my fist voluntarily (i.e., because I felt like it, because I willed it), the real cause of the clenching of my fist voluntarily has to be a lot more like what it is when my fist clenches involuntarily, because of a reflex or a muscle spasm. For feeling is not a fifth causal force. It must be piggy-backing on the other four, somehow. It is just that in the voluntary case it feels as if the cause is me.

    But the other four forces are all unfelt forces."

    It is quite a mystifying, but mainly flat-out frustrating, experience trying to figure out how feeling arises from a subset of unfeeling bits of matter and the unfeeling forces acting upon them. I'd have to agree that the problem is definitely insoluble. Something I noticed in relation to both this paper and the previous two (relating to Block's distinction between different kinds of consciousness), an idea I find intriguing but ultimately futile, appears in the following article:

    Basically, the authors are looking for the most basic, minimal set of neural correlates responsible for access and phenomenal consciousness. This seems somewhat trivial, mainly because the definition of consciousness varies arbitrarily, especially if we consider consciousness to be something we are aware of and thus something that is felt (a comatose state is medically considered a mode of consciousness).

    But it is an interesting sentiment: what minimal system of these unfeeling bits of matter and the unfeeling four forces gives rise to feeling? I suppose that this is still nowhere even close to approaching the real how and why, but it could actually be a step in the right direction. You never know until you try. It may be misguided, but even misguided searches can still help to illuminate aspects of the problem, perhaps ones we might not have noticed otherwise.

  28. This comment has been removed by the author.

  29. As others have stated, I think this paper nicely summarizes the class and is very clear in contextualizing and positioning the "empirical target" of cognitive science.

    As I reflect on this class, I do feel the need to at least posit this question, even if it might be beside the point and outside the scope of interest of cognitive science itself: if consciousness is ultimately a biological issue, then why would we turn to a robot (even a T3 robot) to look for the how and why of consciousness? The paper clearly says that "The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too." But because of the other-minds problem, and also because we have never been able to successfully build a feeling T3 robot, what are the chances of one being built AND solving the hard problem? (None, Stevan Says.)

    If we entertain the idea that the hard problem is not insoluble (which it probably is), then wouldn't the better subject to study be humans instead, since we already DO feel and are conscious? While we have not fully solved the easy problem, it is evident that we have some explanations for how and why we do what we can do, the knowledge that we feel, and that consciousness is feeling. Thus, since we are not "feelingless Darwinian machines," it seems more promising to take a stab at getting closer to the hard question by exploring humans, especially since our knowledge of what feeling and consciousness are remains profoundly inconclusive (Libet's "readiness potential," for example).

  30. I just have a clarification question for Dr. Harnad. You’ve mentioned a few times in the comments that you believe it would be impossible to have a ‘zombie’ T3 robot - that we couldn’t successfully generate our entire performance capacity without having generated feeling in that system as well. But in this article, you say:

    the real question, then, for cognitive robotics… is whether feeling is a property that we can and should try to build into our robots. Let us quickly give our answer: we can’t, and hence we shouldn’t even bother to try.

    I’m just a bit confused as to whether there is a subtlety here that I’m missing, or whether you’ve simply changed your mind? Or do you mean that we could never ‘build in’ feeling itself, intentionally, since we have no concept of how it is generated or what purpose it serves? In that case, does your belief in the impossibility of a ‘zombie’ T3 mean that you believe feeling is an ‘emergent property’ (for lack of a better term) of our bodies? And why, if you don’t mind me asking, do you believe that a T3 zombie would be impossible? Where does this intuition come from?

    (Sorry I guess that was actually a lot of questions for Dr. Harnad).

    1. Actually - one more question! I’m also sorry if you’ve been beaten to death with questions like this since its release, but given the Spielberg section of this paper, how did you feel about the film Ex Machina?

      While we’ll have to overlook some of its early, blatant shortcomings (calling something a ‘Turing Test’ that doesn’t even remotely resemble the actual TT, the protagonist’s exclamation about building a ‘conscious machine’, etc.), did you find that it tackled the hard problem better than AI? I definitely did.

      The film’s most interesting strength, in my opinion, was in how it portrayed Ava: a human face on an exposed and clearly robotic/mechanical body. From the start, I feel like this sets up a much deeper and more interesting problem for the protagonist and viewer. There is no trickery here, no hiding the metal and pretending she is human. From the start, we are immensely aware that Ava is a robot, and yet in spite of this, the protagonist becomes convinced of her ‘consciousness’ (aka the fact that she feels). He is so convinced of it that he allows himself to be harmed in order to alleviate the suffering that he believes her to endure. In this way, I felt the movie did a really nice job of bringing in the Other Minds Problem, asking both the characters and the viewer whether they could be certain that Ava wasn’t feeling, and making us really question how we would treat her if she existed.

  31. ‘Unconscious knowing makes no more sense than unfelt feeling. Indeed it’s the same thing. And unconscious know-how is merely performance capacity, not unconscious “know- that.”’
    Would something like implicit learning fall under unconscious “know-that”?

    ‘1. Are models of consciousness useful for AI? No. First, consciousness is feeling. Second, the only thing that can be “modeled” is I/O performance capacity, and to model that is to design a system that can generate that performance capacity. Feeling itself is not performance capacity. It is a correlate of performance capacity. The best that AI can do is to try to scale up to full Turing-scale robotic performance capacity and to hope that the conscious correlates will be there too.’
    I feel like this references back to my comment on 10b: the process of scaling up would, given our understanding of the human brain, imply a system of innumerably complex interactions, whether digital or some theoretical T4-esque neurochemistry, and feeling might simply be an outcome of that system.

  32. Okay, so in 10a I realize that my thought experiment left out much of the “why” in the hard problem. The “how” aside, Harnad’s article says that the why is the real problem. I had originally posited salience, but I’ll amend that thought to make it more general. First, the reason we have feeling must, in my opinion, be an evolutionary one. Since almost all animals have it, I believe it must have come from evolution. Whether it came as a byproduct of something else, a random mutation, or Baldwinian evolution, I don’t know, but I feel relatively secure in saying that if it weren’t adaptive in some way, we wouldn’t all have it. I personally feel quite satisfied with the explanation of feeling being an adaptive, evolved trait without the need to go much further; there are lots of evolved traits whose origins we haven’t quite deciphered yet. But to give a potential explanation, I would say that feeling is an amplification device. It doesn’t cause amplification; it is the amplification. Feeling, as we all know, is just chemicals; let’s call it all serotonin just to be simple. Imagine you put your hand in a flame. If we only had “functers”/detectors, they would detect this tissue injury and cause the reflexive movement of your hand, as well as the “memory” that putting your hand in fire is not good. What I posit is that feelings improve upon this by hijacking these detectors. As the information from the detectors goes to the brain and neurons fire, chemicals (“feelings”) are released that act on receptors in the neuron to make it fire more, and the signal is amplified in a way that makes this experience stand out to you. The stimulus (fire burning tissue) is the same, and the detection of this stimulus is the same, but the feelings/chemicals act as dials to turn this detection up or down in a way that is meaningful.
    In the fire example, this can amplify the signal to strengthen the neuronal connections (which is, in fact, what many neurochemicals do). This could definitely be adaptive for the organism, and indeed those who have difficulty with feelings (i.e., autistic individuals) are at a disadvantage. It’s also important to note that in the brain, the physiological or detecting part of pain and the emotional or feeling part of pain (ACC/insula) are separate. Why would the “feeling” part evolve separately from the detection part if not to serve some adaptive purpose, such as amplification of the detection?
    Thus one explanation for why we feel could be that it amplifies our detections of the world, and this is adaptive. Maybe instead of chemicals we could have evolved some other mechanism, but we didn’t. Having four arms might be more adaptive than having only two, but we still evolved two. You can go on asking why forever, but I think we have to stop somewhere, and for me this sort of explanation is sufficient.

    1. Hi Ailish!

      Just responding to your other reply earlier and this one in the same place because I feel like I totally get your sentiments in a lot of your posts.

      So just to clarify: what you're basically saying is that feelings can be causally accounted for if we equate them with their physical causal correlates? This would be a great way to deal with a lot of issues; however, identifying the feeling of, say, itching with the specific firings of certain types of neurons actually results in some weird problems.

      What you seem to be positing (correct me if I'm wrong) is the same thing as identity theory:

      One of the issues that arises from it (which ultimately leads us to the hard problem) is that of 'multiple realizability':

      "Yet another problem for identity theory is the possibility that other species feel pain. Whether these species are actual – such as fish and spiders – or hypothetical – such as aliens from Sirius – the problem is the same. Given that these species have very different ways of realising such a sensation as pain (that is, different physical processes for registering it), how can we assume that such an experience is identical with only a certain brain state?

      There are two options here: either we assume that such creatures do not have similar experiences to us, or we admit that such conscious experiences as pain are “multiply realisable”. We shall come across this term shortly when we come to look at Functionalism, and it simply means that in theory a mind may depend upon vastly different physical mechanisms." (taken from the link I posted)

      So if we concede that feelings can be equated to anatomical processes, we end up with this annoying issue of multiple realizability (honestly you could probably try and argue against multiple realizability being a legitimate confound, but I just thought I would mention it!)

    2. oops this was the link:

  33. I like this paper because it summarizes this entire course. But I think I have realised where I disagree with a lot of the stuff we have been discussing in this course:
    “the problem is with the causal role of feeling in generating (and hence in explaining) performance, and performance capacity. Let us agree that to explain something is to provide a causal mechanism for it. The concept of force plays an essential explanatory role in current physical theory. Until/unless they are unified, there are four forces: electromagnetism, gravitation, and the strong and weak subatomic forces. There is no evidence of any further forces.”

    I am not a physicalist or a materialist or naturalist or whatever the proper word is (they are all different things, but I can’t remember the difference). I think that issues like these show precisely why our understanding of causality has to extend beyond physicalism. The issue I have with the mind/matter problem is that, by divorcing matter of a “mind” (or a vital force – I guess a “form” or a “soul” (in a non-religious sense, more like an animating principle)), we have created this mind/matter dichotomy ourselves. I have not read much philosophy but I do think Descartes and the early moderns in general are to blame for this, and if maybe we went back to more mediaeval or ancient philosophy, this would be easier to resolve.

  34. I am back to thinking that if a T3 robot passes the TT it will be feeling – but I don’t think we’ll ever be able to explain how or why—or if we’ll ever be able to even create a T3 system that passes the TT.
    T3 also encompasses our sensorimotor capacities, and it has the capacity to ground symbols. But when thinking about understanding, I think grounding is necessary but not sufficient. It’s still missing one crucial ingredient, feeling; although Searle didn’t specifically address the “feeling” problem in his CRA, he did use it in building his argument.
    I think at its core, feeling is what matters and is what gives meaning to anything that we do; if not, doings are just doings, like zombies. Only cognizers feel, and only feelers can cognize?

    Now, supposing that we can create a feeling T3 that can pass the TT: we will have generated that capacity, we will have generated the “feeling” difference, but we will still have no idea how or why it feels. But I think in the end, to successfully build a T3 robot, it will have to feel, and since we have no idea how or why we feel, building a T3 robot seems like a pretty difficult thing to do.

  35. "And although the uncertainty grows somewhat with animals as they become more and more unlike me (and especially with one-celled creatures and plants), it is very likely that all vertebrates, and probably invertebrates too, feel."

    This is an interesting idea that I touched upon in earlier skywritings, but I feel like it's relevant to this discussion.

    It relates to our precise definition of cognition: cognition is the set of processes going on inside us that allow organisms to do all the things organisms can do. It is the causal mechanism of an organism's performance capacity. My trouble is with our definition of 'organism'. Does this mean every organism? It would seem that the answer is 'no'. Earlier in the course we dismissed single-celled organisms as non-cognizing. Then at what level of organic complexity does 'cognition' officially begin? And I suppose this then ties into the hard problem:

    Cognition is felt. A causal explanation of cognition should account for one's doing capacity as well as one's feeling capacity. Feeling then lies at the heart of the problem posed above. At what level of organic complexity do we say an organism is feeling? This arises from the previous (I would argue) reasonable premise, so I think it's an important question to ask.

    The passage I cited alludes to the feeling capacity of plants and single-celled creatures (which I suppose belong to the non-vertebrate group, unless we mean non-vertebrate animals...). Sunflowers have been known to turn and face the sun, and amoebae have rudimentary photodetection. Is this sufficient to warrant the label of cognition? What about feeling? These are questions we can't truly know the answer to because of the other-minds problem, but the way we think about them is important.

    If we imagine that feeling is a property of more complex organisms, then this would suggest that feeling is an emergent property which simply... appears at some level of complexity?
    Otherwise, if we imagine that all living things feel, then this raises the question of whether our individual cells (i.e., our neurons) are feeling, as well as where, in a single-celled organism, feeling is 'located'.

    These problems are probably intrinsic to the way we are loosely defining cognition, but I wonder if there is a solution, or something obvious that I'm missing...

  36. “We can reduce just about everything that cognitive science needs to explain to three pertinent anglo-saxon gerunds – doing, feeling, and meaning.”

    I am unsure why "meaning" is a separate entity from doing and feeling. It seems like this all goes back to the symbol grounding problem.
    From my understanding, meaning is a hybrid, dependent on both feeling and doing. Aren’t doing and feeling requirements for meaning?

    Isn’t assigning meaning to symbols simply another form of “doing”, and isn’t there feeling associated with doing?

    Looking at it from another angle, perhaps we can separate the feeling from the doing? For example, with the Chinese room argument, Searle did not understand Chinese, therefore there was no understanding involved. However, the language would have conveyed the information just as well if Searle had understood Chinese.

  37. “Now we come to the "programming." AI's robot-boy is billed as being "programmed" to love. Now exactly what does it mean to be "programmed" to love? I know what a computer programme is. It is a code that, when it is run on a machine, makes the machine go into various states -- on/off, hot/cold, move/don't-move, etc. What about me? Does my heart beat because it is programmed (by my DNA) to beat, or for some other reason? What about my breathing? What about my loving? I don't mean choosing to love one person rather than another (if we can "choose" such things at all, we get into the problem of "free will," which is a bigger question than what we are considering here): I mean choosing to be able to love -- or to feel anything at all: Is our species not "programmed" for our capacity to feel by our DNA, as surely as we are programmed for our capacity to breathe or walk?”

    I find this paragraph of the article very powerful. It raises all sorts of philosophical questions whilst shedding light on the hard problem of cognitive science. The biggest assumption that the movie makes is that feelings are computational, i.e., that feelings can be evoked through inputs and outputs with the help of certain algorithms.

    “then mistreating him is not just like racism, it is racism” I’m not sure racism is the right word. Surely, there is some form of discrimination going on, but it is based purely on the fact that humans do not understand the unknown and are scared to accept something different. Regardless, in a world where AI feels, when such a novel concept is introduced it will undoubtedly face criticism (the very thought of it faces criticism today from some). We see such outcomes in movies like Her and Ex Machina, which show possible assumptions/outcomes of AI having emotions.

    Also, I completely agree with Harnad’s point: “It simply plays upon the unexamined (and probably even incoherent) stereotypes we have about such things already”. The effects of such large changes cannot be predicted, as we do not know how they will be received by society or what changes they will bring forth. To emphasize the point again, these are merely assumptions.

  38. "This film doesn't sound like a no-brainer at all, if it makes us reflect on racism, and on mistreating creatures because they are different!"

    As Harnad argues, the issue isn't that the film offers some critique of social justice or teaches us about robots; the film is a no-brainer because it just re-tells a story we already know in different words, tugging at the heartstrings. I agree with Harnad that we already know about the injustices we inflict on other creatures, and filling this package with metal insides doesn't change anything.

    Essentially, this paper nicely sums up the course, as it addresses the inherent issue of feeling in computation/robots. The underlying question is not whether feelings can actually exist in robots (probably not), but why/how feeling exist at all.