Saturday 2 January 2016

11b. Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology

Dror, I. & Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror & Harnad (Eds): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins


"Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our “Cognitive Commons,” in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature. 

65 comments:

  1. “Are technology-extended bodies all that different?”

    I mentioned this in my 11a. comment but I thought about it some more after reading this article. Where do cyborg technologies (both current and future) fit into cognition and the distinction between facilitating a mental state and being part of a mental state? For now, we have technologies such as cochlear and retinal implants which replace missing pieces (albeit only the transducing points) of our brain’s circuitry, but more advanced and more medial enhancements/replacements could feasibly occur inside the skull. When exploring cognition do we treat these like the externalized pen and paper, or are they part of our cognitive capacity and ourselves? In researching this a little bit more I found more of Clark’s work and the concept of cognitive prosthetics. I understand in this paper it is important to distinguish between input/output and the important unknown mechanisms in between, but (to me) there is a very fuzzy line between what is input and what is part of that mechanism, as was mentioned with the crane example. It seems peculiar that I am more comfortable calling a cochlear implant a part of this distributed network of cognition than a crane. There is this mental blockade that becomes stronger the farther you physically get from the brain, even though both a crane and a cochlear implant are arguably increasing the capacity of the person.
    To me, it seems completely intuitive that we can’t see conjoined twins as a single organism because people are opposed to the idea of not having cognitive autonomy in their own body. I think this is maybe because of the other minds problem insofar as if I’m sharing a body with someone but not a mind, I will never know (and therefore will have no control of) their consciousness.
    I realized about halfway through that these two papers are effectively trying to categorize cognition itself. My point therefore still stands with Clark’s argument. If anything that is part of the distributed database is part of cognition, then what is not part of cognition? If we have no counterexamples of things that cannot be cognition (“Nothing is ever causally isolated from everything else”) are we left with the same in-class dilemma we had about categorizing feelings? Or are there actual counterexamples (in Dror’s, Harnad’s or Clark’s opinion) of what could not be a part of this distributed cognition? Is there a line that should be drawn and is there any agreement as to where?

    Replies
    1. Cyborgs are just tools or toys, like vacuum cleaners, unless they are T3, in which case they have minds of their own.

      I think the question to ask to find out what is and is not part of a mental (i.e., felt) state is whether if you take it out, the felt state ceases to be felt. Notice that I said ceases to be felt rather than just changes (otherwise the sky or the earth are part of my mental state depending on which one I look at).

      So if you have a natural brain component, or an implant, either way, if you can feel if it's there (and active) and not if not, then it's part of your mental state -- or rather part of the cerebral state that generates the mental state.

      This is not a very rigorous criterion! Probably there are absurd counterexamples, such as the unconscious (and vegetative) breathing control center that has nothing to do with how your brain generates feeling, but if you turn it off, you suffocate and die!

      But it does point to the fact that it is only what is actually necessary to generate the felt state that is at issue, not its inputs, outputs, or vegetative support. That cuts out a lot of the "distributed" candidates.

  2. “At the very least, we need to pinpoint the cognizer of the distributed cognitive state. Let us say it is the user of the cognitive technology, and that what we are asking is whether the technology outside the body is part of or merely I/O to/from a narrow cognitive state inside his brain?”

    I agree with this. It also points to the biggest issue I have with calling things cognizing: “what is it that’s cognizing, and what is it/are they cognizant of?” When it comes to things like anthills, there doesn’t seem to be a satisfactory answer to those questions. However, this point also brought up a different question for me. Is it really fair to say that mental states are in our head? The mechanisms that allow for them to happen certainly are, I have no problem with that. But I feel like saying mental states are in our head is like saying driving takes place in the engine of a car. The engine of a car allows for driving, but there’s more to it than that. It seems like something is missing with this analogy but I don’t know what.

    Replies
    1. “Is it really fair to say that mental states are in our head?”

      If we come to the conclusion that mental states are felt states, which in turn are cognitive states and that the causal mechanism of how and why we do everything we do takes place in our head, then I think it is safe to say that mental states are in our head. Specifically, if you believe that the “mechanisms that allow for them to happen certainly are” in our head, then do you believe that feelings emerge from the doings taking place in our brain, yet are not located in our head? I know that we still need to explain how and why feeling emerges (the causal mechanism of it all), but I think that it does emerge and exist within our heads. I am wondering where you would say that mental states are located then?


      I agree with the article that “if cognitive states are indeed not mental states, it follows that "cognitive technology" is not just something used by cognizers, but a functional part of the cognitive states themselves, because the boundary between user and tool…” If we succumb to the thinking that cognitive technology and notebooks are extensions of our mind, then I think we will never know where to draw the line. Instead, I prefer the reasoning that although we sometimes rely on things (inputs) other than our own brains to reach a mental state, these inputs do not necessarily have to be part of the mental state that is reached. Rather, they are mere tools, as this article has pointed out. We are trying to understand the causal mechanisms of cognition and mental states. From this I believe that inputs in the form of cognitive technology will not add to the explanation of how we are able to do everything we can; they are not required for the physical implementation of cognition.

    2. I think the reason we sometimes say such odd things about where the felt state is located is, again, the hard problem: We have no idea how and why the brain causes feeling. And whereas we have no trouble saying where something we see, touch or hear is likely to be located, where the feeling itself is located is not obvious (though few people feel it's on the moon). But that's only where we feel as if feeling is located. Where it's actually located is another matter. (My hunch is that it's in the mechanism that generates the feeling, and that that mechanism is the brain...)

  3. - Approximate criterion for what performance capacities count as cognitive: conscious execution; how is this consciousness grounded in the brain? So all cognizers are conscious and all non-conscious systems are non-cognizers? T3 capacity is non-cognizing then? Since people have apparently pointed out that Turing didn’t intend for a T3 robot to have feelings, does that mean a T3 robot is not a cognizer?
    “The states consisting of the joint activity of the robot-internal and the robot-external components of the mechanism that give the robot the capacity to pass the TT would be indisputably distributed cognitive states.”
    This was the only answer given to whether a T3 robot can have cognitive states or not, yet the paper still states that this doesn’t explain whether cognitive technology is part of our cognitive state too. Then, nothing can be part of our cognitive states except for ourselves? I also don’t even get what a joint activity is, or what constitutes the internal and external components of a robot, since it all sounds so hypothetical that, to be honest, it’s impossible for me to imagine. And do vegetative states even need to be implemented in robots then, seeing as they’re unconscious states?

    - The paper points out that nonliving systems can have mental states? How?

    - Can an individual have conscious and unconscious states simultaneously? Because it seems as if the paper states the two as being two different things, yet part of one system/individual, but not two subsystems within one system. So there is no conscious system and unconscious subsystems within an individual then?

    - Is there a certain level of mental capacity needed in a living being to constitute it as capable of mental states? (E.g., are fish able to cognize, vs. humans, who are able to cognize?) Here, the only answer given was that the observable, objective properties of living systems exhaust all there is to being alive, which overcomes the other-minds problem, so living systems must have minds. Then, the answer here is observable functional capacity? Only living systems with functional capacity have minds because they’re able to “do/perform”? Living systems like trees and the Earth do not have feelings because they don’t have functional capacity? That's all there is to cognition? Thinking is doing?

    - I’m a little confused by the difference between functional capacity and performance capacity. Performance capacity is what we can do, and functional capacity is what generates performance capacity? Hence, functional capacity is how we can do what we do?

    - So distinct minds constitute distinct systems, as in the Siamese-twins example, but seeing as much of our mind is made up of views that are shared by others, to what extent are we distinct? Aren’t individual minds made up from distributed minds, as in we learn from what views are shared by others? The answer given was that it is us, the users/cognizers of distributed cognitive minds/technology, and our narrow sensorimotor states. However, what is the distinction between narrow and wide then, and where is the boundary? Inside and outside the mind, right? But then, what constitutes an individual mind? Isn’t the individual mind a distributed whole of individual components? This has become a circular question…

    Replies
    1. - What constitutes a connection between individual components that gives rise to a distributed system? For example, how does the slime mold count as distributed life? The causal interactions of individual modules to form a whole that is bigger than the individuals’ sensorimotor capacity? How can we say something is distributed if it is made up of individual modules, yet the distributed whole can be perceived as autonomous and as an individual kind while lacking certain capacities, like feeling, that its individual components have?
      The only explanation given, such as why the Earth does not have a mind, is that a whole may not have feeling or know what it’s like to have a migraine. Doesn’t that place a lot of constraints on what constitutes feeling? It seems as if this states that to be feeling, you must absolutely have a certain capacity, and that’s it; any more or any less does not constitute a system that feels or thinks, and it seems that that capacity is whatever is inside the mind, which is inside the brain. Thus, what is the capacity needed for feeling?

      - I’m still confused by what counts as cognitive technology then. Any kind, whether natural or artificial, is a noncognitive technology even if its affordances make our lives more convenient. Then what is this seemingly abstract idea of cognitive technology? I get why language is one, but if someone asked me for a definition of cognitive technology, I would not be able to give one except for examples and for the impact it has on our mind.

    2. Hey Oliver, you ask a lot of good questions here, and I’m only going to touch on a few because a lot of them leave me perplexed as well. I’ll also break my comment up based on the questions, as it ended up being too long for a single post.

      Can an individual have conscious and unconscious states simultaneously?

      To start, I find the terms “conscious” and “unconscious” tend to make the distinction between them unclear. But yes, I would say so. For an easy example, think about breathing versus, well, any conscious activity. Even when I’m thinking about what I should cook for dinner, I’m still breathing, although I’m not doing so consciously. It’s easy to see how planning dinner is conscious while breathing is not, but this is not always the case. Consider the example that Dr. Harnad presents towards the end of the article: what happens when you evaluate 7 x 9:

      When you say to yourself “what is seven times nine?” and then “sixty-three” pops up, you are certainly conscious of thinking “sixty-three.” So that’s definitely mental, and so is the brain state that corresponds to your thinking “sixty-three.” But what about the brain state that actually found and delivered the “sixty-three”? You are certainly not conscious of that, (page 16)

      So it’s not just that conscious and unconscious states can exist simultaneously, they are often intrinsically connected to each other, to the point where you can’t get one without the other. In cases like the 7 x 9 example above, they need to exist simultaneously, and we are only able to do what it is we do because both processes happen at the same time. However, none of this answers your question about how these two systems co-exist:

      Because it seems as if the paper states the two as being two different things, yet part of one system/individual, but not two subsystems within one system. So there is no conscious system and unconscious subsystems within an individual then?

      I agree with your interpretation here. To me, the paper argues that conscious and unconscious systems are so fundamentally distinct from one another that they cannot be considered subsystems of the same overarching system. Describing them as subsystems implies that one is connected to the other through some kind of hierarchical relationship. This is not the case; the two systems function 100% independently, although their functions do often overlap to produce the types of states that are familiar to us. They are not two faces of the same coin, but two entirely different coins. Both of these coins can and do exist within the same individual, but it’s important to note that they are still completely separate entities. So you’re right, there are no conscious and unconscious subsystems within an individual, because you can’t consider them each a part of the same system. Both conscious and unconscious systems exist within an individual, but the individual is not a hierarchical structure linking the two together. That sub- prefix is absolutely crucial here.

    3. Is there a certain level of mental capacity needed in a living being to constitute it as capable of mental states? (E.g., are fish able to cognize, vs. humans, who are able to cognize?) Here, the only answer given was that the observable, objective properties of living systems exhaust all there is to being alive, which overcomes the other-minds problem, so living systems must have minds. Then, the answer here is observable functional capacity?

      While that is the only answer given here, I do not believe it is the only answer there is, or whether others consider it a valid answer at all. I feel like this line of reasoning could have benefited from being prefaced with a “Stevan says” disclaimer (although I guess it is implied considering that Stevan wrote this article). Nevertheless, it seems that to Stevan, something observably demonstrating all the properties of having a mind is enough to conclude that this something does in fact have a mind, despite the other-minds problem. In other words, the fact that something acts like it has a mind trumps the fact that we will never be able to tell if it actually has a mind. It’s why we don’t kick our T3 robots Riona and Renuka. Because they act like they have a mind, we treat them like they have a mind, even if we don’t actually know that they have a mind.

      Only living systems with functional capacity have minds because they’re able to “do/perform”? Living systems like trees and the Earth do not have feelings because they don’t have functional capacity? That's all there is to cognition? Thinking is doing?

      It’s important to note that this line of reasoning is just a way of identifying cognition. So it’s not just that “thinking is doing,” it’s that functional capacity seems to be the only way we have of evaluating cognition, at least for now. And some people may not consider this an adequate answer either. To them, the fact that something acts like it has a mind may not trump the fact that we will never be able to tell if it actually has a mind. There may be ways of surpassing the other-minds dilemma, but to them functional capacity is not one of them. Not to Stevan, though. To Stevan, the functional capacity is more important than the uncertainty. And I would agree. If Riona and Renuka act like my kick hurts them, I wouldn’t kick them, because I don’t want to actually hurt them, even though I’m not sure they would be feeling pain in the first place. They act like they feel pain, and that’s enough to overcome my uncertainty about whether they actually feel pain or not.

      So to answer the big question here, yes, fish are able to cognize. Why? Well, because they act like they cognize. If I poke a fish in a tank, it will swim to the other side, suggesting it felt pain from my poke. This is only a suggestion, and it will never be a definite answer because of the other-minds problem, but it is enough for me. And for Stevan as well, according to this article. That’s why “Stevan says” animals can feel. However, not everyone agrees. For some people, the other-minds problem is more important, and it seems silly to assume something can feel based on its functional capacity alone. These people need a different explanation to overcome their uncertainty. The weight you place on each factor is up to you, and if you are unsure of where you stand in this regard, I think a single question will clear things up for you. Would you kick Riona and Renuka?

    4. About whether fish can feel, see: Key, Brian (2016) Why fish do not feel pain. Animal Sentience 2016.003, and especially the 32 critical commentaries (skywriting!).

      About whom the other-minds problem is more important for, you or the other, see: Harnad, Stevan (2016) Animal sentience: The other-minds problem. Animal Sentience 2016.001

    5. @Alex: Thank you for answering my question about whether other organisms can think or not. I'm just a little confused as to what the difference between low- and high-level cognitive processes is, and to what extent we consider something capable of thought rather than just functional capacity. But I guess it's like you said: with the other-minds problem, we'll never know.

      And in regards to your consciousness reply: if the individual is not the system that links the two independent systems together, then where do these two systems exist? What is the connection that you speak of? Is it the individual? Is consciousness thus not a continuous spectrum, seeing as consciousness and unconsciousness are two independent systems? Since one is either conscious or unconscious, is feeling a discrete state? It's an all-or-nothing phenomenon, but that means there are rules implemented in our brain to switch from a conscious to an unconscious state, and vice versa. The logical implication of feeling and not-feeling as states suggests that they can be simulated by computation via the CT-thesis. Is this true or no?
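      If it helps to make the all-or-none idea concrete, here is a minimal finite-state sketch (my own hypothetical illustration, not from the paper or this thread). Being in a felt vs. unfelt state is discrete, and transition rules of this kind are exactly the sort of thing computation can simulate -- though simulating the switching says nothing about whether anything is actually felt (the hard problem):

      ```python
      # Hypothetical sketch: feeling as a discrete, all-or-none state.
      FELT, UNFELT = "felt", "unfelt"

      # Illustrative transition rules (assumed for the example, not from the paper).
      transitions = {
          (UNFELT, "wake"): FELT,
          (FELT, "fall asleep"): UNFELT,
          (FELT, "anesthesia"): UNFELT,
      }

      def step(state, event):
          """Apply a transition rule; stay in the same state if none matches."""
          return transitions.get((state, event), state)

      state = UNFELT
      for event in ["wake", "anesthesia", "wake"]:
          state = step(state, event)
          print(event, "->", state)  # always exactly FELT or UNFELT, nothing in between
      ```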

    6. 1. The vegetative/cognitive distinction is arbitrary, but our intuition as to what is more cognitive is that we tend to do it consciously, even though we are not conscious of (can't feel) how we're doing it.

      2. Organisms have felt states and unfelt states. The things going on in unfelt states could just as well be going on outside the head, as input or remote storage and processing. Yes, whether a state is felt or unfelt is all-or-none. But not all that is going on in the brain of an organism who is in a felt state is part of that felt state; vegetative function (controlling temperature, blood-pressure, blood-sugar, etc.) is not part of the felt state; neither are the unfelt processes that simply deliver their outputs to the felt state as inputs. (And this is another good moment to reflect on the hard problem: Why bother delivering to a feeler, rather than just doing what needs to be done?)

  4. “Locomotion itself, inasmuch as it includes the movements of parts and not just the whole of the organism, covers everything that we are able to do; and that, in turn, extends naturally to all of our cognitive capacities – what we are able to think, deduce, understand – encompassing also the internal mechanisms that generate those capacities.”

    This point (or maybe just the semantics/word choice) confused me. You said locomotion itself covers everything we are able to do and that extends to our cognitive capacities. Does that not mean locomotion is cognition? And since my understanding is that locomotion is dynamic, does that not mean that all of cognition is dynamic? Or was that description separating cognition as the capacity and mechanism, and locomotion as the set of all actions that the mechanism can do?

    “cognition is whatever gives cognitive systems the capacity to do what they can do.”

    This definition keeps appearing to be inadequate. Are plants thus not cognizers? It seems that literally anything that can DO anything should thus be cognizing? The next paragraph itself creates the distinguishing characteristics of what is and is not cognition, yet the definition that we use and reuse does not hint at this. So why has our definition not been updated? Instead, whenever the definition arises, another paragraph is needed to explain it or add amendments that get at its real meaning. Should we not have been including in our definition “so long as what we do is conscious and non-automatic”? It seems this is intentionally left out to prevent argument and critique, although it is necessary to your argument and opinion on the matter.

    “There can be living organisms that have no mental states and there can be nonliving systems that do have mental states.”

    “Until further notice, “conscious states” is synonymous with “mental states.”

    It seems here that you are admitting robots could develop consciousness. This seems odd to me because my understanding from other writings and from class is that you did not believe such a thing would be possible for us to create, as that would require solving the hard problem, which you repeatedly declare insoluble. I find it odd that this sentence was just snuck in there matter-of-factly when it has taken entire papers to fully write out your opposition to the matter.

    Replies
    1. 1. No, moving (doing) itself is not cognition. But what generates (causes) moving is the mechanism of our cognitive (and vegetative) capacity. (Yes, all moving is dynamic.)

      2. I would call plant function vegetative (e.g. growth, photosynthesis, phototropism, reproduction, immunology, defence, signalling) rather than cognitive.

      3. We are not yet "defining" cognition (cognitive science will have to do that after the easy problem is solved, T3 is passed, and we have reverse-engineered the mechanism). Right now we are just pointing to it.

      4. A "robot" is just a causal system with certain capacities. Organisms, including ourselves, are hence "robots." If we build a robot that can pass T3 or T4 (Renuka or Riona), Turing says it cognizes and we can't do any better, so we should either assume it feels or not worry about whether or not if does. We've explained as much as can be explained.

      5. You don't have to solve the hard problem to build a robot that passes T3 or T4; just the easy problem.

  5. When one uses a calculator to find the product of a calculation, I do not think that that person is entirely relying on the calculator (rather than their brain) to cognize. The person is instead just using the calculator as assistance in the calculation, or as a fast check of their own calculations. The person has the ability and the capacity to calculate what is input into the calculator; he or she might just choose to use a calculator at that particular time to save time or energy.

    This is an example of what was explained in the paper: the fact that one relies on something other than one's brain to arrive at a certain mental state does not mean that the things used to arrive at that mental state are actually part of the mental state.

    Just typing this out now made me wonder about this hypothetical example:

    What if you give a child a calculator and he or she inputs many different calculations. After a while, the child eventually starts to remember the answers for particular functions. For example, eventually, the child memorizes the answers to the inputs 10^0, 10^1, 10^2 etc., but doesn’t actually understand HOW the calculator arrived at that answer. Are the child and the calculator still arriving at different mental states? Neither of them truly understood how the calculations arrived at the product. Cognition relies on input and both the child and the calculator are receiving input.
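    To make the contrast in this hypothetical concrete, here is a minimal sketch (my own illustration, not from the paper or this thread) of rote recall versus actually executing a procedure. On the memorized inputs the two are indistinguishable from the outside, but only one contains the "how" of the calculation:

    ```python
    # Hypothetical sketch: the child's rote memory vs. the calculator's procedure.

    # The child's memorized answers: a bare lookup table, no procedure inside.
    memorized = {(10, 0): 1, (10, 1): 10, (10, 2): 100}

    def child_answer(base, exponent):
        """Answer by rote recall, like Searle following a recipe."""
        return memorized[(base, exponent)]  # KeyError on anything not memorized

    def calculator_answer(base, exponent):
        """Answer by actually carrying out the exponentiation."""
        result = 1
        for _ in range(exponent):
            result *= base
        return result

    # Identical input/output on the memorized cases:
    for exp in range(3):
        assert child_answer(10, exp) == calculator_answer(10, exp)

    print(calculator_answer(10, 3))  # 1000: the procedure generalizes
    # child_answer(10, 3) raises a KeyError: recall with no "how" behind it
    ```

    Neither function feels or understands anything, of course; the point is only that the same I/O can hide very different internal mechanisms.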

    Replies
    1. The child you describe is doing the calculations by recipe (like Searle in the Chinese Room), not understanding what it is doing.

  6. “Siamese twins with only one body, that even if Biology were to tell us that they were one single organism, they would still be two distinct cognizers, if they had two distinct minds: They would not have one, shared mind, even though they did have one, shared body. And if they had a migraine, it would be two migraines, even if it was implemented in one and the same head -- just as when something is a 'headache' for the US Congress, it is at most N distinct headaches in the heads of N distinct congressmen, with no further superordinate entity feeling an N+1st headache (or feeling anything at all). There is no such thing as a distributed migraine – or, rather, a migraine cannot be distributed more widely than one head. And as migraines go, so goes cognizing too -- and with it cognition: Cognition cannot be distributed more widely than a head -- not if a cognitive state is a mental state.”

    I see what the article is getting at here: cognizing can only take place in that cognizer’s head. So even if we consider something one autonomous system, the minds in it cognize individually and feel differently, if they feel at all. My problem with the above statement is that thoughts can be transferred from person to person, as can emotions. We are interconnected in the sense that what one does to another affects how that other feels and cognizes. Just to be clear, I don’t think that all of cognition can be distributed more widely than a head, but I do think that certain aspects of cognition can. If I hear someone produce a tone, and I repeat that tone for you to hear, your brain will be cognizing similarly to how mine was when I heard the same tone. Maybe the feelings you associate with the tone will differ, but the cognition that occurs with hearing processes will be the same. If that example is not convincing enough, because it is more of a performance capacity, I think we can distribute cognition by sharing feelings. When we smile we feel happier. When we smile at others, their mirror neurons fire and they smile back, sometimes. When they smile they will feel slightly happier too. This is distributing a positive feeling from one brain to another.

    Replies
    1. Sharing ideas or emotions is not distributed cognition. It is two separate cognizers sharing ideas or emotions. It would be distributed cognition if they were just one cognizer with two heads. Then there would not be a "they" to be doing the sharing, just one cognizer with two heads (just like we all have two cerebral hemispheres with cognition "distributed" between them).

  7. "Being an organism was conflated, animistically, with having a mind. This is an
    error; living and feeling are not necessarily the same thing. There can be living
    organisms that have no mental states and there can be nonliving systems that do have mental states."

    How can one confidently state that there can be nonliving systems that have mental states? As we're wrapping up the course, I've realized that although we have grand plans for how to establish tests for cognition, we have yet to build T3, 4, 5 sensorimotor robots. Considering all our technological advancements, maybe this is simply a mission impossible?

    I'm just as hesitant to say that non-living systems can have a mind as I am in saying that some living organisms do not have a mind. Where do we draw the line in non-human animals? Are cognition, consciousness, and having a feeling an all-or-none thing? Is it possible that cognition/consciousness "matured" and developed evolutionarily to the point of completion in humans, leaving "incomplete" stages of development in more primitive organisms? Is it related to the development of intelligence?

    Speaking of intelligence, this leads me to think about the role of language in determining who has mental capacity. I think that in our society we conflate having consciousness/cognition/feeling with having the capacity for language. Because a gorilla has some limited propositional pantomime abilities, we consider it as having more mental capacity, expression, and feeling than a cow, which can't use language to communicate and thus cannot communicate its (possible) cognition/mental state.

    Replies
    1. You are certainly right that the TT is nowhere near being passed today. And I agree that we cannot know that nonliving things can have a mind. We also do not know that they cannot. The point in the passage you cited was just that being alive and having a mind are not synonymous.

      Being conscious = having a mind = being able to feel.

      Just as a mental state (i.e., a felt state) cannot (until further notice) be distributed wider than an organism's head, so having a mental state is not a matter of degree: An organism either can feel or it can't. There is no such thing as being able to 34.5% feel. You can feel 34.5% of the time; or you can feel 34.5% of what another organism can feel (because you are color-blind, tone-deaf, blind, deaf, autistic, paraplegic, sleepy, an octopus, or a bat), but feeling itself is all or none: Either you feel or you don't (at a particular moment: the rest is just memory, felt now, like at the Princeton gas-station at Xmas). Plants presumably don't feel: at all -- not less; none. Ditto (presumably) for a human being in a chronic vegetative state (presumably -- and if not, then they do 100% feel something, sometimes, but they still don't 34.5% feel).

      Whether or not an organism is conscious (i.e., feels) has absolutely nothing to do with how intelligent it is (though intelligence, like color-vision, influences some of what it can or cannot feel).

    2. So feeling is like being alive in that they are bad categories because we would only ever be able to understand what it is like to feel or be alive.

      I do not think I understand how it is possible that whether or not an organism is conscious has absolutely nothing to do with how intelligent it is. Is the word "conscious" here supposed to mean something similar to humans in terms of capacity for language? It seems like what you continue to echo in the worm examples (nematodes) is that if something has the capacity to detect, we cannot be sure it cannot feel, so then it is conscious.
      I wanted to mention this after we talked about nervous systems being the criterion for things that "can detect and can feel" as opposed to things that can just detect.
      As we said before, a sunflower moves in the direction of the sun, morning glories only bloom in the day, and so on. I don't want to just sound like a new-age hippie, but I do not understand why we must depend on the brain and nervous system to say something is conscious... not saying I'm into Fodor or anything, but it seems to me a weird distinction.

    3. Hey Julia, you raise some interesting points that have me a little perplexed as well!

      From what we've learned in the course, the state of being conscious is merely the state of feeling, and human capacity for language is a cognitive capacity (a subset of our doing capacity). Generally (in lay terms) we use the term 'conscious' in the sense of being 'self-conscious' or having the capacity to reflect (requiring language), which is also a cause for a lot of the confusion with regard to affording 'consciousness' to animals.
      With regard to a nervous system being a requirement for consciousness (and thus not affording conscious states to plants), I would say that this is based on similarity judgement (using an evolutionary framework as a medium for categorizing types of living entities in the world, i.e. conscious living entities vs. non-conscious living entities).
      From what Professor Harnad states above, "Plants 'presumably' don't feel", presumably being the key word. Our own (human) biology has developed in line with that of other animals and we are thus able to detect consciousness in them due to our similar biological traits (I don't want to go out on a limb here, but that is presumably somewhat related to mirror neurons and our other biological features that perform similar capacities). A lot (if not all) of non-human animals also have this capacity, though they are not 'asking' themselves constantly whether or not the human (or other animal) in front of them is conscious.

    4. Julia: "is conscious" means feels. I am not sure what you mean by "intelligent." I would say it has to do with what you can do. More intelligent organisms can do more things. Language is a remarkable thing to be able to do, but there are plenty of other things too. And not a hint of a hint of how or why you would have to feel to be able to do any of them. (The point about the marine worm is that it does not just detect damage, like a t1 robot, but it feels pain. Not everything that can detect can feel! Hard question: how? why? Nothing to do with either language or intelligence. Just doing capacity and feeling.)

      Do you feel that every dynamic system feels? Or every living dynamic system? You may or may not be right. But if we go with the odds, it's much more likely to be the ones that are more like us (in both their outer and inner doings, especially neural). Yes, the other-minds problem means we can't be sure, even with people, but which do you think is more likely to feel: a rooster or a rhubarb? How much more likely? And a rhubarb vs. a toy robot? Or a rock?

      Naima: If you had attended Frans De Waal's talk you would not be so sure language is needed to reflect! (And reflection just means thinking about X, whether it's "where's the food" or "cogito ergo sum." Self-reflection is fun, but over-rated. The marine worm has the full "hard problem" if it feels anything at all, regardless of whether it can do self-reflection -- or play tennis, for that matter...)

  8. I'm a bit confused about how the article defines/differentiates 'mental states' and 'cognitive states' - I feel like the definition circled back on itself a number of times, so maybe someone can clarify it.

    The article reads: a mental state is simply a felt state - okay, seems simple enough, but what then is a cognitive state exactly? Are cognitive states a kind of subset of mental states, or the other way around? I feel like I'm unable to appreciate the argument that cognitive states are equivalent to mental states without having a clear-cut definition of what is actually meant by 'cognitive state'. As it stands, it just feels like the article defines cognitive state as a restatement of mental state, in which case there is no argument to be had... they are the same because the article chose to define them that way.

    For example: ... we argued that inasmuch as cognition is mentation (i.e., insofar as cognizing is thinking), there can only be distributed cognitive states where there can be distributed mental states.
    What is even up for debate here? If you just define cognitive states as mental states, of course the second thought naturally follows. I'm just at a loss as to how this is an argument for anything, if from the start the authors simply define the two terms as identical.

    And then, if the definition we were to accept is that mental states = cognitive states = 'being conscious while brain states are being implemented', then how on Earth can we assert that there can be nonliving systems that do have mental states?

    Hopefully someone can clarify this for me - for now I'm just left feeling as though this is all a bit of a non-starter.

    Replies
    1. Hi Adrienne,

      This is how I understand it.

      ”The article reads: a mental state is simply a felt state - okay, seems simple enough, but what then is a cognitive state exactly? Are cognitive states a kind of subset of mental states, or the other way around?”

      Mental states are conscious (felt) states. Cognitive states are functional states that play a role in what a cognizer does. The question is whether the cognitive capacities of a cognizer (what it can do) necessarily generate its capacity to feel. The authors take all mental states as cognitive states, and vice versa. But they are not merely defined in terms of each other. They have distinct definitions but they are taken to always occur alongside each other.

      ”’For example: ... we argued that inasmuch as cognition is mentation (i.e., insofar as cognizing is thinking), there can only be distributed cognitive states where there can be distributed mental states.’

      What is even up for debate here? If you just define cognitive states as mental states, of course the second thought naturally follows.”


      If we conclude that there can be no cognition without feeling, then this has implications for how we view the possibility of distributed cognition. In this case, distributed feeling must be possible for distributed cognition to be possible.

      “And then, if the definition we were to accept is that mental states = cognitive states = 'being conscious while brain states are being implemented', then how on Earth can we assert that there can be nonliving systems that do have mental states?”

      Since we did not restrict the definition of cognitive states to brain states, the possibility remains open that a cognizer can do what it can do by means other than the operation of an actual brain. In sum, if something can do all that a cognizer can do, we have no grounds on which to say that it is not cognizing, and therefore not feeling, even if it does not have a brain as such.

    2. 1. Cognitive processes are (a) those processes going on inside your brain that make you able to do all the things you can do and (b) those that make you able to feel.

      2. Mental states -- i.e., felt states -- are a subset of cognitive states. There are both felt and unfelt cognitive states. A mental state is a state that it feels like something to be in.

      3. Much of what you are able to do (e.g., remembering the name of your 3rd grade schoolteacher) is generated by unfelt processes: processes that are usually going on while you are awake and feeling -- so they are happening while you are in a felt state -- but you cannot actually feel what they are doing or how.

      4. The difference between a cognitive and a vegetative process is somewhat arbitrary, but the difference between a felt and an unfelt state is definitely not arbitrary.

      5. If all states were unfelt states -- in other words, if all living organisms were feelingless zombies, like bacteria or plants -- then we would not need the word "cognitive." Or rather "cognitive" would just mean whatever processes made organisms able to do whatever they can do. The distinction between "cognitive" and "vegetative" processes would become even more arbitrary, and so would the distinction between internal and external processes. In a feelingless system, nothing much would be at stake if we asked whether a process or state was "cognitive" or "vegetative," input or output, internal or external or distributed (except engineering questions about what makes a system, like a vacuum cleaner, airplane, computer or robot, an "autonomous" system).

      6. But with a feeling organism and its felt states, we can ask: what and where are the processes that generate its felt states? We can call both the processes that generate its doings and the processes that generate its feelings "cognitive processes" or "cognition" if we like, but the only cognitive processes for which their location would really matter would be the processes that generate the feeling. Those would be the processes that, if you turned them off or removed them, the system would become a feelingless zombie. It is about those cognitive processes that generate the felt states (and not necessarily the ones that generate the doings) that we ask: Where are they happening? inside the organism or outside?

      Clark & Chalmers, in speculating about "distributed cognition," do not make the distinction between felt and unfelt cognitive states and processes. They talk about the location of the feeling-generators as if their location for feeling organisms were as arbitrary as the location of the doing-generators for zombies.


  9. When does it stop being I/O?

    Drs. Harnad and Dror use the examples of a car and a telescope as counterpoints to the extended mind thesis. A car doesn’t become an extension of our locomotive systems, even if it may feel like it, and a telescope doesn’t extend our vision but rather gives I/O that our brain adapts to by generating a certain feeling.

    But at what point does it stop being simple I/O, and how does cognition as a task (as opposed to locomotion or vision) affect this? As we know, cognition must also involve some sort of symbol-grounding function/module/process/ability, and thus is inextricably linked with our environment. We learn through the manipulation of our environments. Therefore, the environment becomes not just a tool (like a car or telescope) but a required, functional part of cognition. At the moment we rely on what is outside our skin to deliver information about the environment. It is conceivable, however, that we develop new ways of delivering information (embedded google glass, perhaps). This would then connect us to the environment in a new way that goes beyond our native I/O, with some cognitive processes being offloaded in a way that is imperceptible to the user. Would this not then mean that the brain is not necessarily extended if we are not aware of I/O processes? I would argue that this is just like how some blind people, having adopted their stick as a new sensory appendage, cannot sense as well without it.



    Could the internet have mental states?

    Anthropomorphism serves a useful function in discourse; we use it as a tool to effectively communicate putative summative characteristics: a group can be angry, a ship (referring to the system of sailors onboard) can be afraid, a political party can be vengeful, etc. To what degree we really mean that a system holds this emotion is questionable. Of course, there is nothing that it feels like for a group of elephants to have a migraine; I assume this is impossible. But there may be something that it feels like to be a member of a group that has a certain emotion, where this emotion is an epiphenomenon irreducible to any one of its members. Of course this group is not reduced to the minimal level in terms of feeling, but it does have an emotion that can only be experienced when we consider the group as a collective entity. We can refer to the group as having this emotion, which is just as good as what we can do with other humans (other-minds problem).

    As mass-communication technology becomes more integrated into our society, the question of the possibility of real emergence becomes more and more relevant. If we allow language as a tool to offload cognition and recognize that a notebook can replace memory, what happens when communication with other cognizers and databases feels nearly as instantaneous as our own thoughts?

    Just for interest’s sake, a few (qualitative) studies have explored collective memory in online groups:
    http://somatosphere.net/2015/04/varieties-of-tulpa-experiences-sentient-imaginary-friends-embodied-joint-attention-and-hypnotic-sociality-in-a-wired-world.html and Leibing, Annette. "Lessening the Evils, Online." Science Studies 22.2 (2009): 80-101.

    Replies
    1. 1. I like the connection you make between symbol-grounding and the internal/external/distributed question. I would just add that grounding is not the same as meaning. Grounded T3 (or T4) robots could possibly be unfeeling zombies. And for zombies, the internal/external/distributed distinction becomes moot or arbitrary (except for engineering questions about the boundaries of an autonomous system). But meaning (hence cognition) is grounding (doing) plus feeling. And for felt states the internal/external/distributed question becomes much more fundamental: it concerns the nature and location of the processes that generate the felt state -- the ones that, if you removed them or turned them off, would turn the system into a zombie.

      2. It may be useful to talk about a group of people as if they had a collective emotion, but we all know that what we really mean is just that each individual has some emotion and that the result of their interaction is what makes the group do what it does, not something felt by the group as a feeling entity: the "group" cannot really feel anything.

      3. I'd say the same for speeded-up internet interactions: They're still interactions between individual feeling (and nonfeeling) entities. There is no joint feeling, or joint feeling entity, over and above the individual feelings of the individual feeling entities. (The same seems to apply to the Tulpa link you mention. It's always a good idea to try to distinguish what something really is from what it feels as if it is, especially when it comes to who or what is feeling what. The same self/other boundary that creates the other-minds problem puts bounds on speculations about extra super-ordinate feelers distributed across and made up of individual feelers.)

    2. "what is feels like it is" is all we have, anyways.

  10. ‘Does the fact that cognizing is a conscious mental state, yet we are unconscious of its underlying functional mechanism, mean that the underlying functional mechanism could include Google, Wikipedia, software agents and other human cognizers’ heads after all?’

    Theoretically, I feel like our lack of explanation does allow for some forgiveness in postulating what might be sufficient to complete the necessary tasks of cognition, but I am still left with the feeling that ‘Google, Wikipedia, software agents’ are only explaining a potential mechanism for how cognition may operate, not why the explanation is so hard to pin down in the first place. I feel like the unconscious aspect of cognition is a key feature of its uniqueness. That’s not to say that future endeavours in determining what mechanisms underlie cognition are futile, simply that I don’t think a human being, plagued by the same unconscious experiences of cognizing, will be able to find an explanation.

    Replies
    1. But do you have any reasons for believing this "easy problem" will not prove solvable? (There are at least reasons for believing the hard one's not.)

  11. “Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power.”

    I think I can really understand the distinction between offloading cognitive function and endowing physical objects with the capacity to feel.

    When we talk about offloading, I really like to think about how computation is implementation-independent. This means a Mac or a PC can be a word processor – physical states can change, but the end result is the same. I think that’s similar to when we talk about using a calculator or simply doing the calculation in our heads. The computation of the end result, 4, doesn’t change whether I do the right process in my head or in the calculator.
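    As a toy illustration of that implementation-independence point (my own sketch, not from the article or this thread), the same computation can be realized by two physically different procedures, and the formal result is identical either way:

    ```python
    # Hypothetical sketch: one computation, two different implementations.

    def add_native(a, b):
        """Addition as the machine's built-in integer operation."""
        return a + b

    def add_by_counting(a, b):
        """Addition as repeated increment: a different physical realization."""
        total = a
        for _ in range(b):
            total += 1
        return total

    # Different dynamics "under the hood," same computation: 2 + 2 = 4 either way.
    assert add_native(2, 2) == add_by_counting(2, 2) == 4
    ```

    Whether the steps are run in a head, on a Mac, or in a pocket calculator, the computation is the same; only the physical system implementing it differs.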

    But seriously, just because we use something to extend our performance capacity does not mean that thing suddenly has the ability to feel pain. Sure, I feel like the calculator gave me the answer, but if I pinch the calculator it probably won’t have a terrible calculator-tantrum. Or, if someone else pinches the calculator, I won’t even blink an eye, let alone feel pain as part of my own cognizing system. I do think it’s totally cool how technology helps people (duh). But changing a battery on a piece of technology will never require anesthesia.

    Replies
    1. The "distributed cognition" question would not be about whether the calculator feels X, but whether it is part of your felt state when you feel X.

  12. "If you are driving a car, is that an extended sensorimotor state, in which your body is moving at speeds in excess of what it can manage alone, narrowly? The wider, distributed sensorimotor state might include the car and its locomotor capacity. Or is it just output from your narrow, skin-and-in sensorimotor state (in this case a slow movement of your foot on the pedal) – output augmented by the horse-power of our external vehicle?” (Drop & Harnad 2009)

    My instincts had told me this was the first argument I should have arrived at to argue against the “extended cognition hypothesis.” Simply put, cognitive technology cannot be coupled with the individual as a cognitive system, because these things are simply tools that we use: they provide us input or allow us certain outputs that are based in cognitive processes in our head. As the simplicity of the car example shows, it would be ridiculous to argue that driving is a sensorimotor state that the car and I enjoy because we are a coupled system. The car's driving is a product of people who have engineered a system that responds to the output of my motor activity on the pedals and on the wheel while I am using my sensory capacities to monitor the car and the road. Therefore, I, an autonomous agent, am acting on the car to produce this end. Similarly, when we discuss cognitive technology like Google or the web, these things are tools that augment the input or output I can cognitively process, but they themselves are not part of cognition. For them to be not simply things I cognitively manipulate through my sensorimotor capacities to augment my performance capacity, but in fact a coherent cognitive system with me, they must be able to contribute causally to the cognitive states of the system. Otherwise, they just remain tools that I use to magnify my cognitive performance. I do not believe these things could provide cognitive states, since most of them just represent computational systems that we interpret, so I cannot accept that they somehow form an extended form of cognition.

    An analogy can be provided by a futuristic prosthetic arm that responds to the cognitive activity in one’s head, such that it behaves and does what a regular arm does. In this case, it is not a tool that I manipulate through my sensorimotor capacities, but actually a part of my sensorimotor state. The arm causally contributes a sensorimotor capacity to this coupled system.

    Replies
    1. 1. Neither a prosthetic arm nor a real arm is part of my cognitive state.

      2. The right question to ask (about whether X is part of my cognitive state) is whether it is (a) part of what generates my feeling, or just (b) part of what generates my doing. If just (b), then it could just as well have been offloaded on cognitive technology. (If anyone strongly feels this is not true, then they must have a solution to the hard problem, because until that's solved, no one knows why everything can't be offloaded: With a zombie, there's no difference between what's inside and outside -- or, if there is, it's for trivial reasons of "functional autonomy" only.)

  13. “(13) Both sensorimotor technology and cognitive technology extend our bodies’ and brains’ performance capacities as well as giving us the feeling of being able to do more than just our bodies and brains alone can do.”
    This quotation emphasizes the feeling part of extended cognition. If the notebook extends a man’s cognition, it doesn’t just feel different to the man because he knows the answer: he is actually doing different things and has a different performance capacity. This “issue” seems to suggest that some people believe that using cognitive technology just changes the “feeling” aspect of cognition, but it also changes the doing!
    “What about a coral colony, or, better, an ant colony?”
    When discussing distributed cognition the authors provide several examples of systems in the world that consist of many individual organisms functioning as one. One example that would be interesting for cognitive scientists to argue over is the beehive. In the hive, different bees have different functions (the queen, foragers, nurses of the young, etc.). The authors would likely dismiss the argument for the beehive as a thinking system, because these bees can be assumed to each have their own “mind”; in addition, some would say the hive has no mind. However, there have been discussions of the “hive mind” and how bees are able to mutually understand certain things about the world without any detectable communication.

    Replies
    1. Ants and bees probably feel, but do ant colonies or hives feel too? (I'm not sure whether coral polyps have neural function: I think not.) We have to separate distributed doing from distributed feeling.

  14. The bulk of this article for me is summed up really well in the points from the introduction following this paragraph. And I'm in agreement with everything leading up to the example of something like a Google search serving up an answer on a silver platter (much like the name of our 3rd grade teacher) being part of cognition in a distributed mind. After thinking about this "exercise left to the reader," I'm not convinced it is true, based on point 16 from the introduction quoted below. Otherwise, I think epigenetics is a great analogy (even though it seems as though epigenetics is a species adapting to its environment, whereas the internet and our Cognitive Commons is us adapting our environment to ourselves).

    “(11) Just as we can see further with telescopes, move faster with cars, and do more with laser microsurgery than we can do with just our unaided hands and heads, so we can think faster and further, and do more, with language, books, calculators, computers, the web, algorithms, software agents, plus whatever is in the heads of other cognizers.
    (13) Both sensorimotor technology and cognitive technology extend our bodies’ and brains’ performance capacities as well as giving us the feeling of being able to do more than just our bodies and brains alone can do.

    (16) Hence, although sensorimotor and cognitive technology can undeniably extend our bodies’ sensorimotor and cognitive performance powers in the outside world, only their sensorimotor input and output contact points with our bodies are part of our cognitive (= mental) state, not the parts that extend beyond.”

    One small point I did have a question about though was this:

    “There can be living organisms that have no mental states and there can be nonliving systems that do have mental states.”

    What’s an example of the latter?

    Replies
    1. I could be totally off here, but Searle was part of a nonliving system that did have a mental state (the feeling of not understanding Chinese). It's not like the whole Chinese Room was living - he was the only living thing in it...

      But I think you could also use an example like the ones they talk about in the other article, where a mental state is attributed to a nonliving object by extending my cognition through the use of that object

    2. ‘There can be non-living systems that do have mental states’

      I also got a little bit confused by this statement. The argument seems to be that it is possible for whole systems to have a shared mental state. So, initially I took this statement to mean that larger systems could be made up of individual living organisms, while the "superordinate" organism that encompasses them all is not, in itself, living. However, it seems the larger non-living system relies on the individual living systems it is made up of in order to exist. If the component parts of "Gaia" did not have minds, how could "Gaia" itself have a mind? This implies that living is necessary for having mental states, though it may be possible for these mental states to be spread across multiple living organisms.

      So, I find it difficult to see how this concluding sentence of the paragraph links to the previous arguments being made. Although I can see that it is possible to live without feeling, I don't think this can be extrapolated to conclude that we can feel without living. Perhaps the "feeling" can be shared across multiple individuals, but these individuals still need to be alive for the feeling to be 'implemented'.

      As discussed before, feeling seems central to the human experience but is notoriously hard to explain. One thought I had was that perhaps feeling helps us separate our individual selves from outside cognitive technology. The only way to make use of outside knowledge and assistance is to 'feel' like we understand what is being told to us and then use the information in the right way. A Google answer in Japanese, although it might contain information extremely useful to Japanese speakers, will not help me, as I cannot understand Japanese. Thus, I don't get a feel for the meaning, and so it does not add to my mental knowledge. Google itself is not part of my mental state. It is only my interpretation of Google which adds to cognition, and this interpretation is personal and individual: only I can 'feel' it. Therefore, my cognizing remains inside my brain.

    3. What I meant by “There can be living organisms that have no mental states and there can be nonliving systems that do have mental states" was just that “There could be living organisms that have no mental states and there could be nonliving systems that do have mental states.” Of course T3, if we can ever build one, or T4, would be an example of the latter; and plants are an example of the former. (If you can solve the hard problem then you will be able to explain whether, how and why nonliving systems can or cannot feel. Till then we don't know -- but we still have to make the distinction between doing, living and feeling.)


  15. There is no such thing as a distributed migraine – or, rather, a migraine cannot be distributed more widely than one head. And as migraines go, so goes cognizing too -- and with it cognition: Cognition cannot be distributed more widely than a head -- not if a cognitive state is a mental state. (13)

    Is this true? I have had migraines, so I am not trying to minimize how awful they are, and I understand that all of the physical symptoms are FELT (because feelings have to be felt, and feeling like you have a migraine is felt), but can't you empathize if, say, your child or partner had a migraine? Wouldn't the migraine then be somewhat distributed, in the sense that it was affecting the minds of multiple people even if it was physically located in only one person's head?
    This line of argument is why I am still not convinced that cognitive states have to be mental states. To accept this I don't think we need to include "everything that can potentially enter into anyone's cognizing," but even if we did, is that really such a huge concession? Clearly it is good to have environmental tools and cognitive technology - we've talked about the example of language in this skywriting, and to me it seems that even categorizing is a form of cognitive technology that we get from external cues. Why is it so bad to say that cognition might be some computation, and also might be outside of a brain/mind?

    Replies
    1. Hi Julia,
      Just about the point you raised about the migraine affecting other people: I feel like this is a different type of affecting than the one in distributed cognition. Whether or not it affects other people is not your doing per se, nor is it really the result or by-product of your interaction with the world. By that I mean that it has nothing to do with your own cognition (which is the only thing that you can be sure of and account for); it depends solely on the other people. And with that, it ties once again to the problem of other minds. Distributed cognition still seems tied to one's own cognition rather than to that of others.
      No one denies that cognition involves computation; it's just that there is something else in play alongside it. And cognition also arises in part from interaction with the world (otherwise T2 would be enough), so there is definitely interaction with things outside the brain/mind.

  16. “So where does this leave the question of distributed cognition? It is still cognizers who cognize -- the tool-users, not the tools. Yet there is no doubt that cognitive technology has radically widened the scope of human cognizing”

    This paper nicely addressed what I was trying to convey in my previous skywriting regarding the mistakes in Clark and Chalmers' definition of extended cognition. To put it another way, the introduction of "interactive cognition," or the offloading of brain function onto technologies, takes the modest stance against "extended cognition" that I was attempting to articulate. Interactive cognition retains cognition within the confines of the cognizer, without diminishing the importance and influence of cognitive technologies that "extended cognition" was trying to portray. With interactive cognition in mind, I would like to update my response from last week.

    In the previous skywriting, I critiqued socially extended cognition by explaining the example in the Clark & Chalmers paper regarding Otto and Inga and then contrasting it with socially extended cognition (the idea that your cognition is extended to another person's mind; an example that was given was a waiter remembering your favorite dish so you don't have to). In short, Otto is an amnesic who keeps memories inside a notebook, whereas Inga does not have this deficit. It was suggested that Otto's notebook is an extension of his cognition. I argued that this was different from socially extended cognition because when you express your beliefs to another individual, the individual cognizes the belief, and at this point there is a division between where your mind ends and the other's begins. After reading the Dror and Harnad paper, I now change my argument and suggest that there isn't really a difference between Otto using his notebook for memory and someone using another person to remind them of something (socially extended cognition). In neither case is cognition being extended. Instead, interactive cognition is occurring; in other words, Otto is offloading brain function onto his notebook so he can have some sort of "stand-in" memory, and in socially extended cognition, one individual is offloading information onto another person. The cognitive technologies in question would be the notebook and the other person.

    “So if you have a natural brain component, or an implant, either way, if you can feel if it's there (and active) and not if not, then it's part of your mental state -- or rather part of the cerebral state that generates the mental state.” (Harnad’s comment to a skywriting)

    On another note, although I do not believe that our cognition extends outside our bodies, in Otto’s case, wouldn’t he feel that he remembers things when he uses his notebook? And since he can feel that the notebook is there helping him, can it be a part of his mental state according to this comment above? If the notebook is taken away, then he loses the ability to remember things and he loses the feeling that he remembers things, just as when he lost part of his brain and became amnesic. Is this enough to consider the notebook part of his mental state? Or is it that the notebook doesn’t really make Otto remember, but rather it aids him in doing things, without him having the feeling of memory?

  17. I found this article really interesting to read, since it is quite relevant to today's society. There was a lot to relate to, so I could really imagine the examples and hypothetical situations.

    “Can cognitive technologies (i) increase cognitive capacities and thus enhance human efficiency? (ii) affect how people and society go about achieving their goals? (iii) highlight and transform how we view ourselves and our goals? (iv) modify how we cognize and thus change our mental states and nature? (v) give rise to new forms of cognition (such as distributed cognition) and mental states that are either distributed across or even embodied in cognitive technology?” (2)

    These are written in the very first paragraph of the paper. Before even reading on, I feel like I could have answered "yes" to every one of these questions. 20 or 30 years ago I may not have been able to. But today, with things like Google, Facebook, Instagram, Wikipedia, and the many startups that come about every day, I think it is obvious that cognitive technologies are changing the human way of life, particularly thinking. We don't have to sit and wonder about an answer to something; we can simply ask Google. Access to these can make us more efficient, therefore smarter, and hence change our mental states.

    “Why are we ready to contemplate the possibility that Gaia, or an entire species, or an ant colony, might be one single, widely distributed, physically disjoint organism, yet we are not ready to consider that Siamese twins, no matter how tightly fused they are physically, are one single organism?” (10)

    What it comes down to is mental states and minds. One-ness versus two-ness is clearly not related to the physical entity; otherwise Siamese twins would be "one". They have two different minds. Could you then say that if Apple sold the same product, say a MacBook Air, to 5 different people, then all these computers are their own? Yes, they are, because physically there are 5 of them. However, they are all programmed the exact same way, so are they one entity or 5?

    “There can be living organisms that have no mental states and there can be nonliving systems that do have mental states” (13)

    Maybe I missed something, although I don't think that I did, but can someone provide me with an example of how something can be nonliving but indeed have a mental state? How can this be? The paper does not give an example, and I challenge this and don't believe it.

    Replies
    1. Hi Jordana,
      I agree with you that nowadays technologies are definitely giving us great examples of the extended mind. I think that, because we carry our cellphones and our computers around with us so much, our reliance on them is amplified. Our way of living has changed enormously because of those technological advancements. Access to those things is so efficient and quick that it reproduces the same quality of cognitive processes. We can then ask whether those changes in our daily life have changed our cognitive life as well!

  18. "There can be living organisms that have no mental states and there can be nonliving systems that do have mental states." -- I got to this point in the article and felt confused about the part that says that there can be nonliving systems that have mental states. How so? Is a mental state not a felt state? THat's how I've always thought of it. Or is this kind of mental state referring to the mental state that a robot that passes the TT has and would say would be a cognized in its own right? What TT level would you say it would have to pass then? I'm especially confused at the above statement when I then read "To have a mind is to be in a mental state, and a mental state is simply a felt state: To have mind is to feel something – to feel anything at all (e.g., a migraine)." And then “Is cognitive technology limited to increasing the cognitive performance capacity of its users? No. We have argued that cognitive tools are not themselves cognizers, nor do they have -- or serve as distributed substrates of -- mental states.” Maybe I've missed a crucial point.... any input??

  19. I found this article and the previous one very interesting because we are finally discussing the possibility of cognition beyond one's mind, even though this specific article claims that "cognition cannot be distributed more widely than a head -- not if a cognitive state is a mental state"

    The particular example of the Siamese twins, and the discussion about whether a migraine can be shared by more than one mind, to which the authors argued no ("they would not have one, shared mind, even though they did have one, shared body. And if they had a migraine, it would be two migraines, even if it was implemented in one and the same head"), led me to think back to article 10a by Dennett. In describing phenomena and the framework of heterophenomenology, Dennett claims HP rests on raw recorded data, which includes "internal conditions (e.g. brain activities, hormonal diffusion, heart rate changes, etc.) detectable by objective means".

    Since the science to detect a migraine by empirical and objective means does currently exist, perhaps this is an interesting discussion to be had. To reiterate, in the case described above, the Siamese twins are said to have one body, one head, but two minds... and if they had a migraine it would really be two migraines, because to have a migraine is to be in a felt state, and because 1) a mental state is a felt state, 2) cognizing is a mental state, and 3) cognizing cannot be distributed, therefore a migraine cannot be distributed. Say the Siamese twins were interviewed using an HP method: their fMRI scans, for example, would reveal the exact same data because they share one head (brain); presumably the rest of their body would not be able to simultaneously exhibit two different kinds of physiological behaviour; and the rest of the raw recorded data would be identical (or even just one set of data?). Couldn't it then be argued that, according to HP, the migraine is distributed between two minds? Or would it be that two minds were having the exact same migraine? What is the difference?

    Dennett asserts that "It is important to remember that the burden of heterophenomenology is to explain, in the end, every pattern discoverable in the heterophenomenological worlds of subjects; it is precisely these patterns that make these phenomena striking." So in the case of the Siamese twins, the pattern would be identical, which would therefore make the phenomena identical. What are the differences or discrepancies, if they exist, between a phenomenon and a felt state/mental state?

    Replies
    1. Siamese twins means: two bodies, though some parts of their bodies might be shared. The brains could be partly shared, but if the twins really had just one brain then it wouldn't be Siamese twins, but one person with two bodies, wouldn't it?

      If the brain were only partially shared, so they each had minds of their own, then even though the cause of the headache might be in a shared part, they would still be having separate headaches rather than both having the same headache, just as if they were both listening to the same music but each feeling it independently. Same for any other thing they feel or think.

      (But this is all very speculative.)

  20. In this paper, cognizing is defined as a mental state. Cognitive technology allows cognizers to extend their performance and offload some functions that would otherwise have had to be mental. Language is a medium by which (brain) functions can be transmitted to other beings/cognizers. This is different from performance capacity, i.e. the ability to do without there being any thinking whatsoever going on. That is, using the functions or results of cognitive technology in a cognitive way is different from just mechanically performing operations where no cognition is going on; there has to be some cognition at some level or another, otherwise it is all brainless computation. Cognition does not encompass vegetative "autonomous" states, or any other kind of relatively uncontrollable autonomous system; it is something which can be acted upon. All cognitive processes are conscious. It does not matter whether or not we know how or why these processes occur; it feels like something to have them occur, therefore they are inherently conscious (being aware of them is the sole requirement; there is no need to be able to explain them). Several different views are then investigated, which carry more or less weight, like the extended mind or the Turing Test. Finally, distributed cognition is investigated, which is defined as cognition beyond the boundaries of the mind and body.

  21. Intrinsically cognitive vs. a tool used by cognizers -- the paper talks about a T2-passing robot as the only possible thing that could be the former rather than the latter. This does make me think about the frame problem and whether artificial intelligence could ever reach the level of being truly intrinsically cognitive. The frame problem refers to the gaps in programming/knowledge that exist in artificial intelligence, for example object permanence (knowing a phone will still exist even when you leave the room); although once you find a gap you can fix it, there will always be another gap and another fix to be done. So, if there is always a frame problem, always a gap in the 'cognition' of a robot, can robots ever actually be intrinsically cognitive?

  22. I think I am unclear on the concept of the extended mind. I am not quite sure I understand what that means… Does this mean extended feeling?

    If I understood this, we’re saying we use things other than our brains to cognize, however it’s only our brains that have mental states. Is there a difference between using the input from outside sources to cognize & relying on others’ mental states to do the cognizing for us? I mean, when another “feeler” tells me they feel, I can imagine the feeling. However, even if it makes me feel the same way, it’s still not the same feeling. There is the other person’s feeling and there is the one I am feeling. I don’t think there are “extended feelings” (or whatever that may be); I am using the output of the other feeler as the input for my feeling – but it’s not the same.

    If we agree that a felt state is a mental state, then a "mind" is a feeler. Whatever the feeler feels is part of his mind, and since there are no unfelt feelings, whatever it doesn't feel isn't part of the feeler's mind. But what if the feeler doesn't feel something happening, yet it is going on in the feeler's head? For instance, vegetative states aren't felt states; therefore they aren't mental states and aren't part of the feeler's mind? But if I become consciously aware of my breathing, does it then become a mental state?

    But then, if you ask me who my first grade teacher was, I have no idea how, but my brain gave me the answer: I felt the answer. Now let's say that you asked me that same question, but while I was trying to come up with my teacher's name, I got distracted; my brain got the answer but I never got to feel it. So we could say that everything my brain was doing while looking for the answer was an unfelt state. So how is this any different from Google, which just hands us the answers?
    Sorry, I don’t think I understand the concept…

  23. In this article, I found the way multi-cellular organisms were discussed pretty interesting. The authors give the example of amoebas that, depending on the acidity of the mixture they are in, can group together; this group of amoebas is then no longer considered a group but rather a single organism. It is interesting to see how the size of things relative to our ability to perceive them changes the way we represent groups. It seems right to think about the group of amoebas as a single organism, presumably because when they are grouped together we do not tend to see the distance between each amoeba. However, with animal species, even though it has sometimes been suggested that they represent a single organism, this way of categorizing these animals is perceived as wrong and counter-intuitive. In this way, there seems to be great subjectivity based on our ability to perceive individual organisms when they are grouped together. This view can maybe also be applied to humans, where things that seem indistinguishable from their consciousness are seen as being part of their consciousness. With more and more reliance on technology, it is possible that people tend to see the gap between people's brains and technology less and less, which leads to the perception of technology being part of humans' cognitive abilities.

    Replies
    1. Hey Anastasia,

      I'm in agreement with you; I think this talk of single-celled organisms is fascinating. I spoke about it in previous skywritings. The brain (and the rest of the nervous system) is the seat of our cognition (this is the place where the causal mechanisms for our behaviour exist), but in a way, the array of neurons which constitutes the brain is also a form of distributed cognition. No single neuron is capable of cognizing, but together they are. In the same way, no single neuron is said to be feeling, but together they are...

      This alludes to the mystical emergent property of feeling. But what happens when we discuss other single-celled organisms? Are they cognizing in their own right? Are they feeling? They certainly possess sensory capacities, but to date we have never designated them as cognizers. And if they are not cognizing and feeling, where do we draw the arbitrary line in the sand determining what does and doesn't cognize?
      If we assume for a second that the one-celled amoebas are cognizing and feeling, then those properties are probably inherited by the multicellular slime mould. How is our cluster of neurons different (besides the obvious)? Is it by virtue of the fact that we evolved to be multicellular organisms (not just when conditions are appropriate) that individual neurons cannot function independently? Have they offloaded this cognitive burden onto each other? Onto a network of cells which allows the whole to function as greater than the sum of its parts?

      I think I'm rambling a bit here, just because I'm hung up on this idea of the emergent property of consciousness. Hopefully it made the tiniest bit of sense.

  24. I agree with this paper in response to 11a, especially: “the only kind of “technology” that might really turn out to be intrinsically cognitive, rather than just being a tool used by cognizers, would be a robot that could pass the Turing Test (TT)”. That's what I was trying to get across in 11a: the implant wouldn't be intrinsically cognitive, only used by a cognizer with the ability to extend its cognitive scope.

    I also liked the Siamese twins example – they are two organisms because they have two different minds. This made me think of split brain patients. All of us actually have two different “minds” even though we count them as one. Our right brain and left brain are different and even have different functions, thoughts, desires etc. Normally, these are coherently joined together so as to form one “mind” (and one fused organism) but in split brain patients when this bridge is broken, the left mind and right mind often don’t agree. For example, you can see videos of split brain patients who will do up their shirt with one hand and then the other hand (run by the other side of the brain) doesn’t want the shirt done up and will unbutton it again. Would we count a split brain patient as two separate organisms?

    Replies
    1. Hi Ailish,

      I completely agree with you when you say “the implant wouldn’t be intrinsically cognitive, only used by a cognizer with the ability to extend cognitive scope.”. It is exactly what I thought of when I was reading 11a. It raised a lot of questions about where our extended minds start and stop. I think the extensions of our minds stop once they successfully create other minds (like reproduction and how babies are part of mothers and then grow into individual lives at birth).

      Also, I really like the split brain example you mentioned. It raises a lot of questions about the relationship between the body and mind and the physically extended mind. In my opinion, it would be 2 different cognizers that are sharing biological tools that assist cognition (such as perception).

  25. I appreciate that this paper called into question what bothered me about the previous one. An extended mind is not generated just because an individual is using peripheral additions to their sensorimotor capacity. Cognition is always confined to that which feels.

    Speaking a bit to the hard problem, however, the paper does not address how it is that an array of neurons inside our heads can actually feel things. This would appear to be a form of distributed cognition. Intuitively we would say that no single cell has the properties of the whole. Does this mean that feeling is a property which emerges as a consequence of a sufficiently complex system? I touched upon this in a previous skywriting and it still baffles me.

    If you remove a part of a person's brain (say the V4/V8 region of the visual cortex), then this person would be unable to recognize colours. More than that, however, they would no longer be able to FEEL colour perception. We have removed a portion of their 'distributed cognition', thereby narrowing their behavioural performance capacity, but also their feeling capacity. And yet our patient would be otherwise completely functional! So how much brain (cortex in particular) can we remove before an individual is no longer cognizing/feeling?

    This is reminiscent of the old "how many grains of sand do you have to remove from a pile before it's no longer a pile?" Where is cognition/feeling, if we can remove parts of a brain and an individual still cognizes/feels? Is it modular? That is, are particular behavioural (and maybe feeling) capacities relegated to particular functional modules (areas) in the brain? If so, what can we say of the tinier modules (individual neurons) which make up those functional areas? Are they cognizing?

    I'm definitely not the first person who's ever asked this question, but I've yet to see it addressed in our readings. Is this sort of problem insoluble? Or does it hinge particularly on the way we define cognition (i.e., are single-celled organisms, by definition, cognizing)?

  26. 1. “It is conceivable that the mechanism of the TT robot could be more widely distributed: some of it inside and some of it outside its body, integrated wirelessly, perhaps, from some central location. The states consisting of the joint activity of the robot-internal and the robot-external components of the mechanism that give the robot the capacity to pass the TT would be indisputably distributed cognitive states. […] Such a hypothetical distributed robot (or person) could even have a distributed migraine. But what we would really have then would be a robot (or person) with an extended (or distributed) body.”

    Clarification: For a mind to be truly extended, it must go beyond the body. So even if the mind is distributed, it is not extended because it is still constrained to the limits of the body.

    2.“Writing and speaking also allow us to offload our knowledge and memory outside our own narrow bodies, rather than having to store it all internally. Individual cognizers write books, but Wikipedia, for example, seems to be growing spontaneously according to an independent, collective agenda of its own, more like the joint activity of a colony of ants.”

    This passage in particular spoke to me. Seeing Wikipedia as an independent entity, fully autonomous, is appealing. As with the Internet itself, it is often hard to see where the users (and their cognizing) end and where what is online takes on a life of its own. But like books or any other outside source of data, the Internet is just a tool. “It is still cognizers who cognize -- the tool-users, not the tools”.

  27. This paper seems to agree with and articulate a lot of the points and objections I was trying to make in response to 11a. In particular, the passage about the toast and the toaster nicely illustrated what I was trying to articulate beforehand: the toast and the toaster do not make up some kind of hybrid state: "the toaster does what it does, and the bread gets done to it whatever is done to it, but we will consider their states as distinct, acting upon one another." And the paper then goes on to describe the toast + toaster as what it is: not a hybrid state but rather a system.

    I get a little confused when they get into consciousness. I know that it's been continually emphasized in this course that "not everything a human being can do is cognitive", for example breathing and balancing (when they are unconscious). These things that we do that are unconscious and automatic are not cognitive but rather vegetative. OK, fine. But we know that there are many states which we would call cognitive that are unconscious: not just that we're not conscious of how our perception of a chair works (for example), but that through stimulus-stimulus learning, Pavlovian learning and reinforcement, we learn to perform certain behaviours (which we would call cognitive) in certain situations/settings, and these become automatic and unconscious. So where do we draw the line between what is cognition and what is not, if consciousness is not a reliable criterion?

  28. "Writing and speaking also allow us to offload our knowledge and memory outside our own narrow bodies, rather than having to store it all internally."

    This goes nicely with the points I made in my previous post about how writing is a fundamentally integral part of higher level mathematics, because one's working memory is unable to store and manipulate such complex strings of numbers and operations at once.
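    A concrete (made-up) instance of that offloading: try computing 47 × 86 entirely in your head, versus on paper. Written out, the partial products store themselves: 47 × 86 = 47 × 80 + 47 × 6 = 3760 + 282 = 4042. Each intermediate result (3760, 282) sits on the page rather than in working memory, which is exactly the offloading described in the quote above.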

    "We are not aware of the generating mechanism underlying our cognitive capacity, however, only of its outcome: Hence retrieving a word from memory or retrieving a word via a Google search feels much the same to us."

    I think notions like this are interesting in the context of our definition of what cognition is. Since cognition is, in part, computational and dynamical, we can look at these two methods of retrieval as having the same input and output while simply using radically different algorithms and even dynamical processes.
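    To make this concrete, here is a minimal sketch in Python (a toy illustration of my own; the function names and data are invented, not anything from the paper) of two retrieval routes that are identical at the input/output interface but entirely different inside:

        # Two "retrieval" mechanisms with the same input/output interface
        # but radically different internals (all names are hypothetical).

        def retrieve_from_memory(query, lexicon):
            # "Internal" route: direct lookup in a stored associative map,
            # standing in for retrieval from one's own memory.
            return lexicon.get(query, "tip of the tongue...")

        def retrieve_via_search(query, corpus):
            # "External" route: a brute-force scan over outside documents,
            # standing in for something like a Google search.
            for entry in corpus:
                if entry.startswith(query + ":"):
                    return entry.split(":", 1)[1].strip()
            return "no results"

        # Same question, same kind of answer, different causal routes:
        lexicon = {"seven times nine": "63"}
        corpus = ["seven times nine: 63"]
        print(retrieve_from_memory("seven times nine", lexicon))  # 63
        print(retrieve_via_search("seven times nine", corpus))    # 63

    From the cognizer's side, only the delivered answer is felt; which route produced it is invisible at the interface, which is the point of the quoted passage.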

  29. Yet again, I find this article to be dressing up the same argument in different clothes.

    "Mental States Are Conscious States. Let us consider brain states, rather than just mental or cognitive states. We have agreed that not everything our bodies do is cognitive. Some of it, like breathing, balance, or temperature control is vegetative. So, too, are the brain states that implement those vegetative functions. We have also agreed that although cognizing is conscious, we are not conscious of how cognizing is implemented. When we recognize a chair, or understand a word, or retrieve the product of seven and nine from our memory, the outcome, a conscious experience, is delivered to us on a platter. We are not conscious of how we recognized the chair, understood the word, or retrieved “63”. Hence the brain states that implement those cognitive functions are not conscious either. Are unconscious brain states mental?"

    First of all, of course unconscious brain states are mental. They're not physical - well, perhaps they are, or CAN be, but they're not solely physical. When one is in a coma, completely unconscious of one's surroundings, one's brain is still in a state, and therefore it is mental. Unconscious, as I don't need to inform anyone, is completely different than "brain-dead".

    Another thing - of course we aren't conscious of how cognizing is implemented. This article is problematic in that it seeks to reverse-engineer cognition... we will never be fully conscious of how cognizing is implemented, because this comes back to the same problem! Whenever we think about it, we are cognizing, therefore we draw a blank - by cognizing about how cognizing works, we still don't know how THAT cognizing comes about or can be implemented.


  30. “Let us first agree that not everything a human being can do is cognitive. Breathing, for example, except in some special cases, is not cognitive; neither is balance, again, except in some special cases.”

    I am going to have to disagree with this statement. I believe that breathing is at root a cognitive process, because it is controlled by the brain. I don't think that something necessarily has to be conscious for it to be considered cognitive; therefore, I wouldn't equate consciousness with cognition. A lot of our cognitive processes go on without our being fully aware of them. Just because a process such as breathing occurs unconsciously does not mean that it is not cognitive. We attribute functions like this to the brain, and they are under cognitive control.

    Replies
    1. Hi Rachel,

      In my opinion, cognition is the mental act of perceiving, processing and understanding information through our senses and thought. I do not see an action like breathing fitting into this description. I am curious to know why you think it is cognitive. I agree that a lot of our cognitive processes go on without our being fully aware of them (such as dreams and certain reactions [I may be wrong]). But when it comes to breathing unconsciously, I do not think that there is any cognitive activity taking place.

      I view every conscious action as a cognitive one, and hence I would disagree with you and equate cognition with consciousness. That said, I agree with you that not every cognitive action has to be conscious.

  31. Our performance capacity describes what we, as an organism, are able to do as a whole. If I do something with a tool, for example, I am the one using it, so the outcome is that, as a whole, I am accomplishing something. More concretely, when I drive a car, it allows me to move faster from one location to another. The car has extended my capabilities and has now become part of my sensorimotor abilities. In that sense, cognition isn't restricted to what is happening inside my brain but is rather defined as anything I can do, whether or not it involves other, external things.
    “We simply need to make the observation that what makes some of our capacities cognitive rather than vegetative ones is that we are conscious while we are executing them, and it feels like we are causing them to be executed – not necessarily that we are conscious of how they get executed”
    We are indeed conscious of something when we execute an action. This feeling isn't a perfect reflection of what is happening in the brain; far from it. If we could explain what neurons are doing, and how this causes emergent feelings, we would have solved the hard problem. But even if it seems that we can grasp how a thought comes to mind, we don't have a clue about the causal explanation behind it; rather, we are conscious of the things we executed, even if the causal mechanism isn't translated into that conscious state.

  32. I think that much of the contention over the extended mind/cognition can be addressed by this: "It is still cognizers who cognize -- the tool-users, not the tools." It seems appropriate to keep cognitive technologies like telecommunications and computing as just cognitive tools, rather than incorporating them into cognition itself, because that would discredit the hundreds of thousands of years of evolution.

    But this raises another difficult question: what if we developed the technology to implant a microchip in the brain that stored working memory, thereby increasing our capacity to hold more things at once? Does this chip become a part of our cognition, or is it still only a tool?
