Saturday 2 January 2016

6a. Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization

Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization. Elsevier.  

We organisms are sensorimotor systems. The things in the world come in contact with our sensory surfaces, and we interact with them based on what that sensorimotor contact “affords”. All of our categories consist in ways we behave differently toward different kinds of things -- things we do or don’t eat, mate-with, or flee-from, or the things that we describe, through our language, as prime numbers, affordances, absolute discriminables, or truths. That is all that cognition is for, and about.



75 comments:

  1. Hi, I have not done the readings as yet but I have a few questions in mind after today's class discussion. Thought that this would be a good place to share them:

    1) Say we have Category C and Object O. Does C require O to have certain features to be part of C (like the bread box example)? Or does O need to have a certain number (70%? 90%?) of these features to be part of C? Is there a discrete entry barrier or is it flexible?

    2) How do exceptions work for categories? For instance, most birds fly but penguins don’t and are still considered birds, or the case with platypuses being categorized as mammals despite laying eggs.

    Replies
    1. Hi Nivit,

      Your second question made me wonder about irregular verb endings like the following:

      Regular suffixes:
      work worked
      kick kicked
      talk talked

      Irregular forms with completely unmatched patterns:
      go went gone
      shut shut shut

      How do exceptions work for verb conjugation categories?

    2. Hello,
      I still haven't read this article, so I'm not commenting on it, just on your comments from the class discussion!

      The Wittgenstein article that kept getting referenced is about family resemblances. The essence of the article is that people who belong to the same immediate family do not share every single property, but rather there is a constellation of properties that they all have somewhat in common that endows them with a family resemblance.
      Example from my family:
      ME: brown straight hair, blue eyes, fair skin, freckles, short
      Brother: brown curly hair, brown eyes, olive skin, freckles, short
      sister: brown curly hair, hazel eyes, olive skin, freckles, short
      mom: *fake blonde/real brown hair, blue eyes, fair skin, freckles, short
      dad: thin brown hair, hazel eyes, olive skin, average height

      So you can see that while my brother, sister and I have different mixes of our parents' traits, we have the same family resemblance because we all have brown hair, are short, and have freckles (I have thin hair, they have thick hair, etc.).
      So individual differences in categorizing are there, but you can see that all members of a category have some properties in common.

      This is not to say that any of these properties are necessary or sufficient but just that having some of these properties is enough...


      To take it to your example of birds: flight is a property of birds, but so are beaks, and both platypuses and penguins have beaks. Laying eggs is another property of birds, and platypuses and penguins both do that. But being warm-blooded (I think) is not a property unique to birds, and neither are a bunch of the other properties that a platypus has.

      For irregular words, I am not totally sure how it would play out but I would think that there is some sort of Wittgenstein-ian approach to irregular verb suffixes!!!


      This might have rambled but what I am trying to say is that having/not having a property does not exclude a member from a category, it goes beyond individual properties such as laying eggs for a platypus or not having flight for a penguin.

    3. Nivit, I don't think that something needs to have 100% of the properties of a category, or 70% or 90%. There may be a threshold, I'm not sure (maybe Stevan can add to that point?). What I do know is that there is a categorization technique based on psychological similarity called the geometric approach, whereby items are represented as points in a multidimensional space. In this space, objects that are more similar to and more representative of a certain category sit closer together, and more dissimilar objects sit farther apart. So, let's say you have a space for the category "birds": bluejays, pigeons, robins, hawks, etc., each represented by a dot in the space, are all quite close to each other. The dot for penguin, which is still a bird, would be further away, showing that it is less "prototypical" of the bird category and has fewer similarities to the other prototypical kinds of the bird category, but it is still in the bird category. So I think that we may cognize these attributes as geometric mental spaces, and so when someone says "name me the most typical kind of the bird category," you will think of the pigeon rather than the penguin. And when someone asks you to name a fruit you will say apple or banana rather than tomato.
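      A minimal code sketch of the geometric idea above (the feature names and values -- can_fly, body_size, sings, swims -- are invented purely for illustration; this is not real data or the paper's model):

```python
import math

# Features: (can_fly, body_size, sings, swims) -- arbitrary 0..1 codings, made up for illustration
birds = {
    "robin":   (1.0, 0.2, 1.0, 0.0),
    "bluejay": (1.0, 0.2, 1.0, 0.0),
    "pigeon":  (1.0, 0.3, 0.5, 0.0),
    "hawk":    (1.0, 0.5, 0.0, 0.0),
    "penguin": (0.0, 0.6, 0.0, 1.0),
}

def distance(a, b):
    """Euclidean distance between two points in the feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The "prototype" is just the average position of the members we know about.
prototype = tuple(sum(v[i] for v in birds.values()) / len(birds) for i in range(4))

for name, point in sorted(birds.items(), key=lambda kv: distance(kv[1], prototype)):
    print(f"{name:8s} distance to prototype: {distance(point, prototype):.2f}")
# penguin comes out farthest from the prototype -- less "typical",
# yet (by the all-or-none membership rule) still 100% a bird.
```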

    4. Hi. In terms of the animal example, I’m inclined to believe that it’s a majority rules kind of thing. In other words, the things we associate with birds are things that the majority of birds would have. Our minds create these categories without us having to think about it.
      But I think we can also consciously expand these mental categories if we get new information. For example, if you looked at a platypus and saw that it laid eggs, had a beak, etc., you'd probably assume it was some type of bird, like a duck. At some point you will have learned otherwise (perhaps in school), and so you will consciously add "platypus" to your category for mammals and remove it from birds. The reason a platypus is considered a mammal and not a bird (both are warm-blooded, by the way) is that it has mammary glands, which you might not know just by looking at it.
      So I'd say there are certain features which we can use to categorize that are explicit and that our mind uses automatically (the article calls this "unsupervised learning"), but there are also features that must be learned in a more conscious way, perhaps through the trial and error that Stevan discusses (which would be supervised learning). I wonder if this is also known as active vs. passive learning? Both must occur, because we are certainly able to categorize things without the need for error and correction all the time. I feel like how we perform unsupervised or passive learning is really the more interesting of the two.

    5. Simple fundamentals of categorization

      Let's say there is a big set of things, B. And let's say that there is a smaller subset of B: C. If for every member of B there is a correct yes/no answer to the question of whether it is a member of C, then C is a category.

      If someone can correctly sort all the members of B according to whether they are or are not members of C, then that someone "has" that category, C.

      Now stop and think. "Correct" means that there is a right or wrong about the matter. It's not just a whim. From that it already follows that there must be a way to distinguish the Cs from the non-Cs, otherwise it would be happening by magic.

      If there are some members of B for which it is not possible to determine whether or not they are members of C, set them aside! We are not talking about things that cannot be categorized but about things that can.

      Let's also set aside a trivial case: If B contains only a finite number of things, say, 10, and 5 of them are in C and 5 are not, then the rule for categorizing them could be just to memorize which 5 are members and which not. If you can do that, you "have" the category.

      But we are interested in non-trivial categories, where you cannot learn the category simply by memorizing which things are members and which ones aren't. So we are talking about infinite sets -- B has an infinite number of different members, and so do C and not-C.

      Now if all Bs are either green or blue, and the Cs are all green but not blue and the non-Cs are all blue but not green (and you can tell the difference between green and blue) then you have the category as soon as you notice that the Cs are green but not blue and the non-Cs are blue but not green. Green and blue are features, and the simple rule is: If green then C and if blue then not-C.

      Actually, the rule could have been even simpler than that. But it could also have been much more complicated: "If green or round and not-small, then C, otherwise, not-C." And to have an infinite number of Cs and not-Cs there would also have to be other features, irrelevant ones, that vary, but have nothing to do with being C or not-C.

      "Feature" is also an arbitrary descriptor: We call being green a feature, but we could also call "obeying rule R" (where R is "green or round and not-small") a feature. So bear in mind that a feature could be something that can only be described by a complicated if/or/and/not rule (a "Boolean" rule), like the ones you use in more complicated google searches).

      Armed with these simple fundamentals, I'll reply to some of your comments.
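      A minimal sketch of what such a Boolean membership rule could look like in code (the features -- green, round, small, square, percent_green -- are hypothetical; this is an illustration of the all-or-none idea, not an algorithm from the paper):

```python
def is_member_of_C(thing):
    """Boolean rule: 'green, or round and not small' -> member of C."""
    return thing["green"] or (thing["round"] and not thing["small"])

def is_member_of_C_quantitative(thing):
    """A rule can be quantitative and allow exceptions, yet still be all-or-none:
    'at least 70% green, or, if square, then 5% green is enough'."""
    if thing["square"]:
        return thing["percent_green"] >= 5
    return thing["percent_green"] >= 70

# Every member of B gets a correct yes/no answer -- that is what makes C a category.
print(is_member_of_C({"green": False, "round": True, "small": False}))        # True
print(is_member_of_C_quantitative({"square": True, "percent_green": 10}))     # True
print(is_member_of_C_quantitative({"square": False, "percent_green": 10}))    # False
```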

    6. NK, LA: A feature can be based on a percentage, and there can be exceptions, but there has to be a Boolean rule that jointly covers all of that if there is to be a category at all, so that someone can say, correctly, whether any thing presented is a member or not. (Note that the person does not have to know the rule consciously, or be able to state what it is. They (or rather their brains) just have to be able to follow the rule. This is Wittgenstein's distinction between knowing a rule and following a rule.)

      JP: Wittgenstein's "family resemblance" has to be a complicated Boolean rule/feature. And every member has to obey the Boolean rule, hence have the complex Boolean combination of features.

      JS: A rule can be quantitative or statistical. A member can be a member if it's "at least 70% green" or "anything above threshold value X" but it still has to be an all-or-none rule, covering exceptions as well: "at least 70% green, or, if square, then 5% green is enough." But it must not leave anything un-categorizable, otherwise it is not a category, or the category is not yet known.

      Note, though, that I said C is a subset of B, and B is not everything in the universe. So B can be all 2-dimensional shapes, and C can be "square." You can say whether or not any of those shapes is square, but you can't say whether the sound of a foghorn or the smell of strawberries or a feeling of anxiety is square. So for some things (in fact, most things) a given category can be irrelevant.

      But similarity is not categorization. (Nor is typicality.) Similarity can influence categorization -- and, more interestingly, categorization can influence similarity, but they are not the same thing. Categorizing is all-or-none: something either is or is not a member of C; there's no such thing as being a 70% member. That's not a category. "Big" is not a category: Some things can be more big and some can be less big. And there's no non-arbitrary line between big and small. But bird is a category: Something is either a bird or not a bird; it can't be 70% bird. But, again, a fever can't be a bird, and it seems silly to say it's a "non-bird." A fish is a non-bird. And both typical birds and atypical birds are 100% birds. (So, so much for Rosch's view of categories.)

      AV: Yes, unsupervised learning is passive learning, whereas supervised learning is trial and error with corrective feedback. Only very obvious categories can be learned passively. The third way, though, is via verbal instruction. But for that, your words first have to be grounded, and it can't be verbal instruction all the way down. (That's the symbol grounding problem.)
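      A toy contrast of the three routes just described -- unsupervised exposure, supervised trial-and-error with corrective feedback, and verbal instruction -- using made-up one-dimensional inputs (an illustrative sketch only, not the paper's model):

```python
# Made-up one-dimensional "sensory shadows" and the right response to each.
inputs = [0.10, 0.20, 0.15, 0.80, 0.90, 0.85]
labels = [0, 0, 0, 1, 1, 1]

# Unsupervised (passive) learning: no feedback, just notice the structure --
# here, put the boundary in the biggest gap between the observed values.
ordered = sorted(inputs)
gap_sizes = [(ordered[i + 1] - ordered[i], i) for i in range(len(ordered) - 1)]
_, i = max(gap_sizes)
unsupervised_boundary = (ordered[i] + ordered[i + 1]) / 2

# Supervised learning: trial and error with corrective feedback,
# nudging the boundary a little after every mistake.
boundary = 0.0
for _ in range(20):
    for x, y in zip(inputs, labels):
        guess = 1 if x > boundary else 0
        if guess != y:                       # corrective feedback
            boundary += 0.05 if y == 0 else -0.05

# Instruction ("hearsay"): someone who already has the category just tells you
# the rule -- but the words of the rule must themselves already be grounded.
instructed_boundary = 0.5

print(unsupervised_boundary, round(boundary, 2), instructed_boundary)  # 0.5 0.2 0.5
```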

    7. Julia and Jordana, I understand the point you made with the family example and the geometric approach. Thanks for your comments! At a certain point the lines must get blurred for something to be included in a category (e.g. when an element is very, very far away from the prototype). Do you know how this works?

      It’s slightly easier to see how exceptions can work for these categories with no entry rules, if there are no large alternative categories for elements to fit into. On the other hand, Lucy raises a good point about exceptions in categories with strict rules like irregular verbs.

  2. This comment is related to class on Feb 8, but I'm just wondering what everyone thinks about such an arbitrary word as 'art'. We seem to understand what we mean when we say something is 'art', and we can easily say whether something is art when we look at it, but then we also leave it up to the artist to deem whether something they have created (or, I suppose, found, in the case of found art) fits into the category of art. But this seems to go against the entire feature-detection or even prototype idea of categorizing things, because what if someone puts a chair in a museum and calls it art, but then you see that chair being used as a chair on another occasion and would find it problematic to call it 'art'? Is this a case where a referent has multiple meanings? Also, what do you think, then, is the categorization criterion for something of this arbitrary nature? I keep trying to think of features that would categorize something as art (ranging from modern art to the Mona Lisa), but even if you say "something that can be displayed for others to appreciate" there are so many subjective interpretations. This makes me think of the comment someone made in class about abstract words in general and how unsatisfying it is to try to understand these primitive aspects of their meaning. Sorry for rambling, I might just be getting lost in theory, but this has been frustrating me since yesterday.

    Replies
    1. In the next article Harnad just touches on how more abstract terms such as "bachelor" get grounded by other words such as "man" and "married" and take on the categorical properties of the words that define them. The difficulty with art isn't so much how it is categorized but how it is defined by other words. The reason it can be frustrating to categorize things into "art" is twofold: 1. it is (arguably) an artificial category and therefore it is whatever people say it is, and 2. everyone says it is something different. That's why so many people don't understand certain types of art: simply because their definitions of what fits inside the category are different from the artist's or other viewers'. Some might define art around paintings and sculptures, some around music, some think art is everything. There is no one definition. So I wouldn't say it's a case with multiple meanings; each person has an idea of what they consider to be art, but the meanings are likely different for each person, and it's in trying to satisfy everyone's definition that you are getting lost.

    2. I really like the points you make, Alba & Jordan!

      In the case of a word like 'art', I think an individual's causal history with the world would play a much bigger role in the categorization of the things that constitute such an abstract notion as 'art'. For instance, in some places writing itself is considered a visual art (e.g. Chinese calligraphy), and some modern art is considered art precisely for not conforming to the common features of the category 'art'.

      I totally agree with Jordan that art is more of an artificial, or arbitrary, category. It seems to me to be more of a cultural or sub-cultural phenomenon, and from within such groups the category of art and the things that constitute it might narrow a little. Also, the word itself may, for different individuals, be grounded in words like "beauty," "emotion," and other such abstract words that further subject-ify the category. There is probably overlap throughout the world as to what humans classify as 'art'; in the end, though, the arbitrary limits of which things fall into this category and which do not are set by the subject alone, dictated by what the subject has experienced and associated with 'art' throughout their life.

    3. “We perhaps rely more on our own sensory tastes in the case of beauty, rather than on hearsay from aestheticians or critics, though we are no doubt influenced by them and their theories too”

      I found this to be a really interesting idea related to the art question I had, because it leads me to think about the ways that we can universally understand something as art or beauty, or simply as "beautiful," even if we can't describe what it is about it that makes it beautiful... Do you think there is some capacity in our senses (biologically) that allows us to detect beauty and then categorize it? If we are able to categorize things as being beautiful or not (and supposedly we are, based on how popular things are or how things historically become known as beautiful objects), then it must be related to the earlier claim Harnad made about categorization, which is: "Let us assume that if organisms can categorize, then there must be a sensorimotor basis for that skill of theirs, and its source must be either evolution, learning, or both."

    4. This comment has been removed by the author.

  3. I noticed in class that we briefly touched on a theoretical hierarchical example of abstraction (Charlie<Apple<Fruit<Thing) and then mentioned a more dynamic, interconnected example in which the features that are abstracted change based on relevance to the situation. I find this interesting because these two examples (at least conceptually, maybe functionally) resemble two models of semantic memory (hierarchical and spreading-activation network). I also noticed that memory was mentioned in parallel quite frequently in this paper. Through this, the process of abstraction reminds me of cognitive economy. Are they related in any way? If we give weight to certain features (as in the ugly duckling story) is this not unlike cognitive economy? I know the model isn't exactly the same, but the idea of superordinate, ordinate, and subordinate categories in a hierarchical model is a nice contrast to the abstraction process which (if I'm not mistaken) occurs more like a network that is constructed based on relevance.

    I'm having some trouble with the "vanishing intersections" section of this paper because it reminds me a lot of the symbol grounding problem that we discussed this week. Is Fodor proposing that there are some categories that are thought to be distinct to an individual but cannot be defined in relation to other categories or with a definite rule? For instance, is he saying that these categories must be innate because I cannot categorize the word "dog," since I am unable to make an always-true, all-the-time rule for defining this category, and therefore category membership cannot be absolute? I'm not entirely sure 1) if I have this concept interpreted correctly and 2) how the jump is made from our poorly defined categories to having an innate mechanism: category membership can be fuzzy, but why would that mean that we have an innate mechanism for every category? As the paper mentioned, the sensorimotor experiences we have with the world are often refined via linguistic instruction or explicit learning (which is also a sensorimotor event in and of itself, no?), so even if it doesn't look like a dog, if twenty people tell you it's a dog, you'll probably put it in that category. Also, if abstraction is done for functionality, it should be expected that categorizations could become dynamic (and thus less bounded) under certain situations. Just because we don't know how we do it doesn't mean it's innate. Am I missing Fodor's argument here completely?

    It would make sense to me that the innate evolutionary ability for language which makes most learned categorization possible would save a lot of brain development as opposed to having a pre-made category for everything.

    In the same vein, I've seen a few articles that link Poverty of the Stimulus with Vanishing Intersections, with some saying they're very related and others refuting that claim--Can anyone clear that up for me?

    Replies
    1. Hi Riona,

      I can't respond to your comments about semantic memory models because I don't know very much about that. But regarding your questions of the vanishing intersections idea/model here's what I've got.

      “All evidence suggests most of our categories are learned”
      To address this statement, and your question Riona, I must bring up the vanishing intersections argument of Fodor. This is the idea that if we look at the intersection of all of the things in a category, and we try to find (a) feature(s) that doesn’t change (is invariant) among all of them, we find nothing. This basically says that we can’t possibly find a feature that all members of a category share.
      We sometimes explain the process of categorizing as creating a mental Boolean formula like "if x but not y and not z, either w or u but not p," etc. I agree that there is no way I knew what "chair" meant prior to learning it from parents or teachers. But perhaps the aforementioned Boolean formula is innate, while attributing a particular name to that formula is learned. This would make categories/categorization a hybrid of innate and learned. I am hesitant to accept that categories are entirely learned because of something like the poverty of the stimulus argument. It is impossible that children/people/humanity have learned all the possible things that could be a chair (because almost everything can be a chair), simply because we haven't had enough "feedback"! Somehow we know (although we have likely never seen it before) that we can sit on a rock or a stump or a phonebook and that it could be called a chair. And that "somehow," I think, Fodor supposes is innate. Categories are innate: this is how we know that all these things, which really share no features except that they are all chairs (which is circular anyway), are chairs.

    2. Hi Riona,

      I'm also wondering to what extent the spreading-activation model relates to categorization.

      During class it was mentioned that objects are [probably] not categorized based on the prototype of their kind. However, if we think of our semantic memory as a spreading-activation model (Collins and Loftus 1975), then we can see typicality effects. The model suggests that we have a semantic network with nodes that represent certain concepts (which point to specific features), and the closer the nodes are to one another, the more closely related they are (the lengths of the links represent degree of relatedness).

      There is various evidence supporting this model. For one, typicality effects can be seen when you ask someone whether or not a canary is a bird and whether or not a penguin is a bird. On average, people will tend to answer "yes" more quickly to the first question than to the second. This is probably because a canary has typical features of a bird (it can fly, has wings, a beak, feathers, etc.) whereas a penguin has fewer identifiable characteristics (does not fly, etc.). Would this not suggest, then, that bird and canary are more closely related (shorter link) than bird and penguin? When the word "canary" and the concept "bird" were activated, they each "spread" to their respective features, and ultimately these overlapped to lead to the conclusion that canary belongs in the bird category.

      “Because the number of intersecting or common features generally increases with the typicality of an instance to its category, there are more opportunities for an intersection with typical than atypical instances, hence more opportunities for early termination of the process” (Collins and Loftus 1975) Can this not then support that there is a prototypical model for each kind of category?
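      A toy sketch of the spreading-activation idea (the nodes and link "lengths" below are invented; this is only a cartoon of Collins and Loftus 1975, not their model):

```python
# Invented nodes and link "lengths": shorter length = more closely related,
# so activation spreading along a short link arrives (intersects) sooner.
links = {
    ("bird", "canary"): 1,    # typical member: short link
    ("bird", "penguin"): 3,   # atypical member: long link
    ("bird", "wings"): 1,
    ("canary", "sings"): 1,
    ("penguin", "swims"): 1,
}

def verification_time(a, b):
    """Crude stand-in for 'time until the two spreading activations meet':
    here, simply the length of the direct link between the two nodes."""
    graph = {}
    for (x, y), length in links.items():
        graph.setdefault(x, []).append((y, length))
        graph.setdefault(y, []).append((x, length))
    for neighbour, length in graph.get(a, []):
        if neighbour == b:
            return length
    return None  # no direct link in this toy network

print(verification_time("bird", "canary"))   # 1 -> fast "yes, a canary is a bird"
print(verification_time("bird", "penguin"))  # 3 -> slower "yes, a penguin is a bird"
```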

  4. Part 1)
    I have a mixture of comments and questions.

    In order to have the meaning of something, we need to ground the referent (the thing out there in the world) to the sense (the thing/symbol or whatever it is in the brain). If I see an apple out there in the world, to get APPLE I need to link the thing out there in the world to the "symbol" in the brain. But how do we come to link the two? Is simply grounding the referent to its sense necessary and sufficient for meaning? It certainly seems like it's necessary (as seen in the CRA); however, I don't think it's sufficient. I think we're still missing one important component: feeling.

    I think meaning is something that only feeling organisms have the capacity for. You know you understand something because it feels like something to understand, and I definitely think the missing component for linking the sense to its referent lies in our sensorimotor capacities.

    "What a sensorimotor system can do is determined by what can be extracted from its motor interactions with its sensory input. If you lack sonar sensors, then your sensorimotor system cannot do what a bat's can do, at least not without the help of instruments. Light stimulation affords color vision for those of us with the right sensory apparatus, but not for those who are colorblind. The fact that, when we move, the "shadows" cast on our retina by nearby objects move faster than the shadows of farther objects means that, for those of us with normal vision, our visual input affords depth perception. More complicated objects, like 3-D shapes such as a chair, can be recognized as being the same shape and size even though the size and shape of their shadows on our retinas change as we move in relation to them. Their shape is said to be invariant under these sensorimotor transformations, and our visual systems can detect and extract that invariance and translate it into a visual constancy. So in order to detect objects, we need to have the appropriate sensors and the appropriate invariance-detectors." (pp. 1-2)

    I tend to agree with this! However, what I am a little unclear about is the issue of invariant features. As mentioned on p. 12, there is a difference between seeing and recognizing. Seeing requires sensorimotor equipment, but recognizing requires more: it requires the capacity to abstract, to single out some subset of the sensory input and ignore the rest. My issue is: how does our system know what constitutes a feature and what doesn't, and how does it know which features are relevant for a given object? I don't think we can simply forget or discard the "irrelevant" features and only focus on those that are relevant. Yes, objects afford certain sensorimotor interactions with them, but how does our system do that, and based on what? How does it know what parts to focus on specifically? I see some people say neural nets are a possible way of explaining how we ground symbols and learn categories; however, I am not entirely clear on how they specifically would explain anything more than a piece of T3, or how they could scale up to all of T3.
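    A toy sketch of what "invariance under sensorimotor transformation" could amount to computationally (the normalization used here is an assumption for illustration, not the paper's mechanism):

```python
# Two "retinal shadows" of the same square: one big and nearby, one small and far away.
# A description that factors out position and size stays the same across them.

def normalize(points):
    """Recenter on the centroid and rescale, so translation and size drop out."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
    return [(round(x / scale, 3), round(y / scale, 3)) for x, y in centered]

near_square = [(0, 0), (0, 10), (10, 10), (10, 0)]     # large shadow (object is close)
far_square = [(50, 50), (50, 52), (52, 52), (52, 50)]  # small shadow (object is far)

print(normalize(near_square) == normalize(far_square))  # True: the invariant is extracted
```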

    Replies
    1. Part 2 – Cont)
      I don't doubt that our system is picking out the relevant information through sensorimotor capacities. Since we use language to label concepts, I think reducing the dictionary down to the minimal grounding set is a promising idea. Getting a sense of the "primitives" could no doubt give us some important insight into this whole issue. While I don't think we have innate categories (or if we do, only a very limited few), I think we do have an innate capacity for a "locking in" mechanism, locking into the properties of a given concept. What these properties are exactly, I am not quite sure, but I would think they lie in our sensorimotor capacities. By being genetically endowed with the "locking in" capacity, our system comes to somehow pick out the relevant invariant sensorimotor features for grounding the members of the minimal grounding set. Once those are grounded, we can then start grounding other concepts through hearsay -- which I think could come to explain how we come to understand "abstract" (in the intangible sense) concepts like goodness, truth and belief.

      Sorry for the massive message, just wanted to share my thoughts.

  5. First off, I genuinely appreciated the fine level of scrutiny and deconstruction used to parse apart categorization -- I can say I have yet to give it this much thought, until now. While reading '6. Learned Categories', specifically the line "Whether it is Jerry Fodor or a boomerang, my visual system still has to be able to sort out which of its shadows are shadows of Jerry Fodor and which are shadows of a boomerang," I thought of the prism experiment used to alter the way our visual system operates: specifically, the experiment where the attached prism inverts our visual perception of the world and, in order to interact with our new environment, the visual system effectively learns that the theoretical coffee table to your left, which was once 'right-side-up', is now 'upside-down'. I think this provides interesting evidence of the keen level of specialization that occurs within our visual system and mind, and of how our mind has the ability to take such an innate quality of our world, the 'right-side-upness', and re-categorize every object we come to interact with under this new orientation -- the 'upside-downness' that is now our new 'right-side-upness'. Specifically, I find it interesting how elastic our perception is, and how innately boundless our capacity to categorize and then re-categorize appears to be, even when it comes to something as drastic as an inversion of the perceived orientation of the information entering our sensorimotor systems.

    Replies
    1. Hi Justin!

      I'm really happy you brought up Stratton's prism experiment; I'm also very interested in different forms of perceptual adaptation. I'm currently wondering how categorical perception fits into this perceptual adaptation. Was the rotation of vision occurring in parallel with the categorization or does categorization have to occur first to identify that an upside down table is indeed a table and therefore it should be rotated to fit into our category of "table" ? If that was the case, would fixating on an ACTUAL upside down table for 5 days cause us to see the table right side up? I can't say that sounds very plausible. The fact that it took him several days to adjust to the upside down vision and then several days to readjust suggests that the categorization occurred rapidly and the adaptation had to catch up and was also based on other environmental cues (like everything else being upside down too). Is knowing the timeline of these events even important? I'm not sure, but it would be helpful to see where along in the visual system this is occurring.

      I did find a paper that mentioned perceptual adaptation x categorical perception: infants experience auditory adaptation when learning language. The category separation effect occurs when they hear two syllables that cross a phoneme boundary in their language. Infants who are learning another language that does not have that phoneme boundary don't show that separation. Could we call this an example of categorical perception causing perceptual adaptation? Apparently this is to be interpreted "conservatively" since testing babies for this is a little difficult. Anyway, the paper mentions categorical perception as "useful" to perceptual adaptation but doesn't really tie the two together, which I'd like to see.

  6. This comment has been removed by the author.

  7. "categorization is any systematic differential interaction between an autonomous, adaptive sensorimotor system and its world… Categorization is accordingly not about exactly the same output occurring whenever there is exactly the same input. Categories are kinds , and categorization occurs when the same output occurs with the same kind of input”

    My understanding is that every type of input we get goes through rigorous categorization. The article mostly deals with sight: grouping patches of light signals and then further grouping objects into scenes, into foreground/background, and then categorizing every instance of every object to match the individual or group it belongs to. The rest is how I understand this would apply to the other senses. For sound it would be categorizing sound waves by timbre to determine the types of sources and to understand more complicated sound waves. For touch this might be determining textures and pressure, but would the author's view of categorization apply to pain and temperature? Clearly we categorize things into "only a little painful" and "very painful" and even different types of pain, but temperature is only a linear scale; would we still categorize it analogously to how we do for sight? Then moving on to taste and smell, which use chemoreceptors, I'm inclined to say again that we use similar methods of categorization, but does the different input receptor make a difference, since each receptor theoretically already separates categories? Can we say we are cognitively categorizing taste if our taste buds are doing it mechanically already? According to the quote above I would say we are, since similar kinds of input produce the same output to the brain.

  8. Although the title and conclusion point 30 of the paper maintain a strong stance that cognition is categorization, I believe that we can all agree that cognition is not purely categorization, but that categorization is a substantial part of it. Given the broad way the article describes categorization ("categorization is any systematic differential interaction between an autonomous, adaptive sensorimotor system and its world… categorization occurs when the same output occurs with the same kind of input"), it is quite astonishing how many capabilities this includes. A lot of our interaction with the world is based on this fundamental idea of acting on kinds of inputs to produce actions, such as, broadly, "this food is edible so I will eat it," or, more narrowly, "cashews fall into the category of tree nuts, to which I am allergic, so I will not eat them."

    In trying to wrap my head around what exactly cognition is and where we should go from there, the main idea I came away with from reading this article is that cognition = categorization + the ability to do continuous actions + feeling. We will disregard feeling in this instance, as we have discussed previously, so our goal currently for reverse engineering is T3 capacity, for which we need categorization and the ability to interact continuously with the world by judging/assessing the similarities/differences between categories. In class we alluded to the fact that categorization is a discrete process of abstraction, in which we make use of some features and ignore the ones that are not relevant. In this way, even looking at the same object, like our yellow apple Charlie, we are able to abstract differently. That being said, what happens when doing the right thing with the same input changes with context? Much of our appropriate response to an input depends on context, so would we have to create another category in that different situation? Or would context be another feature that we abstract in connection with the object or organism?

  9. "Categorization are kinds"
    - What is the distinction of category and kinds?

    - So is there really such a thing as "miscategorization," given that categories vary with context? Is miscategorization based on the fact that we are all able to abstract (considering Funes doesn't exist), such that an error may occur due to the wrong abstraction?

    - I want to know if I got the definition right: features are sensorimotor inputs? Then what does it mean by "features themselves are things too"? Please elaborate.

    - Since categorization is both relative and absolute (kind of confused by this), then is CP absolute or relative?

    - Is there a "limit" to abstraction? Is it possible to overdetermine or underdetermine an abstraction or is it an absolute "thing"?

    Replies
    1. A category is a group of objects that have been grouped according to some property or properties. Kinds are what make up the category, and there are two types of kinds, according to a reading by Smith that I did in another class. There are natural kinds, which are naturally occurring things in the world (e.g. tigers, flowers, minerals), and artifact kinds, which are man-made objects such as chairs, socks, tables, etc. So, for example, a category would be something like furniture, and an (artifact) kind would be chair.

    2. So kinds are objects that make up a category, and there are two classifications, artificial or natural; but then when the paper talks about objects of different kinds or similar kinds depending on the invariant features shared/not shared among the objects within-category and between-category, what does "kinds" mean in that case? If kinds are only distinguished as natural or artificial, then the invariant features shared would just depend on whether the objects grouped together are natural or artificial, which doesn't say much about a category. The way the usage of the word "kinds" varies in specificity/generality in this paper is what is throwing me off.

    3. Hi guys,

      This is confusing me as well, can anyone clarify this?

  10. In this article, there is a focus on our learning to make categories and to make distinctions. There are different theories describing how categories are formed and how distinctions come to be established. A fact that would be interesting to consider in addition to what is mentioned in the article is that children often make more distinctions than adults do and then, as they grow up, stop making these distinctions. Their categories are therefore more numerous and regroup fewer things, since children's categories are less broad than adult ones for certain things. For example, children exposed only to English, if they are not presented before some critical period with other languages in which nasalization matters, grow up failing to notice the difference between nasal and non-nasal vowels, even though they were able to do so as infants and their sensory apparatus would allow them to differentiate between these vowels. In this case, it seems that there is a kind of categorization of categories that eliminates some categories that are not useful for everyday life.
    A similar argument can be applied to colours. In the article it is said that although colour perception is innate, colour perception is categorization due to the adaptive consequences of Darwinian evolution. I agree that colour perception is a form of categorization; however, I have trouble accepting that it has such a strong component of Darwinian evolution to it (apart from our being able to detect wavelengths within a certain range). It seems to me, rather, that human beings, just as in the case of speech sounds (mentioned in the previous paragraph), are able to differentiate many more colours (have more categories for colour); however, when children notice that colours can be grouped into broader colour groups (such as green, yellow, …) without any big consequences, they stick to those categories. Colour categorization also cannot be purely an effect of Darwinian evolution, as many different cultures have different categories and cut-offs for different colours.
    Another concept that I have some difficulty with in this article is the "vanishing intersections problem" (I am also not really sure I understand it or grasp what it is trying to say). However, with the example given -- if you look up words in the dictionary, there is no intersection between the meanings of these words -- the way I understand it, it seems that the problem does not exist. It is easily defeated by a counter-example. For example, if you take a short-legged table, it can be considered either a table or a bench, depending on the context. There is therefore some intersection between the words "table" and "bench".

  11. I really enjoy how Harnad addresses what my intuition was about categorization, which is that if we found an algorithm capable of perfectly categorizing, we could seemingly say we have reverse-engineered cognition.
    This, combined with the following passage from page 9 of the article, struck me: "Not only are all things the members of an infinite number of different categories, but each of their features, and combinations of features, is a potential basis (affordance) for assigning them to still more categories."

    When you think of just how many categories something can belong to it makes it more interesting that certain programs and algorithms can categorize effectively and semantically, because many objects are a part of a category for reasons that are not evident in their outward or exhibited properties but more in their relations to other members of the category.

  12. “Dynamical systems are systems that change in time…and categorization refers to a special kind of dynamical system. Categorization too will have to have something to do with changes across time.”

    This is referring to the fact that the same input does not always produce the same output, and things look different and are changing; e.g., in the example with sand, it is always in a different configuration. We can even think about humans in this sense. Take me for example: at the age of 18, just four years ago, I looked a bit different, had shorter hair, etc., and I was still the kind "Jordana Saks" of the category "human". Now, at 21, my hair is darker and longer and I look more mature. I am still "Jordana Saks" of the category "human".

    So, it is a very interesting question of whether dynamic systems are still able to be categories. From the information I’ve gained reading this paper, it’s because of our perception and because of invariance to orientation and configurations that even when things change we can still categorize them.
    On a different note about things changing: society is always changing and inventing new things. With all the new technology and new things that come about each day, we have new CATEGORIES. For example, 10 years ago "phone apps" would not have been a well-known category to people. Now, however, if you had to name five kinds in the category of phone apps I'm sure you could easily say… Shazam, Facebook, Instagram, Twitter, Spotify, Uber, etc. In this way, categorization is definitely not entirely innate and must adapt to changes in society as new categories are created all the time and our brains must learn to create and establish new categories.

    Replies
    1. "So, it is a very interesting question of whether dynamic systems are still able to be categories. From the information I’ve gained reading this paper, it’s because of our perception and because of invariance to orientation and configurations that even when things change we can still categorize them."

      I agree with you wholeheartedly here. At the risk of sounding redundant (and for the purposes of solidifying my own understanding), I'd say that it is 100% possible to categorize dynamic systems. (I don't know that the systems themselves can BE categories, but I think you meant can BE CATEGORIZED so I'll go with that.) Abstraction is the name of the game of course, because we are able to pick out certain features (the ones that don't change - the invariant ones ) and focus on them in order to lump even dynamic systems together.

      "In this way, categorization is definitely not entirely innate and must adapt to changes in society as new categories are created all the time and our brains must learn to create and establish new categories."

      As far as this, and your discussion about apps, I think you've raised an interesting point. With the advent of new technology, we are given new 'things' to categorize. It would seem that because we are so good at categorizing and selectively attending to the features that distinguish members of a category, we would be able to do the same thing with new technology. We could easily learn a new category I think, and would need only give it an arbitrary label like "app". I think our innate capacity to categorize makes us especially good at organizing new information, and creating new categories to accommodate. The actual language symbol we give a new category is trivial compared to this.

  13. The difference between supervised and unsupervised categorization is that the former is guided by trial and error corrective feedback, whereas the latter is based on the physical similarities and differences between things due to the way in which they reflect light or cast shadows on our retina. And of course, the essence of categorization is to allow us to "do the right thing with the right kind of thing".

    I have two questions/comments regarding this:

    1. How do you know if you are categorizing something via supervised or unsupervised methods? I tried to answer my own question and thought that if I sat down and tried to distinguish the sex of chicks just by repeatedly staring at them (i.e. unsupervised learning only), I would probably go in endless circles and not be able to complete the task without feedback/guidance. But is this the best answer? Say I have never come into contact with a black widow spider, nor have I ever seen a picture of it. Until the moment I either come into physical contact with it (and visually observe its distinguishing features) OR am guided to categorize it via reinforcement learning, what is the status of the black widow spider? Is it uncategorized? If so, what role does it play in cognition?

    Cognition is not all categorization because of course we do things not only for the sake of "doing the right thing with the right kind of thing", but also for no apparent reason at all - just for the sake of it (like art, and staring at a scenic view for hours on end). But where does this black widow spider fit in to my picture of cognition - is it excluded from cognition until 'further notice'?

    2. How is categorization represented neurologically? Do all members of a category share a certain pattern of neural activation? If so, consider Greebles -- objects designed to look nothing like human faces, but which, since they mimic the geometrical configuration of faces, light up the fusiform face area just as human faces do. Yet when asked, people surely won't consider these funny-looking objects to be faces.

    Replies
    1. Hi Maya,

      I definitely can't answer all your questions as I know nothing about the neurological representation of categories, but I wanted to address one of your points.

      "Cognition is not all categorization because of course we do things not only for the sake of "doing the right thing with the right kind of thing", but also for no apparent reason at all... like art, or staring at a scenic view for hours on end"

      I would argue in response to this that 'doing the right thing with the right kind of thing' does not necessitate that the 'doing' is a willful, instrumental act. I would actually say that staring is exactly the right thing to do with the category of "scenic view" (among other 'right things' such as taking a picture; exclaiming 'Wow, beautiful' to your friend; or simply walking away if nature's beauty isn't your cup of tea).

      It's not that we stand at the top of a mountain range and think "Now I must stare for a certain amount of time because that is the correct action." Rather, I would imagine that the 'thing' we're seeing, beyond just falling under the category 'scenic view', would also be associated with categories like 'beautiful', 'pleasurable', etc. and all the positive affective responses that accompany these -- all of which would contribute to a desire to stare at it for a long time. I'm not sure if that explanation makes sense, but I'm just trying to clarify that the definition of categorization we're discussing doesn't imply a carefully calculated and volitional act -- it's simply the brain recognizing a 'kind' correctly and responding in the way one has learned is right, based on previous learning, either supervised or unsupervised.

    2. Hey Maya,

      I really like your questions about the black widow spider, because if I'm understanding your question correctly, this ties in directly with what makes language so special, which we learned later in the course.

      "Say I have never come into contact with a black widow spider, nor have I ever seen a picture of it. Until the moment I either come into physical contact with it (and visually observe its distinguishing features) OR am guided to categorize it via reinforcement learning, what is the status of the black widow spider? Is it uncategorized?"

      Here we see that the pieces of the puzzle have kind of fallen into place. Language by its very nature is propositional -- that is, it allows its users to make propositional statements about kinds of things (categories). The fact that you can use the category name "black widow spider" suggests to me that you have heard about it elsewhere, which means that you probably learned the black widow category as a composite of other categories that you already know (i.e., black and red, eight legs, many eyes, makes webs...). Hence, you have the black widow category through hearsay, through language, through instruction. As such you needn't visually observe it, interact with it, or otherwise learn about it through firsthand experience, since you already possess the capacity to categorize black widows!

      This also ties in handily to adaptive theories of language. By learning a category through instruction rather than induction or firsthand experience, there is no longer any reason for you to tangle with a dangerous animal, since you've already acquired a category. Not only is this efficient, but it also means that the danger is minimized for you.
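      A minimal sketch of acquiring a category purely through instruction, by composing already-grounded feature categories (the feature names and values here are invented for illustration, not taken from the paper):

```python
# Already-grounded feature categories (hypothetical groundings):
def is_black(x):        return x.get("colour") == "black"
def has_red_marking(x): return "red hourglass" in x.get("markings", [])
def has_eight_legs(x):  return x.get("legs") == 8
def spins_webs(x):      return x.get("webs", False)

# The "hearsay": a black widow is a black, eight-legged, web-spinning thing
# with a red hourglass marking. The new category is just a Boolean composition
# of categories you already have.
def is_black_widow(x):
    return is_black(x) and has_red_marking(x) and has_eight_legs(x) and spins_webs(x)

critter = {"colour": "black", "markings": ["red hourglass"], "legs": 8, "webs": True}
print(is_black_widow(critter))  # True -- categorized without ever having met one
```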

  14. This comment has been removed by the author.

  15. I found the article really interesting and it sparked many questions in my mind about the relationship between incoming visual information, memory encoding and categorisation.

    There are examples of optical illusions in which there are multiple ways of seeing the same image. Often people only see one interpretation. But it is possible to describe in words a different interpretation (a different way of 'categorising' the image) and suddenly the picture takes on a whole different meaning and your brain keeps switching between the two options. What I find fascinating is that once you have been alerted to the other image, there is no way of 'unseeing' it and returning to a single interpretation. Even after a long delay, the brain seems to remember both ways of looking at the illusion, even if the illusion has only been seen once.

    This finding makes me wonder whether the brain stores, encodes and uses more information about single retinal images than we think. Perhaps it is just that most of the memory of specific moments is inaccessible to the conscious mind, but more details are processed and used by the brain to gradually build a 'database' of previous visual experiences. The issue with patients such as the one described by Luria is that the information becomes more consciously accessible and thus interferes with normal processing. How is previous visual experience used to abstract incoming information into categories? Are there types of visual information that are only processed subconsciously and are never accessible to human memory? Our experience of the world is shaped by the consciously accessible parts of vision, but perhaps there is more information taken from our environment which is not translated into 'sight'.

    But it is also remarkable how quickly we can construct new categories of objects from a single visual experience. For instance, if I see a picture of one animal I have never seen before and someone tells me what it is called, I will immediately be able to use the information to categorise and generalise about all members of that species. Is this an acquired talent? Do we get more adept at creating new categories the older we get and the more categories we have previously constructed? Or is it an innate human ability that stays stable across our lifetimes?

  16. I have several questions/clarifications this week.

    “Categorization problem is…. How it is that sensorimotor systems like ourselves manage to detect those kinds that they can and do detect: how they manage to respond differentially to them”.
    In other words, categorization is doing the right thing with the right kind of thing.
    Assuming this is correct, this means we have to have some sort of "thing" to do something with. This is fine when we are categorizing concrete things like apples, chairs or primroses. But can the "thing" we are doing something with be an internal input, like a desire? For example, I want to kick my leg. This input (wanting) is the kind of thing that sets off a bunch of molecular processes to contract/lengthen various muscles in my leg so that I can do something -- kicking. Can a person's desire be a category?

    “Recognizing is special, because it is not just a passive sensory event. When we recognize something, we see it as a kind of thing (for an individual) that we have seen before. And it is a small step from recognizing a thing as a kind of an individual to giving it a name.”
    How do categorization and recognizing differ? Is it that categorization necessitates naming and recognizing doesn't? In that sense, does recognizing precede categorizing?

    “The reason is that between black and white there is no innate category boundary, whereas between green and blue there is”
    In this section we are discussing categorical perception: the idea that there is a relatively fixed boundary in our minds, on one side of which is one category and on the other side another. At the boundary, the category is indeterminate -- it cannot be decided whether the "thing" we are looking at belongs on one side or the other. The author argues that this is because there is "no innate category boundary" between black and white, but I propose a different explanation for why there is confusion at this midpoint.
    It's very simple: another category either exists or has been learned for the midpoint, and that category is grey. We would see shades compressed and perceived as black, separated by an expanded range of shades at a midpoint that is grey-black, then a compressed range of shades for grey, followed by an expanded region of a few shades for grey-white, and then a compressed range of shades for white. I argue that instead of struggling with a black-white midpoint, one would struggle with a grey-black midpoint and a grey-white midpoint. So while I agree with the general concept of CP and indeterminate midpoints to an extent, I disagree that there is no innate category boundary between black and white. Instead I think there is a category boundary (whether it is learned or innate I don't know) between black and grey and between white and grey.
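    A toy sketch of within-category compression and between-category expansion around a learned boundary (the "pull toward the prototype" model here is an assumption for illustration, not the paper's account of CP):

```python
# Physical values run from 0 to 1; a learned boundary sits at 0.5, with a
# prototype on each side. Perception pulls each value part-way toward the
# prototype of its own side.

def perceived(x, boundary=0.5, pull=0.5):
    prototype = 0.25 if x < boundary else 0.75
    return x + pull * (prototype - x)

within = abs(perceived(0.30) - perceived(0.45))    # both on the same side
between = abs(perceived(0.45) - perceived(0.55))   # straddles the boundary

print(round(within, 3), round(between, 3))  # 0.075 0.3
# The physical differences were 0.15 (within) and 0.10 (between), but the
# perceived ones come out 0.075 and 0.30: equal-ish physical steps "shrink"
# inside a category and "stretch" across the boundary.
```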

    Replies
    1. Hey Renuka, you ask some great questions. I'm a little stumped by them myself, but I'll take a swing at a few anyways.

      Can a person’s desire be a category?

      Not entirely sure on this one. In your leg-kicking example, I would say the desire to kick your leg is simply a response to a categorizing event. You must want to kick your leg for some reason (say, an itch), so when you correctly categorize the sensorimotor disturbance on your leg as an itch, you decide to kick your leg in response. Past experience has taught you that kicking your leg is the way to relieve that itch (or whatever else is going on), so you do. However, I'm not sure how this plays out with long-term goals, such as "I want to go to the Grand Canyon." Maybe you are simply doing the same thing as with the leg kicking, just with feelings: your desire to go to the Grand Canyon is simply a response to a "lack of adventure" feeling you have that, because of how you categorize it, you believe the Grand Canyon will alleviate.

      How do categorization and recognizing differ? Is it that categorization necessitates naming and recognizing doesn’t? In that sense, does recognizing precede categorizing?

      To start, recognizing is identifying the thing you're seeing as something you have seen before, while categorizing is the act of picking out the invariant features that allow you to do that. Neither categorizing nor recognizing requires naming. Finally Dr. Harnad hints at your last question on page 11 with “recognizing requires more. It requires the capacity to abstract,” and again on page 19 with “Categorization Is Abstraction.” Therefore, recognizing requires categorization. So it is actually categorizing that precedes (and enables) recognizing.

      Instead I think there is a category boundary (whether it is learned or innate I don’t know) between black and grey and between white and grey.

      I had similar thoughts and am confused on this issue as well. Although I think the problem with dividing black and white into more categories like that is you're starting down the slippery slope of infinite regress. Just like you don't think the black-white midpoint is fair, a gray-white color aficionado might not find the gray-white midpoint fair, instead arguing that category should be broken down into a white-light gray category and a light gray-gray category. So on and so forth forever and ever. However, this seems very similar to the issue of colors like gray and blue. Maybe the difference is that experimental data (like the within category compressions and between category expansions shown in Figure 2) has empirically proved that a blue-green category boundary exists while a black-white one does not. I don't know though. I could use some clarification on this as well.

      Delete
  17. Just some small questions/comments concerning the discussion on innate categories.

    1. “All evidence suggests that most of our categories are learned. To get a sense of this, open a dictionary at random and pick out a half dozen “content” words (skipping function words such as “if,” “not” or “the”). What you will find is nouns, verbs, adjectives and adverbs all designating categories (kinds of objects, events, states, features, actions). The question to ask yourself is: Was I born knowing what are and are not in these categories, or did I have to learn it?”

    Even if we agree that we are probably not born “knowing what are and are not in these categories” (otherwise we would all have the same lexicon, e.g., “apple” would be a noun and would refer to a specific type of fruit), does that rule out the possibility of innate categories? For example, even if we are not born knowing that “apple” is a noun, we could still be born with categories for parts of speech (verbs, nouns, adjectives, adverbs, etc.) that we have to fill with words as we learn.


    2. “Fodor and others have sometimes suggested otherwise: They have suggested that one of the reasons most categories can be neither learned nor evolved (and hence must be “innate” in some deeper sense than merely being a Darwinian adaptation) is the “vanishing intersections” problem: If you go back to the dictionary again, pick some content words, and then look for the “invariance” shared by all the sensory shadows of just about any of the things designated by those words, you will find there is none: their “intersection” is empty.”

    Again, this concerns innate categories. It’s not really about supporting the argument that is discussed, but about understanding its implications. If we assume that the argument is correct, is the conclusion that we would have innate categories even for things like “table”? But how could we be born with categories for things that have not always existed? A table does not occur naturally; there are no tables to be found in the wild, so how could such a category, and others of the same sort, be innate?

    ReplyDelete
    Replies
    1. Hi Hernan,

      Regarding your first question: I don’t see how “parts of speech” would constitute a different example of a category. For example, learning verbs and nouns would consist of different instances of symbol grounding. All the symbols would be grounded in different sets of concepts (like nouns being for people, places, things, ideas, etc., and verbs being for actions). So I don’t think there is a need to invoke innate categories like verbs or nouns (in a semantic sense anyway - grammar is a completely different story and I don’t know enough about it to discuss it).

      Regarding your second point: I completely agree with you. In fact, you could take your point even further. Why would we be born with concepts for things that don’t exist at all, like unicorns or Pegasus?

      Delete
  18. Most of my comment is based on the Appendix 1: There is nothing wrong with the “classical theory” of categorization and on section 15. Direct Sensorimotor Invariants.

    “Eleanor Rosch has suggested that because we cannot state the invariant basis on which we categorize, that invariance must not exist”. For me, this isn’t a valid argument. The fact that we cannot state the invariant basis on which we categorize is simply a fact about cognition. If we were able to do so, wouldn’t we have solved what cognition is all about? The invariant basis of a category such as “truth” isn’t always explainable. It could be the case that what makes something a “truth” isn’t the same for everyone, and isn’t the same depending on the context. Not being able to expose the invariant basis doesn’t mean the invariance doesn’t exist.

    “To categorize on the basis of prototypes would be to identify a bird as a bird because it looks more like the template for a typical bird than the template for a typical fish”. This prototype alternative supposes there is something like a template. But there is none. It seems strange to me that the proposed alternative would suppose the existence of a template and would, on the other hand, deny the existence of invariance in categorization. The way I interpret Rosch’s categorization proposal is that by seeing an instance of a bird, one would have to compare it to an ideal bird that exists in one’s head. To do so, wouldn’t the person have to compare it to every ideal model of every other category? It seems it would be an inefficient and exhaustive mechanism for categorization. Harnad’s response states that: “Template-matching is not very successful among the many candidate machine-learning models, and one of the reasons it is unsuccessful is that it is simply not the case that everything is a member of every category, to different degrees”. I would certainly like some clarification on that statement. I understand that everything is not a member of every category (categorization is more discrete, right?). What I am missing is by what mechanism template-matching implies this.
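    To make my question concrete, here is a toy contrast as I understand it (my own sketch, not Harnad’s; the “bird” and “fish” prototypes, the feature vectors and the feather rule are all invented). Prototype matching hands every item a graded similarity to every template, so everything ends up a member of every category to some degree, whereas an invariant-feature rule gives an all-or-none answer:

    import math

    prototypes = {                      # invented feature vectors: (has_feathers, lives_in_water)
        "bird": (1.0, 0.1),
        "fish": (0.0, 1.0),
    }

    def similarity(item, prototype):
        """Graded similarity: the closer to the template, the higher the score."""
        return 1.0 / (1.0 + math.dist(item, prototype))

    def prototype_match(item):
        """Everything gets *some* degree of membership in *every* category."""
        return {name: round(similarity(item, p), 2) for name, p in prototypes.items()}

    def invariant_rule(item):
        """All-or-none: the invariant feature (feathers) settles it outright."""
        has_feathers, _ = item
        return "bird" if has_feathers > 0.5 else "fish"

    penguin = (1.0, 0.8)                # feathered but aquatic
    print(prototype_match(penguin))     # graded membership in both bird and fish
    print(invariant_rule(penguin))      # "bird", full stop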

    ReplyDelete
  19. “Cognition is Categorization”

    and

    “Categorization is Abstraction”

    therefore

    Cognition is Abstraction

    This, I think, is a very interesting and satisfying outcome. In a sense, cognitive systems are distinguished from other dynamical systems in a negative way, by what they leave out. Cognition works to filter the infinite domain of possible inputs into a finite range of possible outputs. These outputs correspond to the range of things that a cognitive system can do. So what allows cognitive systems to do what they can do is precisely the capacity to abstract by neglecting irrelevant specifics and homing in on the behaviorally important details that the input ‘affords.’

    ReplyDelete

  20. I was most intrigued by the notion that we can learn from either theft (hearsay) or toil. This article, and I believe others by Harnad, carefully stays far away from mentioning the social. Except of course in this instance. Without the social, and without evolution, we might be (at one stage) nothing more than edge-detection systems. What makes us distinct as humans is our ability to use social language; describing cognition as categorization then becomes a powerful tool that is hard to escape from. The theory doesn’t, however, seem to fully take into account how categories are created in a social manner, which is likely relevant. We cannot work simply from an individual account, considering that we interact and cognize within the context of our social world.

    Finally, I ask if there can be a conscious “failure to categorize”; the paper (and comments in the previous class) seem to imply that categorization occurs at every moment. Given a borderline case, perhaps being asked to categorize an ambiguous image, I can feel incapable of doing so. I assume this can be answered by creating the new category of “well, I just don’t know”. But when we fail to comprehend a scene or an emotion, and categorization is not occurring, does this then mean that cognition is not occurring? What about those with visual agnosia? They are failing to recognize an object, failing to categorize it, yet we still say that they are cognizing.

    ReplyDelete
  21. Hello friends! My biggest source of confusion in this paper was understanding the vanishing intersections problem and why it was incorrect. However, the later explanation of Biederman and his success at teaching chicken-sexing helped illustrate that point, so I'm going to piece together my thoughts on the issue and make sure I got this right. Let me know where I lose you.

    They have suggested that one of the reasons most categories can be neither learned nor evolved ... is the “vanishing intersections” problem: If you go back to the dictionary again, pick some content words, and then look for the “invariance” shared by all the sensory shadows of just about any of the things designated by those words, you will find there is none: their “intersection” is empty. (page 10)

    Okay, that seems fair. If there are no common features, I guess we haven't really categorized these things. The universe has, and what we are actually detecting is the category inherent to that thing as a consequence of existing. But, as Dr. Harnad points out:

    since we do manage to categorize correctly all those things designated by our dictionaries, there is indeed a capacity of ours that needs to be accounted for (page 11)

    So on the other hand, another fair point. Boomerangs and tables may not appear to have common features on the basis of their sensory shadows alone, but they must share some features. For example, they are both objects in space. Furthermore, we can categorize them, so there must be something allowing us to do so. But what I missed at first is that this “something” is the set of invariant features of the objects themselves. Dr. Harnad's final words of the section confirm this idea:

    there must be enough in those shadows to afford all of our categorization capacity. (page 11)

    Even if we cannot explicitly state what invariant features allow us to correctly categorize things like newborn chicks, it is silly to assume they do not exist. We can tell the chicks apart, so there must be some feature allowing us to tell them apart, even if the grandmasters can't tell us exactly what that feature is. And ultimately, that's what Biederman proved (and why the vanishing intersections problem is a non-starter): although the invariant features may be “ineffable,” they are not necessarily “invisible” (page 18). They must exist, because we categorize on the basis of invariant features, and we correctly categorize the chicks. That's why Dr. Harnad's quote above really hits the nail on the head: those sensory shadows alone allow us to categorize, even if our sensorimotor abilities to detect them exceed our vocalmotor abilities to elucidate them.

    ReplyDelete
    Replies
    1. Hi Alex,

      I think you explained this quite well. The idea you’re getting at is similar to a number of the questions I had last week. I think it’s also helpful to remind ourselves that when we are talking about categorizing, we aren’t doing metaphysics. This is to say that we aren’t trying to see what the fundamental substance (or set of features) is for an object and what it is that makes it that object and not another. Instead, we are trying to explain how we perceive or create the category. The two are profoundly different and I think this clears up a bit of confusion. So I feel like the vanishing intersections problem is also a non-starter for this reason.

      Like you pointed out, the fact is that we are capable of categorizing. Given that we deem categorizing to be a cognitive capacity (or to be cognition itself), and ascribing that ability to magic is unsatisfactory, there surely exists a way that it takes place.

      As a side note, the other issue I have with the nativist thesis is that we clearly did learn a number of categories. Why in the world would we be born with categories that serve no possible adaptive function like “unicorn” or “Pegasus”?

      Delete
  22. I found the "Feature Selection and Weighting" aspect of this article very interesting:

    "The only reason it looks as if the ducklings are more similar to one another than to the swanlet is that our visual system "weights" certain features more heavily than others -- in other words, it is selective, it abstracts certain features as privileged. For if all features are given equal weight and there are, say, two ducklings and a swanlet, in the spatial position D1, S, D2, then although D1 and D2 do share the feature that they are both yellow, and S is not, it is equally true that D1 and S share the feature that they are both to the left of D2 spatially, a feature they do not share with D2. Watanabe pointed out that if we made a list of all the (physical and logical) features of D1, D2, and S, and we did not preferentially weight any of the features relative to the others, then S would share exactly as many features with D1 as D1 shared with D2 (and as D2 shared with S). "

    Obviously, the things we see ARE weighted differently. The swan's whiteness and the ducks' yellowness are weighted more heavily than the distance from the swan to the first duck, therefore we remember the colours more than we remember the birds' spatial orientations with respect to one another. This then allows us to categorize the swan as different from the ducks. But I was wondering why it was that we weight the colours more heavily than the spatial orientation? This seems to indicate that we, as humans, are more inclined to search for incongruence rather than for similarity. Objectively, there is no reason for us to prioritize the colours over the birds' spatial orientations, so why do we do that? Is it because we, on some level, know that their colours are less subject to change than are the spatial orientations? And, if that is the case, how do we know that? I'm wondering if that is linked to hearsay.... Knowing that, generally speaking, animals don't change colours, but it is easy for organisms to move around sporadically. Or, perhaps this is an issue of exposure. If one had ONLY ever been exposed to chameleons (in terms of non-human organisms) and vegetative paraplegics (in terms of humans), and then was asked to categorize D1, S, and D2, I wonder if that individual would categorize D1 and S as part of the same group, and D2 as being the outlier (assuming that the individual were accustomed to seeing animals change colour and humans unable to move freely). Perhaps I haven't explained my point with as much clarity as I had hoped, but you can pick up on the general idea. It's just interesting to me that, to us, it seems so obvious that of course the swan is the outlier and the ducks are in the same group, even though "Watanabe pointed out that if we made a list of all the (physical and logical) features of D1, D2, and S, and we did not preferentially weight any of the features relative to the others, then S would share exactly as many features with D1 as D1 shared with D2 (and as D2 shared with S). "
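    To pin down what I'm puzzling over, here is a toy version of the weighting point (my own sketch, not from the paper; Watanabe's full theorem needs the complete, in principle infinite, feature list, and I am only using the two features named in the quote). With the features unweighted, D1 is exactly as similar to the swanlet as to the other duckling; weighting colour more heavily is what breaks the tie:

    D1 = {"yellow": True,  "left_of_D2": True}
    D2 = {"yellow": True,  "left_of_D2": False}
    S  = {"yellow": False, "left_of_D2": True}

    def similarity(a, b, weights):
        """Count the features two items agree on, each scaled by its weight."""
        return sum(w for feature, w in weights.items() if a[feature] == b[feature])

    unweighted = {"yellow": 1, "left_of_D2": 1}
    print(similarity(D1, D2, unweighted), similarity(D1, S, unweighted))      # 1 1 -> a tie

    colour_heavy = {"yellow": 3, "left_of_D2": 1}                             # invented weighting
    print(similarity(D1, D2, colour_heavy), similarity(D1, S, colour_heavy))  # 3 1 -> the ducklings group together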

    It's interesting to think about how the fictional Funes would categorize that group of birds... Or if he would categorize it one way once and then a different way if he had to categorize it a week later. If he were able to remember absolutely everything, would his categorization of D1, S, and D2 change each time? Would he weight things differently every time? Would that be from accumulated experience over the course of a week?

    ReplyDelete
    Replies
    1. Perhaps our more heavily weighted color discrimination is because color categories are innate in the sense that they're related to the direct wavelength-to-retina input? Our brains maybe deem this process to be a trusted way of discriminating things (even related maybe to safety or survival?) I don't know though, it sounds like I'm personifying the brain here which seems tricky. I'm thinking that maybe we rely on some primary detection first, like colors, that don't need to be learned as different from one another and then maybe things like actions or demeanors come as secondary or tertiary features for categorization? I'm also not sure I'm getting my point across as I want to, but maybe you can see at least what I'm trying to get at with an idea of (I want to refrain from saying 'hierarchy') more easily available/innate discriminations versus learned ones.

      Delete
    2. Also.. do you think our brains have a process of primary, secondary, tertiary feature recognition when it comes to categorization? If we can't figure something out or place it in a category then we move onto another possible feature? Or is it all in one go?

      Delete
  23. “Borges portrayed Funes as having difficulties in grasping abstractions, yet if he had really had the infinite memory and incapacity for selective forgetting that Borges ascribed to him, Funes should have been unable to speak at all, for our words all pick out categories based on abstraction. He should not have been able to grasp the concept of a dog, let alone any particular dog, or anything else, whether an individual or a kind. He should have been unable to name numbers, even with proper names, for a numerosity (or a numeral shape) is itself an abstraction. There should be the same problem of recognizing either a numerosity or numeral as being the same numerosity (numeral) on another occasion as there was in recognizing a dog as the same dog, or as a dog at all.”

    After reading this article, I feel a bit unclear about the term abstraction, and what it entails. From what I understand, abstraction is a way of simplifying the information we take in, by selecting only a part of it that is most important, given a specific context. What is the difference between abstraction and selective attention then? Human attention capacity is limited, and we can only pay attention to some parts of our environment at a time. Is this considered another form of abstraction?

    From what I understand about recognition at a neuronal level, when we recognize objects, we first use very specific neurons to gather tiny details of a visual scene, and they send messages to less specific neurons that group “info” from their messages. Inputs are gradually combined until certain neurons in our temporal lobe fire to an entire object. Would this be considered another type of abstraction, but at the neuronal level? The type of “abstraction” I mentioned in the previous paragraph is what I understood when the article defined the term, but reading about Luria’s patient with the unlimited memory made me understand abstraction more as the second example I gave. Could both be considered abstraction, or am I misunderstanding the term?
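    Here is a toy sketch of the hierarchical combining I am describing (my own illustration, not from the article or from real neurophysiology; the features and thresholds are invented): low-level detectors each respond to one small detail, and a higher-level unit fires when enough of its inputs fire, ignoring exactly which ones did. If that "ignoring the specifics while keeping the kind" counts as abstraction, then maybe both of my examples are the same thing at different scales:

    image_details = {"ear_curve": 1, "whisker_line": 1, "wet_nose_spot": 1, "wing_edge": 0}

    def detector(feature):
        """A 'specific neuron': responds to one tiny detail of the scene."""
        return image_details[feature]

    def pool(features, threshold):
        """A 'less specific neuron': fires if enough of its input detectors fire."""
        return int(sum(detector(f) for f in features) >= threshold)

    cat_face = pool(["ear_curve", "whisker_line", "wet_nose_spot"], threshold=2)
    bird_shape = pool(["wing_edge", "ear_curve"], threshold=2)
    print("cat unit:", cat_face, " bird unit:", bird_shape)   # 1 and 0

    # The pooling is the abstraction: the top unit keeps only "enough cat-like
    # details fired" and throws away exactly which ones fired and where.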

    ReplyDelete
  24. I would like to make sure I understand this point: categorization is discrete (rather than continuous) and functions computationally, but based on inputs that have been acquired dynamically (by our/a robot's interactive capacity with the world)?


    ReplyDelete
  25. “Watanabe’s (1985) “Ugly Duckling Theorem” captures the same insight [that an organism must be able to selectively detect invariants and ignore the rest of the variation]. He describes how, considered only logically, there is no basis for saying that the “ugly duckling” -- the odd swanlet among the several ducklings in the Hans Christian Anderson fable -- can be said to be any less similar to any of the ducklings than the ducklings are to one another. The only reason it looks as if the ducklings are more similar to one another than to the swanlet is that our visual system “weights” certain features more heavily than others -- in other words, it is selective, it abstracts certain features as privileged.”

    “This is an exact analogue of Borges’s and Luria’s memory effect, for the feature list is in fact infinite (it includes either/or features too, as well as negative ones, such as “not bigger than a breadbox,” not double, not triple, etc.), so unless some features are arbitrarily selected and given extra weight, everything is equally (and infinitely) similar to everything else.”

    I think that Watanabe’s (1985) Ugly Duckling Theorem is fascinating. I’m curious about how the fundamental concepts underlying this Theorem may be compared/integrated with the unsupervised learning mechanism of “reciprocal inhibition”. Harnad (2005) describes reciprocal inhibition as an example of an unsupervised, contrast-enhancing mechanism: shadows cast by real-world objects are detected based on inputs to the visual system, organized based on structural similarities and dissimilarities (covariance and invariance), and mentally sorted in a way that enhances both the contrasts between and likenesses within distinct structures in visual space. This type of system would be impracticable if it were all the result of supervised learning; “it is unlikely that our visual systems learned to do this on the basis of having had error-corrective feedback from sensorimotor interactions with samples of endless possible combinations of scenes and their shadows” (Harnad, 2005).

    In the case of the “ugly duckling” and the other ducklings (I’m going to call them “nonugly” ducklings, sorry little guy), Watanabe says there is no logical basis for saying there is more or less similarity between any of the ducklings and the ugly duckling, or between any pairing of two ducklings at all, ugly or nonugly, because there are infinitely many features to describe each duckling. We know we can only detect certain perceptual features, and that these features may be interpreted as more or less salient than others (e.g. colour versus number of feathers).

    How do we explain the relationship between the nonugly ducklings and the ugly duckling (i.e. the grey zone between duckling and non-duckling)? What features of ducklings are interpreted as more salient and why? The within-category enhancement of similarity (all ducklings) and the between-category enhancement of differences (nonugly versus ugly) seem to work in a mechanism similar to reciprocal inhibition: the nonugly ducklings look more similar to one another and the ugly duckling thus seems especially ugly in comparison. We recognize a sameness, but we also recognize features salient enough to distinguish the ugly duckling as ugly (ugly being a categorical designation largely based on hearsay, like in the case of “goodness, truth, and beauty” (Harnad, 2005)). What would govern these arbitrary distinctions when all we have is a reciprocal inhibition mechanism to rely on?
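    To make sure I'm picturing the mechanism right, here is a minimal sketch of contrast enhancement by mutual inhibition (my own toy version of the textbook lateral-inhibition idea, not the specific model in Harnad 2005; the inhibition weight and the input values are made up). Each unit subtracts a fraction of its neighbours' activity, which flattens differences inside a uniform region and exaggerates them right at a boundary:

    def inhibit(signal, weight=0.3):
        """Each unit's output = its input minus `weight` times each neighbour's input."""
        out = []
        for i, x in enumerate(signal):
            left = signal[i - 1] if i > 0 else x
            right = signal[i + 1] if i < len(signal) - 1 else x
            out.append(x - weight * (left + right))
        return out

    # A step from a dimmer region to a brighter one:
    luminance = [1, 1, 1, 1, 5, 5, 5, 5]
    print(inhibit(luminance))
    # Inside each region the outputs stay flat, but just before the step the
    # output dips and just after it overshoots -- the edge is enhanced.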

    ReplyDelete
  26. Hi Dr. Harnad,

    This comment is intended to clarify the question I had in class last Monday about the systems response to the Chinese Room Argument.

    Basically I completely disagree with the systems argument, but for a reason other than the one you provided. I will explain:

    It is my understanding of the systems argument that if Searle were the system, it says that cognition is the entire CRA situation, so Searle contains both himself (as in the CRA - where he executes the "program" of if X symbol then Y), as well as whatever method by which he is "given" the sets of symbols that represent the 'script', 'story' and 'questions'. In class I conflated this by saying inputs; what I meant was that the systems argument says that (from Searle) "the people who are giving [him] all of the symbols" are also within him, so he would know and therefore understand Chinese, not by translation through English, but by knowing what the symbols represent. He would presumably know the 'meaning' of the words in each batch, just as "the people" he refers to that provide the symbols would (otherwise they would not be able to provide these symbols).

    My understanding is that the fact that Searle speaks English is irrelevant to the Chinese room argument. He simply uses English in the example because the 'program' is adapted to be read by the specific hardware that is Searle - as such it is in English instead of say, HTML. I think the use of English as the language of the program is the basis of confusion surrounding the systems argument - because the reader can't separate Searle's existing ability to understand English from his ability to execute the rules he happens to be given in English (as computer hardware would execute a software program given in a viable language).

    The reason the systems argument makes no sense, to me, is not the split personality argument (I will address this specifically later), but that even if Searle contained the ability to understand and execute 'if X then Y' manipulation rules, as well as knew all the possible symbols he was given and the appropriate corresponding outputs by heart, this still does not explain where the actual meaning of the Chinese symbols is derived from.

    In the original Chinese room argument, the meaning of the symbols is understood by "the people who are giving [him] all of the symbols" - not existent within the symbols themselves - thus why these people can make the rules and batches of symbols to begin with. If they did not know the meaning of the symbols, then their designation as well as the rules would be completely arbitrary, and the entire CRA situation collapses.

    So, even if these "people" were inside Searle (rather than just the symbol sets), then it still doesn't erase that the meaning for the words has to be derived from somewhere, and it can't be that the CRA system also exists within these "people" within Searle, or we get a different grounding problem. However, the existence of these "people" does not imply multiple personality. It implies that these "people" act as correlates for dynamic systems, which understand the meaning of the Chinese symbols because they have a sensory connection to the outside world.

    ReplyDelete
    Replies
    1. [continued]
      So the systems argument either ignores the "people" providing the symbols batches and rules when considering the system - which Searle clearly points out creates no capacity to understand - OR it includes these "people", creating either a new grounding problem of infinite "people within people", or acknowledges that the "people" have to have sensory capacity, and as such, are dynamic. Since the latter is the only sound conclusion, the systems argument discredits computationalism itself by saying that the only way the CRA could work is if the entire system were in one body, because in saying this, it implies the existence of dynamic systems within Searle.

      The "split personality" argument against the systems argument never comes into play here. It is mentioned in his paper only as an extension of Searle's comparison, because he uses himself as the hardware both times in the English and Chinese examples to execute the English program (as such, if both of these examples could still reliably be applied to the same person - Searle - he would have to have a split personality). But we have to remember that the "understanding" in the English example is in terms of understanding the meaning of the symbols, not interpretation of the rules system. The rules system (i.e. the program) being in English blurs the distinction between these two. My understanding of the argument is that even if the rules were by some visual stimuli, say by using different colored lights to indicate output rather than English rules, then it is excruciatingly obvious that even with Searle's ability to see colours he still won't understand the Chinese inputs/outputs, but he would understand the English outputs, even if he still relied on the colours to produce the output. This is because in the English example he is attributed with the additional capacity of speaking English, that is, the capacity beyond 'seeing colour' or 'reading and interpreting the program' that is contributing to understanding. This capacity is still within Searle in the Chinese Room, because Searle is still Searle, but it is irrelevant to the example. These are separate examples, and are conflated because of the fact that the "program" in the Chinese Room so happens to be written in a language that Searle has previously grounded through sensory inputs. It is through the split personality argument that the argument for translation capacity comes into play (i.e. Searle would understand Chinese because he would somehow translate from English), but that once again encounters the problem of why does he understand English, and we enter the same grounding problem that I mention above except in English instead of Chinese.

      Delete
    2. Hi Esther, I think you understand the Chinese Room Argument almost 100% but you are adding even more into it than is necessary. (BTW, it would have been better to post this in 3a even though it's late, because that's where the CRA discussion is going on, even if there are afterthoughts in week 11!)

      You are right that people are thrown off by the fact that Searle speaks English, and because they think translation is involved (it isn't). But actually the code (if a T2 algorithm were possible -- which "Stevan Says" it isn't, because of the symbol grounding problem) of that algorithm would only be in "English" in the same sense that all programming languages (and all of maths) are in English (and French and Chinese): All formal languages are just subsets of natural language. So "2 + 2 = 4" is in English (French, Chinese). The only difference is the shape of the (arbitrary-shaped) symbols that are being used, plus maybe a bit of local grammar, sometimes. Your color analogy (though it's hard to imagine) is intuitively correct. The T2 programme (if it had been possible at all) could have been written in a color-code rather than in Java or C++ or LISP or Algol. And then that would have been part of English too. The only symbol shapes that are necessarily what they are would be the Chinese symbols themselves, which the algorithm must manipulate according to the rules for getting from the input message to the output message. But those symbols are not themselves English. They are just shapes being manipulated using rules that are based on their (meaningless) shapes, formulated in any formal language you like (they're all English, just like the quadratic root recipe: x = (-b +/- SQRT(b**2 - 4ac))/2a). (The understanding of maths symbols is English understanding, but the maths does not include the understanding, just the symbol manipulation rules -- because, again, computationalism is wrong, and the grounding of the meaning of even just "2" is sensorimotor, not symbolic.)

      Now that we have the notation system out of the way, where do all your "people" come from? The only things we need are incoming strings of Chinese symbols plus a T2 algorithm for turning them into outgoing Chinese symbol strings. The T2-passing algorithm (that "Stevan Says" is impossible because of the SGP, but never mind) can be imagined to have been written by one person, or many persons, or as having been typed by chance by millions of chimpanzees playing with keyboards, or as having grown on a tree, or as having been evolved by trial and error and survival and reproduction as coded in our DNA. It is a mistake to think of an algorithm as including a programmer (though it almost always does, when we are talking about modelling). There is nothing in Turing's definition of computation (the Turing machine) that mentions a programmer. A Turing machine is a machine, mechanically (and mindlessly) executing computations (symbol-manipulation rules) regardless of how they got there.

      And it's not even true that the T2 programmer would necessarily have to understand Chinese. Maybe he got lucky and discovered the T2 Chinese symbol-manipulation rules without having had to understand the symbols. What's true is that Chinese readers would understand the incoming and outgoing Chinese symbols. But that doesn't mean that they are inside the machine. Their understanding -- and Searle's lack of understanding -- is just a symptom of the SGP. And there is no "system" inside (or outside) Searle that has that missing understanding of Chinese. It just isn't there. (The fact that it's there for English is irrelevant. When it's another computer rather than Searle that is passing the Chinese T2 it doesn't understand English either. And there are no other "people" in there either.)

      Delete
  27. I’m a bit confused by what exactly supervised and unsupervised learning are. The way I (think) I understood it is that unsupervised learning is when a category is acquired via exposure, where the main component is shape. But supervised learning is when the exposure/pure perception isn’t enough, so there’s a need for some kind of interaction. But it seems like there’s some overlap between those two concepts, situations where it isn’t clear whether the categorization is acquired through unsupervised or supervised learning. For example visual cues can help discriminate between different objects, but there could also be learned properties of those objects that allow one to discriminate between them. That is, supervised vs unsupervised don’t seem mutually exclusive…
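    A toy sketch of the distinction as I read it (my own, not one of the paper's models; the sizes, labels, starting boundary and margin are all invented): unsupervised learning finds a boundary from the shape of the inputs alone, through repeated exposure, while supervised learning only moves its boundary when corrective feedback says a trial came out wrong. And the two don't look mutually exclusive here either, since both could be run on the same inputs:

    sizes = [1.1, 0.9, 1.3, 4.8, 5.2, 5.0]              # 1-D stand-ins for "input shapes"

    # Unsupervised: repeated exposure alone reveals the natural gap in the inputs.
    ordered = sorted(sizes)
    gaps = [(b - a, (a + b) / 2) for a, b in zip(ordered, ordered[1:])]
    print("unsupervised boundary:", max(gaps)[1])       # midpoint of the widest gap (~3.05)

    # Supervised: trial, error and correction nudge an initially wrong boundary.
    labelled = [(1.1, "small"), (4.8, "big"), (1.3, "small"), (5.2, "big")]
    boundary = 0.5                                      # deliberately bad starting guess
    for size, label in labelled:
        guess = "big" if size > boundary else "small"
        if guess != label:                              # the error-correcting feedback
            boundary = size + 0.1 if label == "small" else size - 0.1
    print("supervised boundary:", boundary)             # ends up between the two kinds (~1.4)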

    ReplyDelete
    Replies
    1. As an add-on to the lecture (these are more observational, to make sure I understood correctly):
      About the experiment where person A walks up to person B in the street, then they end up briefly separated by a cardboard sign and person A switches out for person C without person B noticing. Tying this to the Hillary observation, where we can’t tell the difference between Hillary from today, last week, or next week. I think I remember someone pointing out that those were two completely different situations/examples but I’m not quite sure I agree with that assessment. In the case of the cardboard sign, people are paying attention to the general features of the stranger (dark hair tan skin) and not necessarily registering minute details; so when someone with similar features is substituted, the person retains the general image of the stranger and doesn’t notice the switch. In the Hillary case I feel like it is the same thing. Because of regular exposure, perhaps more details of Hillary’s features are registered than for the stranger, but in the end not every single detail is remembered, so with the switch to past or future Hillary, the overall impression/details are still there, the ones that were paid attention to. But just like the stranger, more minute and small details weren’t necessarily remembered which is why no difference is noticed. Therefore the scale of change between two strangers and two Hillarys might not be the same, but the level of attention paid to detail isn’t either, which seems to even out to a same explanation for the phenomenon.
      Another observation I wanted to elaborate on was us students recognizing professor Harnad at an airport in 5 years versus him recognizing us. I believe it would be much easier for us to recognize him. He is teaching a class of approx 40 students, only once a week, for less than 4 months, and every (or every other) semester he has a new set of students in the same situation and the (somewhat) same content. Those are a lot of faces, and just as in a few years he might not remember which student he taught which year, it might be quite difficult for him to remember individual faces. Whereas for us students, we only have one face to remember, only took the course once (therefore less risk of confusion), and in terms of teaching it seems plausible to remember some of what a specific person taught us. In a sense we’ve had more exposure to professor Harnad than he has had to us individually so it makes sense that we might remember him better than he might remember us.

      Delete
  28. "The unsupervised models are generally designed on the assumption that the input “affordances” are already quite salient, so that the right categorization mechanism will be able to pick them up on the basis of the shape of the input from repeated exposure and internal analysis alone, with no need of any external error correcting feedback” (Harnad, 7).
    Overall, the explanation of categorization is very detailed and manages to refute Fodor’s idea of innate categories for everything very well. I am still unclear as to what an example of unsupervised learning is, since no clear example is given other than a hypothetical world where only boomerangs and Fodors exist. To me, it seems every category on some level requires some trial-and-error feedback or information communicated via language, because so many categories seem context dependent, and there are so many objects to be categorized in the world, that a simple mechanism in the form of unsupervised learning could not possibly complete it.
    Another question I have is how much this really advances our understanding in cognitive science. Reinforcement learning through the lens of behaviourism discusses a primitive form of categorization but doesn’t talk about the internal mechanism. How does an explanation that an unsupervised model uses salient affordances and manages categorization through repetition and internal analysis go any deeper than a superficial explanation like that provided by a behaviourist? What are the learning mechanisms that allow someone to sample the structure and correlations to establish unsupervised learning? I understand these questions are what Cognitive Science seeks to answer, and we are using the unsupervised learning model as a guiding point for research, but my reaction so far is that the unsupervised model doesn’t provide much over the behaviouristic explanation. The supervised model (categorizing according to trial and error and feedback) and the use of language, through grounded words, are much more interesting to me since they are less ambiguous than words like “internal analysis.”

    ReplyDelete
  29. This is something I was thinking about in class today:

    For the mountain and valley example the name labels of “mountain” and “valley” themselves become irrelevant in the categorization process… you’d still have the capacity to recognize the difference between the two things and then form categories for the kinds of things they are even if you’re told to call them thing 1 and thing 2 or even that they don’t have specific names. So shouldn’t we make a point to separate our ability to label something with a name (like mountain) because we’ve learned that the label goes with the thing that looks like that and our ability to recognize that this thing that is tall and comes to a peak (insert all other invariant features here) is a unique category of its own? Basically, I just want to clarify that the labeling or naming of something is secondary to our categorization of it as a kind of thing… because labeling/naming is just for convenience if you really think about it. Is this true to say though?

    ReplyDelete
    Replies
    1. Learned Pop-Out

      Some categories Nature wears on her sleeve: mountains and valleys. You don't need to do any feature-learning. Mountains and valleys just "pop out." Then, giving them each their name is trivial, just paired associate learning.

      Categorization becomes interesting when the categories are not obvious (like in mushroom picking, or detecting cancer cells in microscope slides). (I'm not comfortable using chick-sexing any more because it feels as if I am making fun of something cruel and tragic.)

      Yes, when categories are hard to recognize, once you have learned to recognize them, adding a label is secondary. But while you are learning a category by trial and error and corrective feedback (feedback/reinforcement learning) rather than just by passive exposure, making the response -- i.e. "the thing you do with the kind of thing you need to do it with" -- is crucial, because that's what gets the corrective feedback, whether it's eating the edible mushroom or calling it "edible." That's what makes it trial, error and correction: You have to try. And trying is something you do. Then you are corrected on whether you did it with the right kind of thing or the wrong kind of thing. The "supervision" need not come from a teacher. It can come from a stomach-ache...

      (Sometimes the pop-out of categories is because there is a gap between them in nature, like mountains and valleys. Sometimes it's because you have inborn feature-detectors that create the (fuzzy) gap and the pop-out, as with color categories. But the most interesting case is where the feature-detection has to be learned. The function of learned categorical perception -- which is not the same as categorization! -- is to make the category pop out.)

      Delete
    2. I want to make sure I understand the difference between categorization and learned categorical perception, as I had a bit of difficulty teasing the two apart in my mind when we discussed this in class this week. Here’s my kid sib explanation:

      Categorization
      Sorting anything in the world to be able to do the right thing with the right kind of thing. Based on ignoring irrelevant features and paying attention to relevant ones, like colour, size and shape, one can categorize things as concrete as chairs and tables and as abstract as ideas like beauty.

      Learned Categorical Perception
      A mental separation of categories based on categories learned through experiences of reward and punishment gained through trial and error. This ‘mental separation’ is not categorization, but instead like a lens through which an individual already possessing categories perceives new things.

      Say there are two learned categories, A and B, which differ along one dimension, say wavelength (like colour). Before learning the categories, a difference in 10nm between two waves would be perceived the same between two waves in A, two waves in B, and a wave in A and a wave in B. However, after learning the categories A and B through trial and error, a 10nm difference between two waves in A seems smaller than a 10nm difference between a wave in A and a wave in B. So in fact, learned categorical perception is not categorization, but instead helps with categorization by making the relevant features for categorization easier to deal with.

      Delete
    3. Assuming my kid sibling knows a bit about physics...

      Delete
  30. With regards to today's class, I have a few more elaborations on the matter of the boundary between discreteness and continuity (and how discreteness seems to be a necessary qualification of categorization).

    Using the example I asked in class (whether solving mathematical problems is categorical), we concurred that, despite the content of mathematics often involving infinite continua, the actual act of doing mathematics involves the application of discrete algorithms, and is thus categorical.

    So, despite the content being continuous, the act itself is discrete.

    However, bringing back up the topic of dance as continuous rather than discrete, I'm finding myself troubled once again. Though the movements in themselves are continuous, the dance is composed of discrete steps made up of discrete actions that progress from point a to point b in a discrete number of arbitrary counts of time that the dancer is aware of. The entirety of the dance is continuous, yet I feel that the performer may still have sections of movements arbitrarily marked or categorized (in terms of grouping movements into beats of time, or musical cues, etc), however implicit they may be.

    In a sense, this seems (to me) to be similar to taking the continuum of real numbers and marking off 1, 2, 3, etc., or perhaps calculating the limit of a function with an asymptote where, as x approaches infinity, y approaches 1 - we provide a discrete value to a continuous function.

    I certainly agree that the act here differs, in that the act of movement is indeed continuous and the act of calculation is discrete, however, I don't think that we should deny the categoricity that accompanies something like dance. I'm not saying that it is in its entirety categorical, since it does have this obvious element of continuity WITHIN the act, but that there seems to still be an aspect of categorization.

    Originally what I felt like concluding after deciding that mathematics is categorical is that since what mathematicians do is considered computation, and since we have decided through class that cognition is both dynamical and computational, we can say that not all of cognition is categorization.

    After considering what I have written above, I am more unsure as to what sort of conclusion to draw however.

    If the master categorizer of mushrooms implicitly discerns two nearly identical mushrooms, one deadly and one edible, without realizing the 'algorithm' for feature detection that they had been using (this is drawing from the example Professor Harnad used, of a chicken sexer becoming a brown belt in ten minutes who, while slower than the masters, can correctly discern the sexes; yet when the rule is shown to one of the masters, the master does not even realize they implicitly use such a rule), a dancer may go through the motions - which, granted, are purely continuous - without realizing the implicit discreteness of the motions and timing.

    This kind of bothers me because intuitively I want to say that not all cognition is categorical, but I seem to conclude (from what I mentioned above) that all cognition has some aspect of categoricity, though it is not entirely categorical.

    Any thoughts?

    ReplyDelete
    Replies
    1. Hi Kaitlin,
      A few things!
      I like your comparison of the mathematical computation to a dance routine because in both cases I think there is an element of applying a formula which is of course categorical... in the case of a dance, we memorize the steps and are able to perform them fluidly and in the case of the mathematical formula we are able to manipulate numbers step by step as well.
      However, I also am bothered by the idea that there seems to be a categorical aspect to all cognition but that's the conclusion I keep coming to. I was also unsure in class what exactly the argument was for dance to not be categorical besides the "artistic" argument which Prof Harnad said was off track. Can someone clarify this?

      I was also thinking about the idea of driving. Clearly, there are rules, there is trial and error in learning to drive, and there is a learned physical and mental aspect to driving which is specific to driving successfully and safely. Yet, most people who have driven a long time would agree that there are times when we drive for a while only to realize that 20 minutes have gone by and we can't recall how we got from point a to point b in detail. It is obvious to say that we were applying our learned driving skills in order to drive the car from point A to point B, and that in this we were categorizing along the way, yet we seem to be on autopilot and not purposefully considering each second. On the other hand, it is also seemingly absurd to say that driving is something we do without being conscious of it at all times because then we wouldn't be able to drive the car successfully. Maybe it takes some sort of ability to only focus on the things that are absolutely necessary in order to get the job done? I'm not sure what to make of the driving situation but it seems to me to be a problem for continuous vs. discrete and therefore categorical or not.

      Delete
      ** continuous vs. discrete distinction

      Delete
  31. In the “Abstraction from Amnesia” section, in which Harnad describes a novel about a man named Funes who does not forget anything, the explanation that Funes should not be able to abstract – thereby being unable to speak – was unsatisfying to me. While I understand that for Funes every moment is a unique one, I don’t see how being able to remember every detail about a scene would result in an inability to abstract from the scene. In fact, I would think that he would have an enhanced ability to grasp abstractions, since he can pick out every detail from memory. For example, if he looked at a dog and could remember every feature of the dog, he would be able to tell that dog apart from other dogs based on his memory. Furthermore, if he had the Boolean coordinates of what constitutes a dog, maybe from a teacher, would he not be able to categorize a dog as a dog?

    On another note, I had a question from today’s class about certain dynamic activity as non-categorical cognition. I understood the argument to be that the cognizer isn’t consciously categorizing each movement, but in competitive dancing, for example, isn’t the dancer always categorizing their own movement as either right or wrong? Perhaps the fluidity of the motion comes from the skill, but no dancer is infallible and they should monitor their movements. If a dancer is performing a dance routine – let’s say one that is commonly known in the dance community and therefore right and wrong movements do exist – then perhaps the dancer is not consciously categorizing each movement as arabesque, plié, or pirouette, but they must carry a sense of right or wrong movements throughout the routine.

    ReplyDelete
  32. I was very interested in section eight, which discusses instrumental (operant, reinforcement) learning. In particular, how an animal is able to be conditioned to categorize between different colours, or, in the case of the example, between black and white. However, it seems like there are several flaws with this. The paper itself points out that perhaps the animal had the inputs segregated in advance. In addition, perhaps this example is most evident because it uses two polar opposite shades, white and black. Would the experiment work as well with colours that were closer together on the spectrum, say orange and red? If the pigeon learned the entire gradient between black and white, how would it react right in the middle? It seems that there are many flaws/questions to be answered with this theory. The biggest being: is it really what the animal is being conditioned to do, or is it simply based on pre-determined inputs, influenced by other factors?

    ReplyDelete
    Replies
    1. Hi Rachel,

      I am interested in the example you brought up of orange and red. In section 9, it states: "But if the animal had color vision, and we used blue and green as our inputs, the pattern would be different. There would still be maximal confusion at the blue-green midpoint, but on either side of that boundary the correct choice of key and the amount of pressing would increase much more abruptly -- one might even say "categorically" -- than with shades of gray."

      In the retina, there are three different cones (red, blue and green) which process colour. In Rachel’s example, the two colours red and orange are encoded by the same type of cone. But the example in section 9 uses blue and green, which use different cones. How does this affect the results? Would the pigeon be equally as good working with colours that use the same cone (red and orange) as it is with black and white, since black, white and grey use rods? Or would the results be entirely different?

      Delete
  33. I really appreciated how the term "dynamical systems" was defined here. I was often confused in class about how the term was being used. But I did get a little lost when he started to talk about differentiality and systematicity and adaptivity...

    "Everything in nature is a dynamical system, of course, but some things are not only dynamical systems, and categorization refers to a special kind of dynamical system. Sand also interacts "differentially" with wind: Blow it this way and it goes this way; blow it that way and it goes that way. But that is neither the right kind of systematicity nor the right kind of differentiality. It also isn't the right kind of adaptivity (though again, categorization theory probably has a lot to learn from ordinary dynamical interactions too, even though they do not count as categorization)"

    However I think this became clearer later when he says:

    "Categorization is accordingly not about exactly the same output occurring whenever there is exactly the same input. Categories are kinds, and categorization occurs when the same output occurs with the same kind of input, rather than the exact same input. And a different output occurs with a different kind of input. So that's where the "differential" comes from."

    So I believe what's being said here is that categorization is a special kind of dynamical system which is also differential... As in there is not a strict input/output but rather KINDS of inputs and KINDS of outputs, and so the output will differ if the kind of input differs. Again I'm not totally sure if I've interpreted this correctly. But even if I've misunderstood what was meant by differential, the term 'dynamical system' has been made very clear to me now.
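    To check my reading of the "differential" part, here is a tiny sketch (my own; the kinds and outputs are invented): a categorizer gives the same output for the same kind of input, even for exact inputs it has never seen before, whereas a plain lookup table only responds to the exact inputs it has stored:

    def kind_of(x):
        """Pick out the kind of the input (here, just its sign)."""
        return "positive" if x >= 0 else "negative"

    def categorizer(x):
        """Same kind in -> same thing done, even for inputs never seen before."""
        return {"positive": "approach", "negative": "avoid"}[kind_of(x)]

    print(categorizer(2.0), categorizer(3.7))    # different inputs, same kind, same output
    print(categorizer(-1.2))                     # different kind, different output

    # Contrast with an exact lookup table, which only ever responds to inputs it
    # has already stored and treats every new input as unrelated to the rest.
    exact_table = {2.0: "approach"}
    print(exact_table.get(3.7, "no response"))   # same kind of input, but no response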

    ReplyDelete
  34. I have a question/comment about the distinction between categorization versus dynamical processes (such as dancing, as discussed in class Monday) that are not categorization.
    Based on my understanding of the definitions in this article, categorization is knowing to do the right things with the right kinds of things, and (from class) dynamical processes cannot be categorization because they simply are not a category, they are dynamic.
    My question is: are outputs themselves, independent of any computational or dynamic process, considered to be cognition? I would say no, and as such, would say that the reason dancing is not categorization is that the independent output of dancing cannot be considered as cognition.
    What I mean by that, is that if I were watching dancing, as a non-dancer, I would likely see the entire activity of dancing as part of the category ‘physical activities’, or maybe also as a category of ‘ballet’, or ‘free-lance’, etc. But I would see it in this way because watching the dance would be an input. When I myself am actually dancing, it is an output. I am certainly not breaking down what I am doing into some category or another, as it is a continuous process that for me, someone who knows nothing about dance, is very dynamic and non-structured. But it is still an output, that when considered as an input, is categorized as dance. Someone who knows dance as a very structured activity, and breaks down every subcomponent of what I am doing into a move categorized as one type of dance or another, would possibly claim to be perceiving categories of my dance that to me do not exist.
    It seems like in asking if cognition is categorization, we have to consider again what cognition is. While inputs can be categorized (i.e. watching someone dance), an output will always be dynamic and thus not categorization, but rather the response to the categorization of some input. I would argue that even dance is a response to some input, whether the input is highly abstract or a feeling in itself. I absolutely do not agree that feeling is only an output and not an input – I think that it exists because it is the output or byproduct of some other input, but that it can act as an input for a subsequent response in the form of cognition, as a chain reaction.
    To summarize what I am trying to say, I think that in saying dance is not categorization because it is a dynamic process, we are forgetting to consider that outputs are not categorized until they are in the form of an input, in which case we process the input into whatever category we can associate it to and ground it with (i.e. for me, it may be just dancing, while to an expert it may be ‘free-style ballet’). In this way, we can restrict the definition of cognition to the processing of various inputs to produce an output. While ‘processing’ is not a kid-sibly word, the point is that it is the inputs that are relevant to cognition. We categorize dynamic processes to whatever category we can once we consider them as an input; if it is just something we are doing (i.e. an output), then is it really relevant to the question of is cognition categorization? We need to consider what inputs would have had to occur to produce this output, and whether or not those are categorizable. I think, when looked at in this way, where cognition is some form of input manipulation to produce an output (either computationally or dynamically), and the outputs and byproducts (i.e. feelings) are irrelevant until considered as an input, we can say that yes, all cognition is categorization. I am obviously not an expert in this, so if anyone thinks I am following the wrong train of thought, please explain to me why.

    ReplyDelete
  35. Paraphrasing section 2: a sensorimotor system’s abilities are determined by what its motor actions can do with the sensory input provided to it.
    So even if you have sonar sensors, if you do not have the appropriate translation system to be able to use them, sonar is still not an ability of the system. This could be related to sections 17 (Abstraction and Amnesia) and 18 (Invariance and Recurrence). In those cases the subjects had the extra ability of (nearly) infinite rote memory, but without an adequate system for appropriately handling the additional information. This resulted not in super-humans, but in largely disabled faculties, in which grasping abstract concepts or generalizations became extremely difficult. This can be related back to identifying whether machines can think or understand: if perceptual processes are added to a system, but the appropriate deciphering/translating system is not put in place, the new add-ons cannot actually be said to improve the system.
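
    A toy sketch of that point, with hypothetical names and made-up numbers (mine, not the article's): the raw sonar echoes afford nothing to the motor side unless some translation step turns them into something the system can act on.

        # Hypothetical sketch: a sensor without a translation step adds no ability.

        def sonar_echoes():
            """Raw sensory input: round-trip echo times in milliseconds (made-up data)."""
            return [12.5, 13.1, 40.2]

        def echo_to_distance(echo_ms, speed_m_per_ms=0.343):
            """The missing 'translation' step: convert an echo time into a distance."""
            return echo_ms * speed_m_per_ms / 2   # halve for the round trip

        def act(distance_m):
            """Motor side: do the right thing with the right kind of input."""
            return "turn away" if distance_m < 3.0 else "keep going"

        # Without echo_to_distance(), the echoes are just numbers the system cannot
        # use; with it, the sensor actually extends what the system can do.
        nearest = min(echo_to_distance(e) for e in sonar_echoes())
        print(act(nearest))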

    In separating dynamical systems, like the wind and the sand, from categorization, the article explains that both are systems changing in time (i.e. dynamic), but that with categorization the output for a given input depends on the kind of input, rather than being the exact same output regardless. The article later notes that error exists in categorization, but not in the system of the wind interacting with the sand. Can this be further extended to presupposing intentionality in categorization, separating it from other dynamical systems in nature? It supposes that there is a right and a wrong way to categorize something, whereas there is no right or wrong way for sand to blow in the wind.
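
    Here is a rough contrast, in invented code (not from the article), of the two kinds of system this paragraph separates: the sand's state just keeps being updated by the same rule, with no sense in which the outcome is right or wrong, whereas a categorizer's output depends on the kind of input and can be mistaken (and corrected by the consequences).

        # Illustrative contrast (all names and numbers invented).

        def sand_update(dune_height, wind_speed):
            """Dynamical system: the same rule is applied to whatever comes in.
            There is no 'correct' dune height, so there is no notion of error."""
            return dune_height + 0.1 * wind_speed

        def categorize(mushroom_features):
            """Categorizer: the output depends on the *kind* of input, and it can
            be wrong -- eating the wrong kind has consequences that correct it."""
            return "toxic" if "white_gills" in mushroom_features else "edible"

        print(sand_update(5.0, 20.0))                 # just the next state
        print(categorize({"white_gills", "ring"}))    # a claim that can be mistaken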

    In identifying innate categories, it might be useful to move away from the classic example of language, simply because it has been used so frequently; a more general approach may put the point across better. Perhaps a broader way to contrast innate versus learned categorization would be to distinguish something universal (like language, walking, affection) from something largely cultural (aggression, perception of colours).

    ReplyDelete
  36. “…the “vanishing intersections” problem: If you go back to the dictionary again, pick some content words, and then look for the “invariance” shared by all the sensory shadows of just about any of the things designated by those words, you will find there is none: their “intersection” is empty.”

    I am a little confused with regard to the “vanishing intersections” problem. Is Fodor suggesting that there is no commonality among all the sensory shadows associated with a given word? For example, if I picked the word “chair”, would all the sensory shadows associated with it (furniture, sit, seat, legs, etc.) have no general commonality? And since the only commonality is the word they describe, that word must be an innate category? I find this innate-category argument unconvincing and have to agree with the passage that follows:

    “To say that these categories are “innate” in a Cartesian, Platonic, or cosmogonic sense rather than just a Darwinian sense is simply to say that they are an unexplained, unexplainable mystery.”

    The word being described is not made an innate category merely by the fact that there is no reducible relationship among its designated sensory shadows. The category is rather learned through some mechanism that may have been affected by evolution, and the sensory shadows it designates allow one to communicate the nature of the category.
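
    To see what an empty “intersection” might look like concretely, here is a made-up toy example (my own feature names, standing in for sensory shadows): intersect the feature sets of a few things we all call chairs and nothing survives, yet we still categorize all of them correctly, so whatever invariance the categorizer uses is not a single shared surface feature.

        # Made-up feature sets standing in for the "sensory shadows" of some chairs.
        from functools import reduce

        shadows = [
            {"four_legs", "wooden", "backrest"},        # kitchen chair
            {"one_pedestal", "padded", "backrest"},     # office chair
            {"no_legs", "fabric", "floor_level"},       # beanbag chair
        ]

        # The literal intersection of their features:
        common = reduce(set.intersection, shadows)
        print(common)   # set() -- empty: the "vanishing intersection"

        # Yet a learned categorizer can still get them all right, e.g. via an
        # affordance ("supports sitting") that is not a simple shared feature
        # visible in any one shadow.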

    ReplyDelete
  37. Understanding the Symbol-Grounding Problem seems to be essential for understanding a lot of the arguments presented here. The idea is that words themselves bear no intrinsic connection to what they refer to. The letters within the word ‘Jim’ do not themselves possess the qualities of Jim (the letters do not need to have ears and work at Dunder-Mifflin). So, we moved on to discuss the role of sensorimotor capabilities, which are part of cognition. It is the sensorimotor experience of the referent that grounds its meaning. Experiencing something hot on my skin grounds the word hot. I know what it means because I felt it. And now that I know what the word refers to, I can use the word. I can place that word among other words belonging to a proposition – we have language.

    Now what I find especially interesting about this article is how Stevan says we can use language to acquire new categories. This is not to say that language alone can ground words to their referents, but that language can be used to learn new categories out of ones that are already grounded. In other words, language can be used to learn new things, but we need some pre-existing understanding of what we’re talking about in order to use language at all. The process of learning to categorize things in the world can take different forms. One way to learn categories involves distinctions that cannot be vocalized: the differences between male and female chicks are subtle, not easily distinguished, and – most importantly – not necessarily describable in words. One has to sit there performing trial and error for years to learn these differences properly, to the point that males and females can be successfully distinguished. This learning isn’t the kind where someone can just tell you something simple that relies on information you already have. Words are not sufficient to explain the how of that kind of categorizing, so learning it requires more than just talking about it. But not all categories have to be learned without words. It does seem possible to learn new categories through the communication of words, provided the words communicated name categories that I already have. It’s as if the trial-and-error grounding process has to do with learning the language, and the categorization that comes from communicating words has to do with using the language. Stevan seems to be saying that we can use language to learn language.
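
    A minimal sketch of the two routes as I understand them (mine, not the article’s code; the detectors below are just stand-ins for categories grounded by direct trial and error): a new category can then be acquired purely from a verbal definition that combines them, which only works because the words in the definition are already grounded.

        # Sketch only: grounded detectors learned by trial and error (stand-ins here),
        # plus a new category acquired from words naming already-grounded categories.

        def looks_like_horse(features):    # pretend this was grounded by direct experience
            return "four_legs" in features and "mane" in features

        def has_stripes(features):         # likewise grounded by direct experience
            return "stripes" in features

        # "Hearsay" route: a purely verbal definition combining grounded categories,
        # in the spirit of a zebra being a striped horse-like thing.
        def is_zebra(features):
            return looks_like_horse(features) and has_stripes(features)

        print(is_zebra({"four_legs", "mane", "stripes"}))   # True, never having seen one
        print(is_zebra({"four_legs", "mane"}))              # False -- just a horse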

    ReplyDelete
  38. I just have one question that I wanted to clarify further. If cognition indeed occurs before embodiment, how does this not affect the way we try to comprehend cognition?

    ReplyDelete
    Replies
    1. To help with clarification, I think the most simplistic/reductionist kid-sib version is: categorization means doing the RIGHT thing with the RIGHT kind of thing. Would anyone have a more kid-sib way to describe it? Furthermore, it can be distinguished on the basis of mirror neurons (they are / are not doing the same thing as me). It can be social (that’s a celebrity, that’s a student); it can be oneself (that’s me yesterday); or it can be someone else (that’s Brandon). In terms of “doing”, categorization can be calling something blue vs. not blue, or painting vs. not painting something.

      Delete
    2. I am not sure what you are asking here: Who said cognition (thinking) occurs before "embodiment" (having a body with sensorimotor interactions with the world)? Mirror neurons (whose function is not yet explained) certainly don't explain categorization in general, but their capacity (to detect when I or another make the same movement) is probably mostly innate. The rest of what you mention sounds like categories we learn.

      Delete