Saturday 2 January 2016

(5. Comment Overflow) (50+)

7 comments:

  1. This comment has been removed by the author.

  2. (I wasn't sure if I should move to the overflow comments, since we are still able to leave comments on the previous one, but since it is 50+)

    Just something that came up during the readings that I was having trouble wrapping my head around:
    “According to connectionism, cognition is not symbol manipulation but dynamic patterns of activity in a multilayered network of nodes or units with weighted positive and negative interconnections” – I understand from what we discussed in class that cognition is more than just computation, but after reading this quotation I was not so sure about the definition of connectionism: is connectionism in opposition to computationalism? Does this mean that the two are mutually exclusive, in that a system cannot embody both connectionism and computationalism? It is mentioned that artificial intelligence, taking it as an example of computationalism, is better at formal and language-like tasks, while connectionism, in contrast, is better at sensory, motor, and learning tasks. So can we say that the hybrid proposed for the symbol grounding problem is a combination of computationalism and connectionism? Or are these terms not being applied correctly in this case?
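
    (To make the quoted definition concrete, here is a minimal sketch of such a network in Python/NumPy; this is my own illustration, not from the reading, and the layer sizes, random weights, and tanh squashing are arbitrary choices. The point is only that the “interconnections” are signed weights and the “activity” is a pattern of numbers rather than symbols.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Weighted interconnections, positive (excitatory) and negative (inhibitory).
    W1 = rng.normal(size=(4, 3))   # 3 input units -> 4 hidden units
    W2 = rng.normal(size=(2, 4))   # 4 hidden units -> 2 output units

    def forward(x):
        """Activity spreads through the weighted layers; no rules, no symbols."""
        hidden = np.tanh(W1 @ x)   # each node sums its weighted inputs, squashes
        return np.tanh(W2 @ hidden)

    x = np.array([1.0, -0.5, 0.3])  # an input pattern of activations
    print(forward(x))               # cognition as a dynamic activity pattern
    ```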

    Relating this back to one of the readings: we know that to solve the symbol grounding problem, we need a robot with hybrid symbolic/sensorimotor capacities (Harnad 2000), such that “meaning is grounded in the robotic capacity to detect, identify, and act upon the things that words and sentences refer to.” Since the readings do speak of a symbolic/sensorimotor hybrid, can we equate computationalism with the symbolic component and connectionism with the sensorimotor one? Or are these terms not meant to be interchangeable?

    Furthermore, from previous commentaries I remember reading that Stevan said “there is no such thing as ‘basic level’ categories” and that the “level is arbitrary and varies with context and background,” with the hierarchy going above, below, across, and continuously; so it does not really seem to be a hierarchy at all, but more like a web of networks. From what I understand, everything is built up from a certain number of words that are grounded to start with, and these are the building blocks for forming other combinations. So how do we first obtain these “grounded” words? Since the readings and recorded lectures specify that we ground symbols via learned categorization capacity, would our first few grounded words come from categories formed through experience? And then, starting from that beginning, we have the capacity to learn new categories experientially, and as more categories accumulate, we can expand and start to learn other categories verbally?
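
    (Here is a toy sketch, in Python with hypothetical category names of my own choosing, of that bootstrapping idea: a few categories grounded directly through sensorimotor experience serve as building blocks, and a new category can then be learned purely verbally as a combination of them.)

    ```python
    # Categories grounded directly, through sensorimotor learning.
    grounded = {"horse", "stripes"}

    # A category learned verbally, as a combination of grounded ones:
    # "a zebra is a horse with stripes".
    verbal_definitions = {
        "zebra": {"horse", "stripes"},
    }

    def is_grounded(word):
        """A verbally learned word inherits grounding only if every word
        in its definition is itself grounded (directly or indirectly)."""
        parts = verbal_definitions.get(word)
        if parts is None:
            return word in grounded
        return all(is_grounded(p) for p in parts)

    print(is_grounded("zebra"))   # True: both building blocks are grounded
    ```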

  3. Harnad’s symbol grounding problem raises the question: “how can the semantic interpretation of a formal symbol system be made intrinsic to the system?” This problem is analogous to the Chinese “merry-go-round” problem, in which one tries to learn Chinese using only a Chinese/Chinese dictionary. Searle argues that algorithms are not sufficient for understanding, and Harnad claims, in response, that if Searle is right, then you can’t produce the Chinese Room/algorithm. In this paper, a symbol system is defined (reconstructed) from a computational foundation.

    Here we make the distinction between explicit and implicit rules. From Wittgenstein, explicit rules are those that are “followed,” while implicit rules are those in accordance with which a computer (digital or not) merely behaves. Also from Wittgenstein we have “forms of life”: meanings that aren’t represented symbolically.

    We compare two models, computationalism and connectionism. Both hold that cognition is all in the head, but they differ in that computationalism takes mental states to be symbolic representations and cognition to be the rule-based transformation of those representations, whereas connectionism is anti-representational. By anti-representational I mean that symbols are grounded from the bottom up: there are non-symbolic representations first, and then higher-order symbolic representations and systems.

    Computational models follow explicit rules, which are formed from explicit generalizations and representations. Unlike computational models, connectionist models (introduced in this paper) act “as though” they know the rules, working by inference rather than by explicit rules, and can develop new patterns. Connectionist systems are roughly heuristic-based, whereas computational systems are algorithm-based (although the two are not entirely distinct). Connectionist models are flexible, and learning mechanisms “come for free” (there isn’t a programmed learning mechanism/explicit rule for novel patterns).

    Sorry this isn’t a very well organized response to the reading, but my question is: what is “meaningful,” and what do we do about meanings without representations?
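
    (A minimal sketch of the “following a rule” vs. “behaving in accordance with a rule” distinction, in Python; this is my own illustration, not from the paper. The first classifier is given the rule symbolically; the second, a single trainable unit, is never given the rule, but after training it mostly behaves in accordance with it.)

    ```python
    import numpy as np

    # "Computational" style: the rule is stated explicitly and followed.
    def explicit_rule(x):
        return 1 if x.sum() > 0 else 0

    # "Connectionist" style: one unit with adjustable weights (a perceptron).
    # The rule above is never written anywhere in it.
    rng = np.random.default_rng(1)
    w = rng.normal(size=3)
    b = 0.0

    def unit(x):
        return 1 if w @ x + b > 0 else 0

    # Train on examples labeled by the rule; only the weights change.
    for _ in range(200):
        x = rng.normal(size=3)
        error = explicit_rule(x) - unit(x)
        w += 0.1 * error * x        # perceptron learning step
        b += 0.1 * error

    test = rng.normal(size=3)
    print(explicit_rule(test), unit(test))   # typically agree after training
    ```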

  4. I understand that symbol grounding can only come in the form of sensorimotor interaction with the objects being grounded. My question is whether it is truly necessary. I am wondering whether, if you were to program categories into a computer, with all of their members and all of the associated actions, that would suffice. I know that it is not the same as having real-world referents for words. Maybe I’m thinking somewhat along the lines of implicit definitions in the axioms of Euclidean geometry: rather than tying a “point” down to a specific idea, what if we defined “line” and “point” to refer to each other circularly? I wonder if it would be possible to build a dictionary web so strongly connected that it could give birth to some implicit “understanding” of vocabulary. I bet this computer would make some really quirky word choices… But then it could be trained to speak normally, perhaps?
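
    (Here is a small sketch of that circular “dictionary web,” with a hypothetical four-word mini-dictionary of my own; it illustrates why Harnad calls this a merry-go-round: chasing definitions always circles back to a word already visited, never to anything outside the symbol system.)

    ```python
    # Every word is defined only in terms of other words in the web.
    dictionary = {
        "point": ["line", "intersection"],
        "line": ["point", "extended"],
        "intersection": ["point", "line"],
        "extended": ["line"],
    }

    def chase(word, seen=None):
        """Follow definitions until we revisit a word; return the cycle."""
        seen = seen or []
        if word in seen:
            return seen + [word]          # we have come full circle
        for w in dictionary.get(word, []):
            cycle = chase(w, seen + [word])
            if cycle:
                return cycle
        return None

    print(chase("point"))   # ['point', 'line', 'point']: no exit from the web
    ```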

  5. Throughout the articles, I was wondering how languages are created in the first place. If language is fluid and dynamic, it makes sense that new words and meanings are created through association with other words that are already grounded, but how did language first come to be? In non-phonetic languages such as Chinese, many characters look like their referents: the characters for fire or tree, for example.

    I guess, more specifically, I am confused about the issue of intentionality (Brentano) and the feeling/function problem, because "symbol-grounding touches only its functional component". So when the first languages were "invented," how and when did they go from being solely functional to also encompassing intentionality?

  6. It must be so exciting to get published so often! I like your papers because they aren’t very long.

    I agree with most of the points in this paper. But I am curious about:

    a) Where connotations come in
    b) Referring to oneself – how do we know that “I” refers to the I?

    I think a lot of this has to do with individualizing people/things/oneself, i.e. the rules, as you say. I can individualize, let’s say, Tony Blair through several things, including: a) his features, b) knowing he’s a war-mongering person, c) knowing he’s British, etc. When I first read about Mr Blair, it was purely syntactical: I was reading without meaning. Over time, I assigned references to him. Whereas when I refer to myself, this is something almost innate? It seems to me that we individualize ourselves through self-awareness.

    (Also, a point of interest: Frege refers to these “rules” as the “sense” that picks out the reference.)

    I know all of this doesn’t answer the SG problem. But it’s still cool.

    I like this paper because I also think the intellect/mind is presumed in discussions of meaning and reference and stuff. As such, the intellect is primary.

    Replies
    1. Oh, and (sorry, I just remember stuff after I post a response) I think it is cool to relate this to Berkeley’s idealism, wherein he holds that existence is perception (by God). I disagree with him, but he does think that things have to be perceived in order to exist (and presumably, to have meaning?).
