Saturday 2 January 2016

(9a. Comment Overflow) (50+)

3 comments:

  1. “Language acquisition, then, would be learning to think, not just learning to talk.” Although I have never thought about language acquisition this way, I do understand where Whorf is coming from. The ability to recall memories begins around the time a child begins to grasp language, and improves as their language does. Likewise, when I introspect while thinking, I am talking in my head; I hear my voice reacting, processing, thinking, and that definitely contributes to the feeling of thinking. Maybe the reason I feel like I am thinking is that the language I know has a word for thinking? Still, the paper is right that we do not think only in words: there are images and so on. Pinning our thinking ability purely on language also seems to ignore the wide variety of input the brain receives, and studies on babies have shown that they think before language acquisition.

    “Children's first words are similar all over the planet.” I wonder why that is. The paper lists specific examples, mostly objects and people, but I am curious about why these particular words. Why, across cultures and languages, are these the first words a baby will grasp? Are they used more by parents and therefore heard more often by children? Or are these the first referents a baby can ground? I feel this relates to symbol grounding, as none of the examples come close to being abstract words; they are all grounded in real-life things.

    It is incredible to read how little positive evidence a child needs to form language, and, on the other hand, how interesting it is that children with severely limited positive evidence (i.e., exposed only to other children) will form a language of their own. This makes me wonder: how much positive evidence is enough? How much positive evidence allows a child to fully develop an existing language?

  2. Of the positive- and negative-evidence perspectives, I found myself drawn to unpacking the notion of negative evidence, because it is not something I had actually thought about before.
    Focusing on Pinker’s statement: “Without negative evidence, there’s no general-purpose, all powerful learning machine; a machine must in a sense “know” something about the constraints in the domain in which it is learning.”
    I found myself centering “The Boundary Problem” within language acquisition. If a child is given a sentence in which they associate the word ‘carrot’ with the word ‘eat’, but in another situation are given the association of ‘eat’ with the word ‘fast’, where does the true meaning of ‘carrot’ and ‘fast’ come from, when one is a type of food and the other is the speed at which something is done? How do infants avoid overextending the boundaries of the word ‘eat’?

  3. “Imagine that a parent says ‘The big dog ate ice cream.’ If the child already knows the words big, dog, ate, and ice cream, he or she can guess their categories and grow the first branches of a tree: In turn, nouns and verbs must belong to noun phrases and verb phrases, so the child can posit one for each of these words. And if there is a big dog around, the child can guess that the and big modify dog, and connect them properly inside the noun phrase: If the child knows that the dog just ate ice cream, he or she can also guess that ice cream and dog are arguments of the verb eat. Dog is a special kind of argument, because it is the causal agent of the action and the topic of the sentence, and hence it is likely to be the subject of the sentence, and therefore attaches to the "S." A tree for the sentence has been completed: The rules and dictionary entries can be peeled off the tree:
    S --> NP VP
    NP --> (det) (A) N
    VP --> V NP
    dog: N
    ice cream: N
    ate: V; eater = subject, thing eaten = object
    the: det
    big: A

    This hypothetical example shows how a child, if suitably equipped, could learn three rules and five words from a single sentence in context.”

    This example is extremely helpful. The rest of the article is quite long and somewhat technical, but this is the crux of the whole piece. Pinker mentions some empirical studies showing that children do, in fact, listen for phrases rather than words, but nowhere, to my knowledge, does he say where the ability to parse phrases comes from. Conceivably, the propensity to parse phrases is one of the aspects of UG that aids language acquisition, and I think that is what Pinker is getting at, but it warrants much more attention. In what contexts does prosody help in parsing phrases? In which cases does prosody belie phrase boundaries? What kinds of pre-existing knowledge of grammar, if any, are necessary to construct determiner, NP, or VP categories? Pinker pays these questions some lip service in section 8.3 but is quick to jump into detail-thick and less problematic questions of negative evidence and the like.
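
    To make the quoted walk-through concrete, here is a minimal sketch (my own illustration, not Pinker's) of how the three peeled-off rules and the five dictionary entries suffice to rebuild the tree for “The big dog ate ice cream.” The function names and the tuple-based tree representation are assumptions made only for this example.

    # A toy recursive-descent parser for the grammar peeled off the tree:
    #   S -> NP VP, NP -> (det) (A) N, VP -> V NP
    # using the five dictionary entries from the example.

    LEXICON = {
        "the": "det",
        "big": "A",
        "dog": "N",
        "ate": "V",
        "ice cream": "N",  # treated as a single lexical item, as in the article
    }

    def parse_np(tokens, i):
        # NP -> (det) (A) N: optional determiner, optional adjective, then a noun.
        children = []
        if i < len(tokens) and LEXICON.get(tokens[i]) == "det":
            children.append(("det", tokens[i])); i += 1
        if i < len(tokens) and LEXICON.get(tokens[i]) == "A":
            children.append(("A", tokens[i])); i += 1
        if i < len(tokens) and LEXICON.get(tokens[i]) == "N":
            children.append(("N", tokens[i]))
            return ("NP", children), i + 1
        raise ValueError("expected a noun phrase at position %d" % i)

    def parse_vp(tokens, i):
        # VP -> V NP: the verb followed by its object noun phrase.
        if i < len(tokens) and LEXICON.get(tokens[i]) == "V":
            obj, j = parse_np(tokens, i + 1)
            return ("VP", [("V", tokens[i]), obj]), j
        raise ValueError("expected a verb at position %d" % i)

    def parse_s(tokens):
        # S -> NP VP: subject noun phrase followed by the verb phrase.
        subj, i = parse_np(tokens, 0)
        vp, _ = parse_vp(tokens, i)
        return ("S", [subj, vp])

    print(parse_s(["the", "big", "dog", "ate", "ice cream"]))
    # ('S', [('NP', [('det', 'the'), ('A', 'big'), ('N', 'dog')]),
    #        ('VP', [('V', 'ate'), ('NP', [('N', 'ice cream')])])])

    The parser only recovers the structure; it is the dictionary entry for ‘ate’ (eater = subject, thing eaten = object) that licenses treating ‘dog’ as the subject under S and ‘ice cream’ as the object inside the VP, which is the part of the child's guess the quoted passage attributes to knowing who did the eating.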
