Saturday 2 January 2016

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.

The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525

Harnad, S. (2014) Chomsky's Universe. -- L'Univers de Chomsky. À bâbord : Revue sociale et politique 52.

61 comments:

  1. Ok, quite a few questions after this article. I’ll limit myself to one & will save the others for class.

    I was particularly interested by the part of the article on input—where Pinker discusses positive & negative evidence for language acquisition.

    If I understood this correctly, positive evidence consists of what the child hears or sees – the linguistic input in their environment. And Pinker goes on to say that this input doesn’t even need to be a full language – as long as there is some interaction with some words, it seems that language will develop (if the interaction happens within the critical period). So in other words, the positive evidence is the information that the child gets from the input about “which strings of words are grammatical sentences of the target language” – in the descriptive sense, not prescriptive (p.13).

    Pinker then talks about whether negative evidence is also needed for the child to acquire language. What I seem to understand is that negative evidence refers to the strings of words that are not grammatical sentences in the target language. So the child gets feedback & corrections from parents telling them that given utterances are ungrammatical. Pinker then cites various studies that demonstrate that this type of feedback has no effect on the child’s language acquisition and goes on to conclude that “the child must have some mental mechanisms that rule out vast numbers of “reasonable” strings of words without any outside intervention” (p.14).
    I am a little puzzled by this. I mean, no doubt, we have mental mechanisms, but I feel he’s jumping rather quickly to the conclusion that positive evidence is the golden answer, given that feedback from the environment seems to play no role.

    But where does UG come from? Where does this descriptive grammar come from? It seems that UG only has “positive evidence” – in the sense that what children hear from others in their environment follows UG rules, but what they produce also follows UG rules, at least after they have had quite some experience with a given language. But what’s interesting is that children also seem to know what doesn’t follow UG rules. For instance, when we look at quantifiers, there is an infinite number of relations between two sets that are never realized in natural language, and children never make the mistake of using those in language. So it seems that we’re endowed not only with the capacity to know what follows UG rules but also what doesn’t.
    I wonder how we come to know the difference. Are we innately endowed with this capacity or did we learn it? If we learned it, how?

    I mean, thinking about the poverty of the stimulus (which Pinker, yet again, fails to mention), I think it would show that there’s no negative evidence and that UG is innate?
    I admit, I think I’ve lost myself and I don’t know if I’m saying anything anymore.
    Language acquisition is one puzzling topic!

    Replies
    1. Nope, you got it all right:

      1. The language-learning child only hears, sees, and says language that is UG-compliant.

      2. Children also hear and make grammatical errors, and they do get corrected, and they do learn the rules -- but those errors and corrections are not UG errors and corrections; they're errors and corrections of ordinary grammar, the kind we learn in schools, through examples, trial-and-error and corrections, and sometimes even by being explicitly taught the rules.

      So the first point is that Pinker does not make the distinction between ordinary grammar and UG. The poverty of the stimulus (hearing and saying UG-compliant utterances only -- positive evidence only) applies only to UG.

      That's why Chomsky concludes that UG must be innate: The distinction between utterances that are UG-compliant and UG-noncompliant is a category distinction. To learn UG, children would have to have examples of what is in the category and what's not in the category: They would have to hear both kinds as well as to speak both kinds. In other words, they would have to make mistakes, and be corrected on their mistakes, as with every other learned category. But they never make mistakes. No UG violations. And they never get corrected. So how did they know not to make mistakes?

      That's the puzzle that is solved by concluding that UG must be inborn.

      But that still leads to the next question -- as with all other innate categories and structures: how did it get there in the first place?

      For the frog's inborn bug detector, there are plausible ordinary evolutionary hypotheses of the usual kind, as with wings and fins: genetic variation, and then selection based on whether the genetic variants make survival and reproduction more successful.

      But UG is very complex and abstract. It's still hard to imagine how it could have ended up being genetically coded in our brains: all at once? gradually? what was the adaptive advantage?

      Hard questions, not yet answered.

    2. You said in class that UG is not the same as language, and that distinction makes sense, as language is acquired and UG is innate (plus all the complexity differences and the fact that languages differ whereas UG is the 'same' for all humans). But what exactly is UG? Are the details of UG undefined, and is it generally addressed as the innate capacity to learn a language (thus the human capacity to learn a/many language(s))?

      Also, is it fair to say that when children start learning a language they learn the core first (or one of the many cores) and then, once they've reached a certain level, go to the kernel and then the extra? Or is it more sporadic, and not that important whether they start with the core or not?
      It would seem intuitive to say that they would acquire the core first, but then does that imply that the core is what can be acquired only through symbol grounding?

    3. In Chomsky's Universe you state: "What was remarkable was that speakers of any language could immediately say whether a new sentence was or was not grammatical, even though the rules that were being tested were not the rules of the ordinary grammars that they had been taught (or had learned by induction)",
      and I'm pretty sure you've illustrated many times what a UG-compliant sentence is, but if you could illustrate it again, in contrast with a sentence that draws on ordinary grammar, I think that could help clarify the nuance!

  2. “Learning and Innateness. A child growing up in Japan speaks Japanese whereas the same child brought up in California would speak English, so the environment is also crucial”.

    This one line reminded me of the documentary Twinsters. For those of you that have not seen it, it is about twins born in Korea who were separated at birth because they were adopted by different sets of parents. One twin girl grew up in California, and the other in England. Now, at the age of 19 or so, they come across each other on social media. Realizing they look identical, one reaches out to the other, wondering if it’s possible that they are twins. After a blood test and talking for a while, it turns out they are twins. They have SO much in common. It’s crazy how similar their personalities, mannerisms and interests are. However, they have very different accents and vocabularies. The one in California has an American accent. The one who grew up in England has an English accent and uses different sayings and expressions than we are used to. This example just shows how crucial the environment is in language acquisition (and socialization).

    I thought I would summarize the Learnability theory for everyone since the description is long in the paper.
    Learnability theory is defined by:
    - A class of languages (one of which is the target language to be learned – usually the one spoken in the community)
    - An environment (information that the child relies on to acquire the language)
    - A learning strategy (using information from the environment, the child tests out different hypotheses about the target language)
    - A success criterion (the hypotheses are not random, but systematically related to the target language)
    From the paper, it is obvious that two extremely important types of evidence are required for language acquisition: negative and positive. I will summarize and lay out the difference for clarification.
    Negative evidence refers to information about which strings of words are not grammatical sentences in the language, such as corrections or other forms of feedback from parents that tell the child that one of his or her utterances is ungrammatical. In its absence, children who hypothesize a rule that generates a superset of the language will have no way of knowing that they are wrong. Without it, language acquisition becomes much more difficult. Positive evidence refers to the information available to the child about which strings of words are grammatical sentences in the target language. Most of what a child hears during the language-learning years is fluent, complete, and grammatically well formed. This is important for knowing what is right in acquiring a language. We need both to acquire language!!!
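
    To make the superset problem concrete, here is a toy sketch in Python (my own invented mini-languages, not anything from Pinker): a hypothesis that over-generates can never be refuted by positive evidence alone, because everything the child hears is also in the superset.

    # A minimal sketch of the "superset problem" with invented toy "languages".
    target_language = {"dogs bark", "cats meow"}              # what adults actually say
    overgeneral_hypothesis = target_language | {"dogs meow"}  # a superset of the target

    def consistent_with_positive_evidence(hypothesis, heard_sentences):
        """A hypothesis survives as long as it covers everything the child hears."""
        return all(sentence in hypothesis for sentence in heard_sentences)

    # The child only ever hears sentences drawn from the target language
    # (positive evidence only), so the overgeneral hypothesis is never ruled out.
    heard = ["dogs bark", "cats meow", "dogs bark"]
    print(consistent_with_positive_evidence(overgeneral_hypothesis, heard))  # True
    print(consistent_with_positive_evidence(target_language, heard))         # also True

    # Only negative evidence ("'dogs meow' is not a sentence") or some built-in
    # constraint could tell the learner that the superset hypothesis is wrong.
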
    Lastly, the over-regularization theory was really intriguing to me. This error is common in children who have not been exposed to enough of the correct input (irregular verb forms such as break–broke), so the blocking principle cannot yet be applied. However, once the child is exposed to enough of this type of input, the blocking principle will take effect, and the child will no longer say “breaked” instead of “broke.” This follows a U-shaped pattern, where they are first good at irregular verbs, then they get a lot of them wrong, then with enough exposure to the correct forms they are good again.
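
    Here is another toy sketch (my own simplification, with an invented lookup table) of how the blocking principle is usually described: a stored irregular form, once learned, pre-empts the regular "-ed" rule, and before it is reliably stored the child falls back on the rule and says "breaked".

    # Toy illustration of blocking: memorized irregular forms block the regular rule.
    irregular_past = {"break": "broke", "go": "went"}  # exceptions the child has stored so far

    def past_tense(verb):
        # If an irregular form has been memorized, it blocks the regular "-ed" rule.
        return irregular_past.get(verb, verb + "ed")

    print(past_tense("break"))  # 'broke'   (stored irregular form blocks the rule)
    print(past_tense("walk"))   # 'walked'  (no stored exception, so the rule applies)
    # If "break" were not yet among the stored exceptions, the same procedure would
    # produce the over-regularized "breaked".
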

    Replies
    1. Positive and negative evidence of a category -- sampling both members and non-members -- is needed in order to be able to tell them apart. The corrections of UG would be no different from saying "yes, that's an apple" and "no, that's not an apple." Then your brain can figure out what the difference is: what features distinguish apples from non-apples -- or UG-compliant utterances from UG-violating utterances: namely, the rules of UG.

      But how can you learn those UG rules (or what's an apple) if everything is UG-compliant (or an apple)?

      Remember "uncomplemented categories"? Categories that everything is a member of. ("Laylek")? That's what's meant by positive evidence only, or the "poverty of the stimulus." (This will come back again when we get to the "hard problem.")

      Pinker's second piece of carelessness (or faulty understanding) is that he does not distinguish the real problem -- UG-learning -- from ordinary grammar-learning. He mixes them up. Past-tense learning, regular verbs and irregular verbs are not examples of UG! They are perfectly learnable. And they are learned. Errors are made. And corrected. No poverty of the stimulus. The example is completely irrelevant.

      Pinker's past tense example and the conclusions from it are as silly as Fodor's claim that we can't learn any categories at all, because of poverty of the stimulus ("vanishing intersections": no features that distinguish all members from all non-members), so all categories must be innate.

      (This is what happens when pygmies get stuck in the shadow of a giant...)

      By the way, although the rules of UG are not learnable from the (impoverished, positive-only) data available to (and produced by) the language-learning child, the rules of UG are learnable (otherwise how would Chomsky have been able to tell us what they are?).

      They are learnable by teams of linguists who, unlike the child, have decades to try to figure out explicitly, by trial and error, what the UG rules might be: "Look, if X is a rule of UG, then I ought to be able to say this and not *this." Now since all language speakers "know" (implicitly) the rules of UG, if *this violates UG and this doesn't, then they have the kind of positive and negative evidence they need.

      (And that's more or less how teams of linguists at MIT and other universities have been piecing together the rules of UG across the years.)

  3. - I think the part where Pinker talks about phrases really connects with the reading from last week about having a sufficient dictionary, and how constraints learned from experience could help the child know what structure words should follow in a sentence. However, I’m a little confused about the difference between syntax and level-ordering, because the paper says that constraints on level-ordering may be innate, but isn’t level-ordering part of syntax? Admittedly, it’s not really like rules for combining words into phrases and phrases into sentences, but the axioms of level-ordering seem similar in intention to the structural relations of words in sentences. By intention, I mean it sounds right in certain ways, and sounds weird if the rules were combined randomly.

    - I want to make sure I got this right, but structural relations in sentences would be the learned rules of syntax based on positive instances the child took as input, and grammar would be the output based on the learned rules and the application of principles from their innate UG? Grammar would thus be based on parameters that we have set from the constraints provided by our parents? In this case, it makes me wonder how we were ever able to determine what the rules of language were in the first place, since our parents must have learned from their parents, and so on. I wonder if it had been due to how certain word or phoneme combinations affect our visual or auditory perception of them as being pleasant or weird, or how the structures in the vocal tract have shaped language structure, rather than the language structures shaping our vocal tract, as suggested in the paper. However, this seems implausible, as it relies on the impact of the environment on our senses, which makes me even more curious about language origin and its impact on language structure, and how language origin may be able to explain and perhaps predict trends in language acquisition.

    - It also confuses me how the innate capacity to learn language is affected by the growth of the brain. The fact that the mute children found in the wild are mute because they had not had enough exposure to language, and that there is a critical period for learning a language, shows the impact of the environment, but UG doesn’t explain what capacities of language are innate and what requires learning. Seeing how deaf children can have language despite not being able to hear it or knowing how to speak it makes me wonder what language even is in terms of its physical representation in our brains, and how growth can affect the capacity for language, rather than there being a capacity that unfolds itself as we grow.

    - This is an unrelated question about this week’s reading, but from the class a few weeks back. I have a question about movement and learned skills with regard to categorization and cognition. So we aren't categorizing when we're using a motor skill that we've developed expertise for, but I recently learned about embodied cognition, which suggests that gestures ground people's mental representations in actions, and that a study showed that when dancers view dance videos that they are familiar with, they are able to simulate/embody the action during the action observation. Don't these results suggest that we are categorizing when we perform actions? Or is it because experts have those categories from the initial learning of those motor skills that, once they have mastered the skills, the person's concept exerts a top-down influence on their actions, but there is no bottom-up influence from the sensorimotor interaction with the environment during the action execution? So they are not doing any kinds of things with the environment, but they are just doing? And the learned categories are based on instances they have seen in the environment and decide to try themselves?

  4. http://www.telegraph.co.uk/news/science/science-news/12073587/Meet-Nadine-the-worlds-most-human-like-robot.html

    Just found this article and it's obviously extremely pertinent to our class.. what do you guys think??

    Replies
    1. It's much easier to generate the kind of superficial cosmetic features that fool people (but just for a little while) than to generate real T3 capacity. It only takes a few minutes to put Nadine through her paces and show that she's just a robot. But try that with Renuka...

  5. In reading this text, I do not understand why so much emphasis is placed on the poverty of the stimulus argument, which stresses that children do not receive enough language stimuli to be able to learn language entirely from the language that they get to hear. This theory, in particular, states that children must have some innate language learning mechanism to be able to form the grammatical sentences they form. While children learn language, they do not get enough negative feedback on ungrammatical sentences to understand which sentences are well formed and which are not.
    I do not understand why so much weight is accorded to this theory. I do not think that negative feedback is necessary to learn language, even without an innate mechanism meant to help children learn languages. Why is it not thought that children simply tend not to produce sentences that do not resemble the ones other people produce, and to produce only sentences that do resemble them? Why can't language acquisition be compared to, say, the use of technology?
    There is a form of functional fixedness that comes into play where technology is concerned. Computers, for example, are used to write and read texts and to watch movies, but they are not used as umbrellas. Children tend to use technology the way their parents use it. In order to do this, they do not have to receive negative feedback from their parents. It is unthinkable for a child to use technology for some purpose other than what it is meant for, and negative feedback is not necessary for this. A child does not need to throw their parents' cell phone and receive the negative feedback that comes with that action to know that a phone cannot be used as a tennis ball. Children do not tend to do so because this is not the use for phones that they see.
    Another concern I have related to Universal Grammar is that, despite humans having some universal grammar, they cannot understand what animals, which lack any sort of grammar or rules in their communication and cannot form propositional sentences, say to each other. It seems to me that if animals lack grammar and cannot form propositions, then it would be much easier for humans to understand the short utterances they communicate with.

    Replies
    1. You are not distinguishing ordinary grammar (learned, with plenty of positive and negative evidence) from UG (no negative evidence: everything the child hears and says complies with it: how?).

  6. 1
    “Many other small effects have been documented where changes in information processing abilities affect language development. For example, children selectively pick up information at the ends of words (Slobin, 1973), and at the beginnings and ends of sentences (Newport, et al, 1977), presumably because these are the parts of strings that are best retained in short term memory. Similarly, the progressively widening bottleneck for early word combinations presumably reflects a general increase in motor planning capacity. Conceptual development (see Chapter X), too, might affect language development: if a child has not yet mastered a difficult semantic distinction, such as the complex temporal relations involved in John will have gone, he or she may be unable to master the syntax of the construction dedicated to expressing it.”

    This makes a lot of sense to me. However, could it be possible to see it the other way around? Could language acquisition provide some sort of “cognitive steroid” which significantly improves information processing ability? We saw earlier in the course that language allows us to ground symbols for progressively more complex categories such as "beauty" and "justice" that we can't point to in the world, which may facilitate higher-order thinking. That being said Pinker is talking about syntax, not semantics here.

    2
    “Children whose mothers use Motherese more consistently don’t pass through milestones of language development any faster.”
    “When given a choice, babies prefer to listen to speech with [the properties of Motherese] than to speech intended for adults.”

    Within the same section it seems to me that there is a bit of a contradiction. If babies prefer to attend to speech that has the properties of Motherese, why doesn’t it help them acquire language?

    Replies
    1. Hi Joseph,
      I’ll do my best to clarify your second point. In terms of babies preferring to listen to motherese versus regular adult speech, there are two reasons I can see for this: either their own mother/parents speak to them in motherese, in which case that kind of speech is more familiar, or (more likely) motherese is inherently simpler than regular adult speech. In the case of language acquisition, just as it’s easier to understand simple sentences rather than complicated ones when learning a foreign language, it’s easier to understand simplified motherese than regular adult speech. This, however, does not mean that babies do not understand regular adult speech as well. They might not (yet) understand it as completely or fully as motherese, but even if they did, it seems intuitive that it’s more pleasant to listen to simple sentences than to long, drawn-out ones with “big” words. So in terms of milestones of language development, the paper says that the complexity of the input does not affect language development. That means that whether the child is exposed to motherese or regular adult speech, they won’t acquire language any faster. This does not mean that they can’t prefer motherese; these are simply two different points.
      As a side note, motherese is actually not very effective in facilitating language acquisition. This is fortunate, because should children learn their language via motherese only, they would end up learning a simplified and less complex language than the target adult language. That is, learning from motherese would lead to less proficiency in the language in the end, so in the long run it would not be facilitating language acquisition (because the acquired language would be incomplete).

  7. In class, it was stated that names are arbitrary in both verbal language (like English) and non verbal language (like American Sign Language (ASL)).

    It is difficult for me to accept that names are arbitrary in ASL. For example, the ASL sign for ‘cat’ mimics the gesture of stroking a whisker with the thumb and index finger, and the ASL sign for ‘smile’ is moving your two index fingers away from one another in front of your mouth, while simultaneously opening your mouth to present a smile. To me, neither of these signs seems arbitrary. Rather, the names seem to be chosen based on some reason or system. The signs seem so sensible that some of them might be correctly guessed by a non-ASL speaker.

    Replies
    1. Hi Lucy! I agree with you, since I had the same thought about this. It does seem that non-verbal languages are not as arbitrary as verbal languages. For me, a non-verbal language like ASL could be a potential protolanguage. For example, when humans were first able to communicate their thoughts but didn’t possess any common verbal language, they might have communicated through non-verbal means, gradually attaching arbitrary verbal sounds to those gestures. Using pantomime does seem to communicate ideas that most cultures could understand. On the other hand, words don’t carry in themselves anything that can be related to the object. I still agree that ASL is a language, but it seems based on interpretable gestures.

    2. Hi Roxanne,
      Though humans did not possess any common verbal language, they did possess a tongue and vocal folds (they could make noises and sounds). I wonder if non-verbal communication came before verbal communication, or if they arrived together and the reliance on body language drifted away, resulting in verbal communication as the primary form of human communication. Many deaf people who communicate through ASL actually do make sounds when signing and spelling words. (I don't know the answer, just a curious thought about how verbal language arrived in human communication.)

    3. Hi guys,

      Although it’s clear that many names in ASL have their ‘shape’ as symbols derived from the properties of the things they refer to, I do not think that this should be taken to mean that “verbal languages are less arbitrary than non-verbal languages.” The names in sign language are arbitrary in the sense that their gestural shape need not represent the definite properties of the things that they refer to in order to refer to them. If you swap the names of objects around, once you and everyone you wish to communicate with learns the new assignments, the language will work just as well as before.

    4. This comment has been removed by the author.

    5. Hi Lucy, I think language and the capacity/motivation to communicate might have evolved at the same time. So there is some chance that the capacities for non-verbal language and verbal language co-evolved!
      Timothy: I agree that ASL gestures don't need to represent the things they refer to; but it seems to me they sometimes do, compared with verbal languages, which contain only arbitrary names (except for onomatopoeia: names that phonetically imitate the things they refer to).

  8. “Though artificial chimp signaling systems have some analogies to human language (e.g., use in communication, combinations of more basic signals), it seems unlikely that they are homologous. Chimpanzees require massive regimented teaching sequences contrived by humans to acquire quite rudimentary abilities, mostly limited to a small number of signs, strung together in repetitive, quasi-random sequences, used with the intent of requesting food or tickling (Terrace, Petitto, Sanders, & Bever, 1979; Seidenberg & Petitto, 1979, 1987; Seidenberg, 1986; Wallman, 1992; Pinker, 1994a). This contrasts sharply with human children, who pick up thousands of words spontaneously, combine them in structured sequences where every word has a determinate role, respect the word order of the adult language, and use sentences for a variety of purposes such as commenting on interesting objects.”

    I take issue with this sort of sentiment. Maybe it’s just because I feel the need to defend the chimpanzees in question, but I also think there are flaws nonetheless. Why is Pinker able to confidently say that, though chimps do use some language and use it to communicate, this is not similar in function to human communication with language? Is that because he is saying it is more rudimentary? Doesn’t that mean the function can be the same even if the delivery is different? Just because human babies and chimps both have the ability to acquire means of communication, but obviously do so in different ways and at different speeds (ultimately the human baby far surpasses a chimp's limitations), that doesn’t seem like a satisfying reason to say that their limited form of communication is not homologous in function to ours. At its core, isn’t our communication functioning to help us send and receive messages from other beings, just as theirs is? To me it feels like this is a superiority argument, and it also dismisses any form of language that animals might use without help/teaching from humans.

    I’m not trying to say that chimps have the same capacity for language as humans do – from our interactions with chimps, we know that their language has limits. But I’m questioning our ability to say that their function of language and communication is so different from ours at its core. Further, how can we prove the capacity of the animal’s mind when it comes to language if we’re basing it solely on behavioral data that the animal performs for us? Could there be some sort of I-language that a chimp has internally but is unable to communicate in ways we can detect or are satisfied with?

    Replies
    1. Also:
      “Everywhere, children announce when objects appear, disappear, and move about, point out their properties and owners, comment on people doing things and seeing things, reject and request objects and activities, and ask about who, what, and where. These sequences already reflect the language being acquired: in 95% of them, the words are properly ordered”

      Would it be correct to think of this as the start of categorization? It seems to be a look into the process of learning the names of categories that we have an innate ability to detect.

    2. And is the hypothesis language the result of UG + environment? I'm confused about where it comes in

    3. Hi Alba,

      I think the root of the problem stems from the difference between language and communication. We have the capacity for language, which means that we can express and understand any proposition (subject/predicate utterance/statement with a truth value of T/F) i.e. through English we can say anything and everything that can be said. On the other hand, there are many studies that show how proficient Chimpanzees and Gorillas are at communicating, as well as how intelligent they are and how sophisticated their interactions can be. That being said, the way that they communicate is not through our propositional approach of language. So I feel that we are not questioning the ability of Chimpanzees to communicate, which is similar in function to human communication (i.e. they can show affection and hostility in their communication like we can). I do agree with your sentiment that being able to communicate in different ways and learn at different speeds should not play into deciding which form of communication is ‘superior.’ Once again I think it comes down to the definition you have for language, as Pinker says “whether one wants to call their abilities ‘language’ is not really a scientific question, but a matter of definition: how far are we willing to stretch the meaning of the word ‘language.’”

    4. Hi Alba and Melissa,

      In regards to your question about why chimpanzees' communication skills aren't considered language, I believe it has to do with the fact that the chimp cannot use the propositions she is taught to exponentially express anything and everything that can be said.

      A characteristic of language is that it can express anything and everything that can be said. So considering the chimp's limitation, he/she might not be actually using language at all.

  9. “Ervin-Tripp (1973) studied hearing children of deaf parents whose only access to English was from radio or television broadcasts. The children did not learn any speech from that input. One reason is that without already knowing the language, it would be difficult for a child to figure out what the characters in the unresponsive televised worlds are talking about. “

    I had previously just assumed children could in fact learn language from television or radio, so it was interesting to see this study mentioned. I do still wonder whether these media improve the learning of hearing children who are otherwise also learning language from their parents and community. My intuition says they should, because more examples of input should allow for better training, but most of what I know about language acquisition leads me to believe that it still wouldn’t matter much at all. This leads me to wonder about the case of blind children, whose access to the nonlinguistic world is obviously severely limited. From what I recall reading, blind children of course still learn language very successfully, albeit a bit delayed or more slowly. Should my understanding be that this delay in acquisition occurs because there is less clear responsiveness with language for these children? Parents, I’m assuming, interact and talk with their blind children the same amount if not more, but it may not be as clear to the child which sounds are getting responses, since they are missing the vision to make stronger connections. I again lean towards believing that radio or the sound from television should help these blind children learn language. If they are not making the connections from language to responses with sight, then all they are left with is touch to understand responses, and that seems to me like a much smaller amount of input. I would assume the brain has a better way of taking in language even without context or response from the radio, and would be able to use it as a learning tool. But again, these are mostly just intuitions and not necessarily based on research evidence.

  10. After reading this article, I am slightly confused as to which grammatical rules are ordinary and which are part of Universal Grammar. I understand that UG is inborn grammar present in all languages, but I am not a linguistics master by any means and do not understand how the average person would be able to pick up on whether certain rules are UG or learned through observation. Pinker talks about the Minimal Distance Principle, which states that the subject of the verb at the end of the sentence is the noun nearest to it. Children are more likely to make mistakes with this principle. Does this mean that the Minimal Distance Principle is a part of ordinary grammar?

    Replies
    1. Hi Amanda, I found the portion of this article about the Minimal Distance Principle confusing as well. I think you're right that “the Minimal Distance Principle [is] a part of ordinary grammar.” But it's not because children are more likely to make mistakes with this principle, it's just because children make mistakes with this principle at all. If this principle is a part of UG, then children would never make these mistakes because there would be innate structures facilitating the application of this rule. Since these structures are innate, the principle would always be applied correctly, because there is no way the child could learn an incorrect version of it. Experience has absolutely no influence over innate properties. On the other hand, if the Minimal Distance Principle is just part of the ordinary grammar a child learns through exposure, it is possible to make incorrect assumptions about the rule and apply the principle incorrectly, at least until the child has accumulated enough experience to drop all their incorrect hypotheses in favor of the actual rules of the principle.

      However, this is just an example of something that is not UG. I'm not exactly sure how you could prove a principle is definitely a part of UG. Never making a mistake with that principle? But how can you test for all the various possibilities where a mistake could be made? For example, Pinker mentions the Structural Principle and how children who understand passive construction will interpret sentences in a way that violates the Minimal Distance Principle so they can adhere to the Structural Principle. This experiment may prove that children “have acquired a grammar of the same design as that spoken by their parents” (page 19), but it does not say anything about whether the Structural Principle is innate or learned through experience. In a different set of circumstances, children could violate the Structural Principle in order to adhere to a more fundamental rule of grammar, just like they violate the Minimal Distance Principle here. How can we empirically determine, beyond all doubt, that one grammatical rule is innate (UG) while another is merely learned by experience (ordinary grammar)?

  11. "Crucially, the rules cannot apply out of order. The input to a Level 1 rules must be a word root. The input to a level 2 rule must be either a root or the output of Level 1 rules. The input to a Level 3 rule must be a root, the output of Level 1 rules, or the output of Level 2 rules."

    I’m having a little trouble understanding this article. Later, Pinker says that children can learn language by connecting syntax and semantics (connecting objects initially to nouns and actions initially to verbs) and then, from there, learning more and more abstract applications of syntax. Somehow, because of innate, inborn UG, the child generalizes rules and is able to learn language extremely fast. I don’t contest that there is some sort of innate constraint, because Pinker provided a very nice permutation of possibilities later to show it would be impossible to learn language purely through experiential learning. I did understand overgeneralization, because we’ve previously reviewed that in PSYC 213, where the child does say “breaked” occasionally but eventually understands that the proper past tense form of “break” is “broke,” and that this has to do with experiential learning. However, the discussion of Level 1, Level 2 and Level 3 rules went totally above that. I have no idea how a child would be able to tell that “Darwinisms” is okay but “Darwinsism” is not. Is this what we talk about in terms of innate rules, that a child knows where it’s proper to add the “s” to make a plural? I just have trouble understanding how it’s possible to go from context and semantics to this level of understanding of how grammar works, but as always, since it is innate, perhaps I am unable to be aware of a possible cognitive mechanism that could mediate this. I did enjoy reading about parameters at the end of the text, because I have a tiny bit of experience with Spanish and I did question how it was possible to just drop “I” (Yo) in sentences and how that would shape Spanish as a language. Though I didn’t understand exactly how cascade effects work, I think it would make sense that there are innate rules that children have in UG and that the details of the language manifest these rules differently. However, I’d like a clearer explanation of what exactly UG is, and whether the ordering of Level 1, 2 and 3 rules is a defined component of it. If so, I would find the UG capabilities to be incredible, and I am very curious to see how Dr. Harnad explains how this works via “kid sib.”
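
    To make the level-ordering constraint a bit more concrete, here is a toy sketch in Python (my own simplification: the level assignments are assumed for illustration, not Pinker's full system). The idea is just that each suffix belongs to a level and suffixes may only attach in non-decreasing level order, which is why "Darwinisms" passes and "Darwinsism" does not.

    # Toy level-ordered morphology: suffix levels may never decrease in a derivation.
    SUFFIX_LEVEL = {"ian": 1, "ism": 2, "s": 3}  # assumed assignments; "-s" is the regular plural

    def derive(root, suffixes):
        """Return the derived word if the suffix levels never decrease, else None."""
        levels = [SUFFIX_LEVEL[s] for s in suffixes]
        if all(earlier <= later for earlier, later in zip(levels, levels[1:])):
            return root + "".join(suffixes)
        return None

    print(derive("Darwin", ["ism", "s"]))   # 'Darwinisms' (level 2 then 3: allowed)
    print(derive("Darwin", ["s", "ism"]))   # None (*'Darwinsism': level 3 before 2 is blocked)
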

  12. Two unrelated points on this article, although they’re both pretty short, so I’m gonna keep them in the same post.

    But if the features are the things that do define grammatical categories, like agreement and phrase structure position, the proposal assumes just what it sets out to explain … It is a general danger that pops up in cognitive psychology whenever anyone proposes a model that depends on correlations among features: there is always a temptation to glibly endow the features with the complex, abstract representations whose acquisition one is trying to explain. (page 20)

    First, Pinker’s point about circular reasoning made me realize just how common this problem was in a lot of the articles we have read and a lot of the topics we have been discussing. From the homuncular explanation to the Systems Reply, it seems like explanations that merely beg the question rather than answering it are going to be a regular occurrence in this line of work. However, I’m having a hard time identifying these cases. We rarely see obvious examples of this fallacy in the readings; a circle is a circle because it’s a circle is clearly circular reasoning (ba dum, tss). But the arguments we encounter tend to be a lot more nuanced in their logical weaknesses, especially considering topics like language acquisition are totally new to me.

    At first, I thought the possibility “that the child sets up a massive correlation matrix” was a valid solution to the bootstrapping problem. If Pinker had not explained exactly why it was incorrect, I probably still would. But once he pointed out the circular reasoning, it became crystal clear why this solution was false. Does anyone have any tips on how to pick up on this form of argumentation? I feel gullible walking my way through an argument and reasoning out why it makes sense, only to read in the next paragraph exactly why it does not make sense. Are there any tell-tale signs I should be looking for? Anything that I definitely won’t miss, even if the fallacy is very well-hidden within the assumptions of the argument?

    But by unconsciously labeling all nouns as "N" and all noun phrases as "NP," the child has only to hear about twenty-five different kinds of noun phrase and learn the nouns one by one, and the millions of possible combinations fall out automatically. (page 23)

    Second, this point that Pinker raises made me recognize just how crucial categorization is to language. When I first read the story of Funes, I had trouble understanding why a real life Funes could not exist. Sure, he may seem a bit eccentric and have trouble with day-to-day activities, but we just read a story about him that seems realistic enough. Why would his inability to think, as Borges puts it, prevent him from being able to speak? However, Funes’ problem isn’t his inability to think, it’s his inability to categorize:

    To think is to forget a difference, to generalize, to abstract. (last page of the Funes story)

    Although Borges uses the word “think” here, what he is really describing is categorization. And as we learned from Pinker, our ability to categorize is a direct requirement of our ability to use language. Without it, children would never master languages, because there is simply no other way to sift through the infinite array of grammatical possibilities. “The child must couch rules in grammatical categories like noun, verb, and auxiliary, not in actual words” (page 22). So Funes could never exist because he cannot properly categorize, and there is no way to learn languages that does not involve categorization.
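
    To make the combinatorial point in the Pinker quote above concrete, here is a toy sketch in Python (invented templates and numbers, not Pinker's actual figures): once the child has rules stated over the categories N and NP rather than over individual words, the combinations come for free.

    # Toy illustration of rules over categories: templates x nouns = phrases for free.
    from itertools import product

    nouns = ["dog", "idea", "robot"]                        # learned one by one
    np_templates = ["the {}", "a {}", "every {}", "no {}"]  # learned once, over the category N

    noun_phrases = [template.format(noun) for template, noun in product(np_templates, nouns)]
    print(len(noun_phrases))   # 12 phrases from just 4 templates and 3 nouns
    print(noun_phrases[:3])    # ['the dog', 'the idea', 'the robot']

    # With word-specific rules the child would have to learn all 12 combinations
    # separately; with ~25 templates and thousands of nouns, the savings are enormous.
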

  13. If I understand Universal Grammar (UG) correctly, it is an inborn set of rules that will never produce certain types of errors. Every natural language is compliant with UG. We don’t have to learn UG; does this mean UG is a logical consequence of how language is represented in our brain? I have to admit I don’t grasp the concept of UG perfectly. It seems odd that UG’s rules are so abstract that most people cannot grasp them. I guess we must trust experienced linguists on this matter.

    We saw in class that UG is a logical necessity for producing propositions, and that the constraints of UG are part of the constraints of thought. That seems reasonable to me: that there exists a certain set of rules which both thought and language are bound to obey. Since the brain has some physical and genetic basis we all share, it might well follow that there is some shared logic. At this point, I’m just amazed Chomsky ever discovered such a phenomenon!

    “A grammar is not a bag of rules; there are principles that link the various parts together into a functioning whole. The child can use such principles of Universal Grammar to allow one bit of knowledge about language to affect another. This helps solve the problem of how the child can avoid generalizing to too large a language,”

    Can we say UG is necessary for the shared capacity of language? If UG didn’t exist, could there still be language at all? Children are obeying the rules of UG; maybe that is why they are learning so fast, because they aren’t generalizing to too large a language.


  14. “Any theory that posits too little innate structure, so that its hypothetical child ends up speaking something less than a real language, must be false. The same is true for any theory that posits too much innate structure, so that the hypothetical child can acquire English but not, say, Bantu or Vietnamese.”

    I quite like these two sentences regarding a theory of language acquisition. They perfectly sum up the target of a theory of language acquisition and concisely state what such a theory does and does not encompass. That is, the theory must not posit too little innate structure: if there were too little innate structure, children would wind up speaking something less than a real language (because out of the infinity of possible languages, only a subset are logically possible; children might acquire the illogical ones). However, positing too much innate structure would mean that children were predisposed to acquire a specific language, i.e., pre-programmed to speak that language. This is of course not true. An obvious example would be adopted children: a child born to Swedish parents but fully raised in Spain by a Spanish-speaking family would speak perfect Spanish but not a word of Swedish, and therefore was not predisposed to learn Swedish.
    With regard to negative evidence, there is an important element that Pinker failed to mention: even when parents do supply children with negative evidence, children ignore it. For example, there are many instances of children overgeneralizing the pluralization rule, being corrected, showing they can pronounce the corrected word, but then reverting to their ungrammatical version when using the plural form within a sentence. This is important and compelling evidence with regard to the “uselessness” of negative evidence. I feel like it would have been a good idea to incorporate this into the explanation of why negative evidence doesn’t play a role in language acquisition.



  15. Pinker’s article states, “hence language acquisition depends on an innate, species-specific module that is distinct from general intelligence”

    Although this is an interesting point, I think Pinker should have clarified what he meant by general intelligence. This would have helped strengthen his arguments. In his article, he proceeds to describe how we acquire language by picking words out of phrases, which are then further categorized, and algorithms then help deduce the parameters of the language. Would it then be acceptable to see this categorization/deduction as a form of intelligence? And if so, how is the acquisition of language not linked to general intelligence?

    Replies
    1. Hi Rachel,

      I wholeheartedly agree that Pinker's definition of general intelligence is somewhat lacking. While I suppose it should be obvious to someone in the field, it would be nice if it were explained in a kid-siblier way.

      If I had to take a stab at it, I would say that Pinker is drawing a distinction between a module (or what he calls a "mental organ") and general intelligence (our capacity to solve problems, what might be measured by an IQ test). He goes on to describe the reason for the separation between the two when he cites that intact language can coexist with severe mental retardation. This, I think, is how he conceptualizes general intelligence.

      In doing so, he's bringing to light the idea that there is a brain region, functionally separate from other general cognitive functions, which is responsible for the generation of language.

  16. “Perhaps linguistic milestones like babbling, first words, and grammar require minimum levels of brain size, long-distance connections, or extra synapses, particularly in the language centers of the brain.”

    I understand how brain maturation, increased numbers of synapses etc. may support language acquisition and allow more neural plasticity to aid the encoding of new information contained in natural language. We know that learning requires synaptic plasticity and therefore acquiring a language would be most efficiently done during a period when the brain maximally facilitated these processes. But I see no reason for neural development to be the “Driving force underlying the course of language acquisition”. Perhaps Pinker means to say that this period of development in a child’s life is what contributes to the “critical period of language acquisition” taking place at this specified time. However his statement is misleading and seems to suggest that it is maturation of the brain in a toddler’s early years that causes language and this does not seem likely to be the case.

    Replies
    1. This comment has been removed by the author.

    2. Pinker's article on language acquisition was a delightful read. I especially enjoyed learning more about the critical period in a child's life for acquiring language. Of course I knew that a child's brain has more plasticity during childhood and that the earlier you learn a language the better, however I was not aware of how early this all happened (successful acquisition by age 4). I gave it some more reflection and started coming across more research involving the "reopening" of this critical period.
      There was news last week of DARPA launching a Targeted Neuroplasticity Training (TNT) program in an attempt to modulate peripheral nerves so that this period can be reopened and new learning can take place. It seems a bit far-fetched but nonetheless very interesting. If there has been evidence of peripheral nerve stimulation activating the same brain regions involved in learning, and if there is in fact a cascade of events that follows from this (release of neurochemicals, restructuring of network connections, increased connectivity), then it is plausible that we can actually enhance plasticity! That being said, perhaps we could then eventually avoid the decline in our ability to learn languages as we age and be able to use this cognitive training to aid in language acquisition.

      In addition, given that the adult brain does have plasticity even going into older age (see Rakic 2002 study on Neurogenesis in adult primate neocortex),

      Renuka, I think Pinker was talking about this neuronal development being part of the critical period, and he meant that it is during this period that successful language acquisition occurs? Do you think this research will guide us in the right direction for figuring out which mechanisms are at play during this critical period, when our brains are extremely adaptive?

  17. “Benjamin Whorf (1956), asserts that the categories and relations that we use to understand the world come from our particular language, so that speakers of different languages conceptualize the world in different ways. Language acquisition, then, would be learning to think, not just learning to talk.”

    This is the strong Sapir-Whorf hypothesis – the claim that language shapes our view of the world. I think that because the strong hypothesis is too strong, we must take a step back and reconsider this proposition.
    Later in the article Pinker discusses how children can think long before they can speak. Children can pick up on various cues in the world, and even in language, before they learn to speak themselves. So it is not the case that language is a prerequisite for thinking, which is what it seems Pinker is suggesting by saying “Language acquisition, then, would be learning to think.” What is true about language is our inability to get away from it and our predisposition for it: our brains are designed for language. As a result we can use language to express our thoughts. So I think it is foolish to suggest that language acquisition is also learning to think. It would be much better to suggest that language acquisition is learning to talk, and learning to use talk to express our thoughts (which we were certainly having before we could speak).

  18. Just a theory, but in situations where children might use two past tense forms like broke and breaked due to too large a hypothesis language, couldn’t they simply learn to use broke because they would constantly get positive feedback for it and not for breaked? So rather than the presence of negative evidence, it would be the absence of positive evidence for the improper tense?
    Although, I feel like some type of negative evidence has to be necessary. I kind of saw negative and positive evidence as being like supervised and unsupervised learning. Since we need corrective learning like supervised learning, I feel as though negative evidence must also be needed in language acquisition. Perhaps UG is this innate negative evidence that children seem to have, which somehow guides their formation of correct-sounding sentences.

    Replies
    1. This is what I was wondering too Ailish!

      I work at a daycare and am constantly hearing the 5-year-olds in my classroom use this kind of rule for their past tense ('breaked' instead of broke, as you mentioned). Many times I've corrected them, and they have even repeated the correction back to me, but this overregularization continues to override that learning.

      Are you suggesting that UG could start with overgeneralized innate rules and that as we age, we learn to use the negative evidence from our caregivers/environment to select the right choice/correct for the rule? I thought Pinker was trying to say negative evidence isn't how the complexity of our grammar skill advances?

      Then again, as Pinker mentions, "conceptual development might affect language development.. if a child has not yet mastered a difficult semantic distinction, such as the complex temporal relations involved in John will have gone, he or she may be unable to master the syntax of the construction dedicated to expressing it". So I'm a little confused. Is this example of "breaked" not just a solvable induction problem or does it require more concept learning?

    2. Hi Ailish and Linda,

      I think you both make a really interesting point about overregularized verbs! It seems like, as Ailish mentioned, because the children are constantly bombarded by plenty of regular verbs, the irregular version is not spoken – even if the child has previously spoken the verb in the irregular form before. So, while there is certainly a lack of positive evidence for that particular word, and, as Linda mentioned, frequent correction may not immediately result in the child’s use of the irregular form, this may simply be due to the difficulty of consolidating these novel instances. Since children in natural language learning environments are not undergoing rote memorization techniques to learn the irregular forms of certain words (like how an adult learning a language would), the process of consolidation is lengthened. Probably both negative and positive examples will assist with this process, since they are contextually different ways of learning the forms.

      I'm not too sure UG is the thing that explains overregularization of verbs, though; but I might just be misunderstanding you (I kinda see where you're going with calling it innate negative evidence, but I'm still a bit confused)! Irregular verbs vary between languages and aren't a facet of UG – 'to break' could be a regular verb in another language. Technically, "correct-sounding sentences" are still being formed in cases of overregularization; changing 'breaked' to 'broke' is a matter of ordinary, language-specific grammar rather than UG.

      Delete
  19. “ Scientifically speaking, the grammar of working-class speech -- indeed, every human language system that has been studied -- is intricately complex, though different languages are complex in different ways.”

    I really appreciated that this was emphasized, because I was about to wonder whether the grammatical validity of the input matters for the child. It doesn't really bear on language acquisition theories, though: as Pinker's example of rural American English shows, even if mainstream American English speakers don't accept its conventions, it still has its own internally consistent grammar. Thought of in this way, it really is remarkable that children acquire so much language, and that their language mirrors the speakers they are used to hearing.

    ReplyDelete
    Replies
    1. Hi Julia! I like your observation!

      This reminds me of a study that was discussed in one of my other classes. They studied the number of words that young children hear based on the level of education completed by their parents (no high school diploma, high school diploma, college diploma, etc.) and found that children whose parents graduated from college heard thousands more words (and consequently sentences) per year than children whose parents hadn't finished high school. The difference was huge. However, obviously all these children learn to use language effectively, albeit with some having a larger vocabulary than others. This feels almost like a complementary argument to the poverty of the stimulus: children don't hear everything yet still learn language, and children who hear more don't necessarily develop better use of language.

      Delete
      Hey Riona,

      I was also in that class and love the example - it's definitely a nice piece of complementary evidence to the poverty of the stimulus, and a strong argument in favour of the innateness of language acquisition ability.

      Delete
  20. This is a tangential comment, but something I've always been really interested in studying is language isolates.
    https://en.wikipedia.org/wiki/Language_isolate
    Controversially, many people say Korean is an example of this (one of the most widely spoken language isolates).
    Basically, the idea is that these languages have no known ancestors the way languages within families do (e.g. Indo-European languages, Austronesian languages, Niger-Congo languages, etc.).

    I mention this because it seems really obvious that Korean follows the same UG rules as any other language does. Does a language isolate become acquired differently than any other language?
    Why do language isolates exist?

    ReplyDelete
    Replies
    1. I think these sorts of languages are great examples of evidence supporting the existence of UG. I assume such languages would be acquired in the same manner as any other language, though their inception may have differed from languages that "evolved" from others. Perhaps these languages exist because of circumstantial situations, like populations of deaf individuals spontaneously developing sign-language isolates. While that sort of language-isolate development seems more plausible than an entire language like Korean coming out of thin air, it could be that something like it happened historically in spoken form (though I assume some sort of prior language exposure must have been involved).

      Delete
  21. When Pinker talks about "negative evidence", the argument works best when we're talking about grammar, or syntax. The idea is to see whether infants receive corrective feedback from their surrounding environment when they produce grammatically incorrect propositions.
    Pinker briefly discusses the history of experiments that have looked at this issue specifically. These studies involved bringing first-language learners into the lab and having them utter both grammatically correct and incorrect propositions to a language-knower. The researchers wanted to see whether parents would correct their children's grammar, and they found no support for that hypothesis. They found that children were usually corrected on the truth-value of what they said, rather than on the syntax of their propositions. In fact, Pinker notes that no studies have been able to show that parents react differently when their children use correct versus incorrect grammar. All of this, according to Pinker, points to Chomsky's Universal Grammar as a pre-existing cognitive capacity that humans are born with.

    Personally, I don't know that I would use the findings from those studies to support the concept of Universal Grammar. To say that syntax is inborn and semantics are learned is quite a hard-line theory; that is, it's a theory that would be hard to support conclusively. Nowhere in the literature we have discussed so far has it been shown that syntax and semantics are separate faculties. This seems to be the unstated assumption that pervades the article: that syntax and semantics can be studied separately in human subjects. Especially in the experiments mentioned above, I do not see how the experimenters controlled for the distinction between shape and meaning.

    Let me put it this way. A lot of the time, syntax affects the semantic meaning of a proposition. This is especially familiar to me because my mother spoke to me in her second language throughout my childhood. Many times, her misuse of grammar got in the way of my understanding her. For instance, she might say "You missed me!" when she means "I missed you!", as a result of direct translation from French to English. So I might have to ask my mother questions to find out exactly what she intends to communicate, because the grammar changes the meaning of the sentence.

    When Pinker talks about how adults usually correct language learners on the truth-values of their statements, I do not think this irrevocably supports the notion of Universal Grammar. The meaning (and therefore the truth-value) of a proposition relies on the use of grammar, and I really don't see how we could separate the two in order to study them independently.

    ReplyDelete
  22. “Ervin-Tripp (1973) studied hearing children of deaf parents whose only access to English was from radio or television broadcasts. The children did not learn any speech from that input. One reason is that without already knowing the language, it would be difficult for a child to figure out what the characters in the unresponsive televised worlds are talking about.”

    This immediately made me think of the symbol grounding problem. Could this be support for it? Namely, that we can't learn categories from other people who have experienced them until we have learned a minimal grounding set of our own from direct physical experience.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. Sorry there was a typo in my other comment! Here it is again,

      Hey Ian , I also had some thoughts about this specific section. Namely, I find it incredibly surprising that children can't learn any language from exposure to television speech, if UG is innate and not learned by the child (although it can be learned, as the linguists have demonstrated). If all it takes for a child to learn language is exposure to set the parameters on their innate UG ability, then why is TV insufficient? I suppose that the fact that the children obviously can't interact with the television might have something to do with it, but I'm genuinely shocked that they learn nothing from that speech. It also makes me think of the children he talked about who were raised in isolation and, as a result, mute. Is their muteness permanent? And if so, why would that be? Does the ability to acquire language truly have a strict window of time early in life (i.e. the period of incredible synaptic density in growth that Pinker discusses), and these children have simply missed the boat? But if their muteness is not permanent, can they ever learn language at the native level that a normal child does? Or, is it more akin to learning a second language, which we know is an entirely different process than learning a mother tongue, as our UG-grammaticality judgments are not flawless for a second language like they are for the first. I think that answers to these questions would be really illuminating for understanding the finer details of UG - and maybe they already exist.

      I also quite like your connection to the symbol grounding problem, although I'm not sure it would be support for the theory so much as an analogous scenario.

      Delete
  23. It is clear that ordinary grammar needs to be taught through supervised learning: in order to learn the correct rules of a language, there must be some kind of positive and negative evidence. This part is quite intuitive. However, with universal grammar (UG), it isn’t clear if any kind of learning is necessary; in fact, the theory of UG posits that there is no learning involved, that certain rules are innate. But perhaps the rules of UG are like mountains and valleys: Could it be possible that the difference between right and wrong grammar is so apparent, even to a child, that it is acquired through unsupervised learning?

    To me, this makes more sense than an "innate" set of rules of UG. If the "neurologically determined 'critical period' for language acquisition" does exist, and we know that synaptic pruning occurs in parallel with learning, it makes sense that children "learn" the rules of UG during this time. Another piece of evidence pointing to a type of learning is grammatical gender: Pinker states in Section 3 that grammatical gender presents no problem for children, but becomes "mystifying" to many adults learning a second language. If UG is de facto innate, wouldn't we see some lingering evidence of it in adults as well?

    Another question I had was whether UG applies to sign language. Presumably all the hand-signs and grammar are taught (via supervised learning), but do sign-language users never make UG errors?

    ReplyDelete
  24. One thing that strikes me about the idea of Universal Grammar is how little consensus there is among linguists about what it actually is. As discussed in the comments above, even Pinker seems to bring ideas like the Blocking Principle into the UG discussion which do not seem to add to the argument for an innate language capacity. Instead, past-tense irregular forms seem possible to acquire through analogy and pattern-spotting, and this is also an area of grammar about which children are often given feedback. If the Blocking Principle were innate, it would seem to imply that once the correct past tense has been learnt, the regular rule would always be suppressed for that verb. However, adults do sometimes make errors, and there are also verbs with ambiguous past-tense forms that people switch between (dreamt vs dreamed), which seems incompatible with this analysis.
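    To make the Blocking Principle claim concrete, here is a minimal sketch (my own illustration, not Pinker's formulation) in which a stored irregular past tense pre-empts the regular '-ed' rule whenever it can be retrieved, so overregularization shows up exactly when retrieval fails. The retrieval_strength parameter is an invented simplification.

```python
# Minimal sketch (my own, not Pinker's formulation) of the Blocking
# Principle: a memorized irregular past tense, if retrieved, pre-empts
# the regular "-ed" rule; overregularization ('breaked') appears when
# retrieval of the stored form fails.
import random

IRREGULAR_PAST = {"break": "broke", "go": "went", "dream": "dreamt"}

def past_tense(verb, retrieval_strength=1.0):
    """retrieval_strength models how reliably the memorized irregular
    form can be recalled (1.0 = adult-like, lower = child-like)."""
    if verb in IRREGULAR_PAST and random.random() < retrieval_strength:
        return IRREGULAR_PAST[verb]   # the irregular form blocks the rule
    return verb + "ed"                # otherwise, fall back on the rule

random.seed(0)
print([past_tense("break", retrieval_strength=0.5) for _ in range(5)])
# a mixture of 'broke' and 'breaked', as in children's speech
print(past_tense("walk"))             # 'walked': nothing to block the rule
```

    On this picture, the adult dreamt/dreamed variability mentioned above would just be a retrieval strength that never quite reaches 1.0, which may or may not be what proponents of an innate Blocking Principle intend.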

    In my Linguistics classes at my home university we had long debates about the essence of Universal Grammar. So I will try and give a brief summary of what I understand about UG. The most recent ideas behind Universal Grammar relate to the ‘Principles and Parameters’ theory. There have actually been very few ‘universal’ rules found that hold true across all languages in the world (Principles). Initially, languages were studied from a very Western point-of-view, but as linguists analyzed data from languages across the globe it became clearer that there were many counterexamples to the initially proposed universal rules. However, many people believe there might be a limit to the extent to which languages can vary. It has been argued that Universal Grammar sets constraints on possible grammatical structures (Parameters). This argument helps explain why children are able to accurately acquire complex elements of syntax, despite limited input and feedback (poverty of the stimulus argument). Instead of having to explore infinite grammatical possibilities, they have an innate knowledge of potential ‘options’.

    However, there remains much debate about how these 'parameters' work. One proposal is that there are a number of innate grammar 'settings' available, and the child learns which setting is appropriate for the particular language from the input available. Thus the parameter is like a switch, with a limited number of options to choose from. It has also been speculated that many of these 'settings' co-vary, meaning that once one has been correctly identified, the others can be automatically set, which helps explain how grammars are acquired so quickly and so accurately despite a small amount of language input. But there is still huge debate in the field about whether all languages adhere to the settings suggested, or follow these parameter 'cluster' patterns. Another line of debate is whether there is an innate 'default' setting (an 'unmarked' option) for each parameter, or whether the parameters start 'unset', meaning all options are equally available. And there seems to be no agreement on what, or how many, parameters exist.
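    As a toy illustration of the 'switch' picture (my own sketch; the parameter names, trigger conditions, and defaults are invented and not a claim about any actual proposal), parameter setting can be thought of as flipping a small set of binary values on the basis of positive evidence alone:

```python
# Toy sketch of the 'parameter as a switch' picture. The parameter
# names, trigger conditions, and defaults are invented for illustration;
# real proposals differ on all of these.

def set_parameters(input_sentences):
    params = {"null_subject": None, "head_initial": None}   # None = unset
    for s in input_sentences:
        # A tensed sentence with no overt subject is positive evidence
        # for allowing null subjects (e.g. Spanish 'Habla ingles').
        if s.get("has_subject") is False:
            params["null_subject"] = True
        # A verb preceding its object suggests a head-initial setting.
        if s.get("verb_before_object") is True:
            params["head_initial"] = True
    # Parameters never triggered fall back to a default; here the more
    # restrictive value, so the learner never has to retreat.
    return {name: (value if value is not None else False)
            for name, value in params.items()}

english_like = [{"has_subject": True, "verb_before_object": True}]
spanish_like = [{"has_subject": False, "verb_before_object": True}]
print(set_parameters(english_like))  # {'null_subject': False, 'head_initial': True}
print(set_parameters(spanish_like))  # {'null_subject': True, 'head_initial': True}
```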

    Another theory we discussed is Optimality Theory (usually applied to phonological acquisition, but it has been extended to syntax too). This theory puts forward the idea that there are universal constraints, which can be violated. Languages rank these constraints in order of importance (i.e. it would be worse to violate a higher-ranked constraint than a lower-ranked one). During language acquisition, children therefore learn how the input language 'ranks' each constraint, allowing them to choose the correct ordering of constraints, which results in the correct grammar. However, it seems to be a nearly impossible task to pinpoint what exactly these universal 'constraints' are.
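    And here is a toy sketch of Optimality-Theory-style evaluation (again my own illustration; the constraints and violation counts are made up). The point is only the procedure: candidates are compared on ranked constraints, and re-ranking the same universal constraints yields a different grammar, which is what the child would have to learn.

```python
# Toy sketch of Optimality-Theory-style evaluation. Constraints and
# violation counts are invented; only the evaluation procedure matters.

# Hypothetical candidate outputs for an input like /pat/, with the
# number of times each candidate violates each constraint.
VIOLATIONS = {
    "pat":  {"NO-CODA": 1, "MAX": 0, "DEP": 0},   # keeps the final consonant
    "pa":   {"NO-CODA": 0, "MAX": 1, "DEP": 0},   # deletes it
    "pata": {"NO-CODA": 0, "MAX": 0, "DEP": 1},   # adds a vowel
}

def optimal(candidates, ranking):
    """Return the candidate whose violations of higher-ranked
    constraints are fewest (lexicographic comparison)."""
    def profile(cand):
        return tuple(VIOLATIONS[cand].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

print(optimal(VIOLATIONS, ["NO-CODA", "MAX", "DEP"]))   # 'pata'
print(optimal(VIOLATIONS, ["MAX", "DEP", "NO-CODA"]))   # 'pat'
# Same constraints, different ranking, different grammar: what the
# child has to learn is the ranking.
```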

    ReplyDelete
    Replies
    1. Comment continued (ran out of characters)
      What this paper, and the theories discussed above, show is that no-one really knows what Universal Grammar is. And those who argue for Universal Grammar are arguing for very different things. I question whether Universal Grammar can really be defended if there is so little agreement on what it might contain. Despite Chomsky's efforts, it seems we are still a long way off determining how language is acquired. Before arguing about the evolution of Universal Grammar, perhaps we should try to define it properly first.

      Delete
  25. I found this article very interesting and I like how it gave a comprehensive overview of the topic. However, my peers have already adequately voiced and answered most of the questions I had for this reading. I'm wondering, though, what the relevance is of separating general intelligence and language on the basis that one can be affected while the other remains intact. I understand why this is reasonable, but couldn't there be damage to a third component (one that feeds into either system) that causes one of them to malfunction? Couldn't, theoretically, the hub for language and general intelligence be the same, with the same mechanisms, while the processes that feed into them differ and are affected in only one case? I have no particular reason to think this; I'm just wondering how we can be so quick to use this as evidence for two discrete systems. Devil's advocate, I guess. Maybe it's trivial.

    I think this was touched on already but I’m definitely seeing the parallels between UG and whatever innate system we have in place to be able to learn categorization. I hope this is a correct assumption.

    I'm also having serious trouble comprehending UG. It seems like such an abstract conceptual entity. It's hard for me to distinguish between language parameters and UG itself, and I don't think Pinker made this much easier for me.

    ReplyDelete
  26. I've encountered this paper, and others of Pinker's, on several occasions. Every time, I have to start over at the task of wrapping my head around the concept of language as something to study separately from our ability to understand and categorize. I find Williams syndrome fascinating; it emphasizes how language can be understood in terms of its social function rather than its information-processing function. While Pinker argues that language cannot be the internal language of thought, I find it important as perhaps its own type of external cognition: if I want to think something over, I won't do it in my head but rather scratch words on paper or free-write; I use language as a tool. I do wonder, however, whether language is successful at capturing all that we categorize and think about, and whether we have a chance at exploring and expressing those things without language.

    Finally, I wonder why we don't tell children to stop producing ungrammatical sentences; is it because our own language-learning module is simply more efficient with positive evidence (which doesn't make sense in terms of hypothesis-testing, since disconfirming is quick)? Overall, this paper just leaves me unsatisfied because of the large number of questions it posed!

    ReplyDelete
  27. "These results, though of profound importance, should not be too surprising. Every speaker of English judges sentences such as I dribbled the floor with paint and Ten pounds was weighed by the boy and Who do you believe the claim that John saw? and John asked Mary to look at himself to be ungrammatical. But it is unlikely that every such speaker has at some point uttered these sentences and benefited from negative feedback. The child must have some mental mechanisms that rule out vast numbers of "reasonable" strings of words without any outside intervention."

    I get what this point is getting at, and I think it's interesting. However, I think it gives UG a little too much credit. I understand that UG is different from the rules of grammar (UG is sort of like the frame of the house, while syntax and specific grammar rules are the furniture and the insulation), but to say that children have some mental mechanism that helps them figure out reasonable strings of words without ANY outside intervention ignores the influence of several things. Sure, children might not receive feedback for every instance in which they speak syntactically or grammatically incorrectly, but that cannot mean children MUST be figuring it out entirely on their own. Outside intervention doesn't even necessarily have to mean "correction"; it could be something as simple as overhearing adult conversation and eventually noticing some degree of incongruence, an incongruence they perhaps never would have noticed without that minimal outside influence.


    "For example, Gleitman (1990) points out that when a mother arriving home from work opens the door, she is likely to say, "What did you do today?," not I'm opening the door. Similarly, she is likely to say "Eat your peas" when her child is, say, looking at the dog, and certainly not when the child is already eating peas."

    I found this part fascinating. I don't have any bone to pick with it, but rather I had never considered something like this before. I'm trying to liken it to if I decided to learn Japanese at the age of 21. If I were to go live in a Japanese household and tried to pick up the language, it's interesting that it would probably take me a LOT longer (assuming I'm not carrying around a dictionary with me wherever I go and defining each sentence I hear word-by-word) to pick up Japanese than it would for a child who was raised in a Japanese household. If a Japanese person in the Japanese household were to open the door and say "I had such a long day" or "what do you feel like doing for dinner?" I would definitely have to somehow learn what each of those words meant, as her action while saying them would be opening the door - a SEEMINGLY unrelated action. However, I guess it's also interesting that I would ASSUME (because I already speak English so I am familiar with the nuance of language and behaviour) that this person probably isn't narrating his/her every move. Context vs UG here is an interesting dichotomy.


    ReplyDelete
  28. A bit of a gripe:

    I have to say that, for all the grief we (rightfully) give these papers for not addressing the real issue at hand (UG), I still found the goals of this article to be valuable, and will admit that it clarified a lot of material for me with respect to language as a whole.
    And while the evolution of UG (and how it could be possible) is definitely a fascinating and also infuriating question, I have to wonder if, at this point, it is a worthy question to pursue. If our ultimate goal in cognitive science is the reverse-engineering of our capacity to do everything that we can do - and if we've been able to elucidate the rules of UG, of ordinary grammar, and the physiological benefits of the child's mind that enable it to learn grammar with such incredible speed and consistency - then do we need anything else to accomplish that goal? I can't imagine that we would. Perhaps, much like our approach to the hard problem of feeling, we should simply accept this question as 'probably insoluble' and move on.

    ReplyDelete
  29. I must admit – the entire time I read this, I just imagined my cute li’l nephew learning to talk and it made it much more enjoyable. Babies are the best!

    One interesting part of the article for me was the fact that children make grammatical mistakes only around 8% of the time that they have the opportunity to make them (e.g. “mouses”). I’d never noticed it before, but it’s really remarkable. I also liked reading about prosody because it is just so amazing to me how children across the board enjoy certain lilting voices more than others.

    Although this isn't language acquisition per se, I am curious about things that we learn as adults – rules or memes or what have you. For instance, ever since getting an Instagram account, I have started to say, in response to something that I agree with / that resonates with me, "same". It's literally just looking at, for instance, a girl eating chocolate and saying "same", and every(young)body knows what I mean. I guess this is a meme, not related to language in particular? I'm not sure. I just know that if I say this in front of my mum she has no idea what I'm talking about.

    I don’t know the history of language, but I am also curious why languages vary so much in terms of phonetics. It may just be a matter of time, but for instance, why is it that in the five languages I know of, the word “water” is different in each? Not by a syllable or whatever, but completely different words (paani, aab, eau, water, maa’). If languages are developed not through genetics, was there originally just one language that changed a lot when peoples drifted away from others? If so, why did they change so differentially? I guess, ultimately, I am saying that environments don’t differ so much that the development of language should differ so much, and genetics don’t play a role (since for e.g. Japanese kids and Pakistani kids can learn a language based on their surroundings). So even over time, they should change in the same ways? I don’t know.

    One thing I noticed was that when I meet young children in Pakistan (family members and strangers), their vocabulary and speech seems much more developed than children I meet here in Canada of a similar age. I think this might have something to do with the fact that the children are often in a joint family system and are surrounded by people and adults at all times. Although by 12 or so I think the speech differences level out, at around 4/5 yr old, this difference is quite apparent. Moreover, of the children I know who live with just 2 parents in Pakistan, they are like the children I meet in Canada in terms of language ability.

    The most important part of the article is of course UG being innate. I don't know enough about linguistics to conclude anything (as should be evident), but it reminds me of Augustine's argument for the primordiality of the intellect. I believe Augustine even gave an example with "walking". If I say "walking" and then walk 2 metres, how do you know I mean "walking" and not "walking 2 metres", "walking slowly 2 m", "walking slowly", etc.? How does one extract the fundamental aspect of it? Augustine concluded that the intellect is primordial, before any empirical sort of understanding, learning, or insight.

    Lastly, this article reminded me of Helen Keller’s story. Her Teacher (Anne Sullivan) taught her her first words when she was 7 yrs old – quite old and apparently past the age of learning. Keller eventually somehow deduced that the symbols made on her hand by Sullivan were related to particulars (i.e. water, mug, etc). Once she figured this out, she went on a rampage learning words. There was a breakthrough moment for her, however, when she realised this. Do children have that moment? I don’t think so. It’s all got to be unconscious for them.

    ReplyDelete
  30. "Children do not, however, need to hear a full-fledged language; as long as they are in a community with other children, and have some source for individual words, they will invent one on their own, often in a single generation."

    I think this is a fascinating concept. If I am understanding correctly, children who receive very little in the way of examples of language can invent their own language for the purposes of communicating. This seems to suggest a propensity for language acquisition that is driven by an innate UG. I might sound crazy, but it seems that the input children receive is not just corrective feedback, but is purposefully fitted to the innate structure of UG (regardless of how poorly formed the input language is), so that at least some form of language is acquired, however broken.

    "Innate knowledge of grammar itself is not sufficient. It does no good for the child to have written down in his brain "There exist nouns"; children need some way of finding them in parents' speech, so that they can determine, among other things, whether the nouns come before the verb, as in English, or after, as in Irish. Once the child finds nouns and verbs, any innate knowledge would immediately be helpful, because the child could then deduce all kinds of implications about how they can be used. But finding them is the crucial first step, and it is not an easy one."

    This paragraph would then seem to imply that a child's representation of UG is an innate structure specialized in some way to recognize important aspects of language (such as noun and verb phrases) and how they appropriately fit together.

    Given these two things, it seems that the representation of UG in the brain is to some extent responsible for the way we acquire language in the first place. It's implied by this article that UG 'picks out' any statistical regularities in sentence structure (whether in the first example with very little input, or the second with full-fledged language) and so is able to recognize noun phrases as noun phrases, and verb phrases as verb phrases, allowing for the acquisition of categories, and subsequent structuring of propositions.

    Then again, this could all be wild speculation on my part...
    Maybe I'm just looking for a fanciful explanation to tie all these aspects of language (the innateness of UG and language acquisition) together.

    ReplyDelete
  31. Pullum and Scholz (2002) tried to assess the poverty of the stimulus argument, and they did not find any evidence of grammatical elements for which a child would not receive enough input. The poverty of the stimulus argument is one of the cornerstones of any defense of UG as inborn. However, Pinker addresses another characteristic of language that, to me, appears to support UG being inborn and necessary for language acquisition.
    “All languages in some sense have subjects, but there is a parameter corresponding to whether a language allows the speaker to omit the subject in a tensed sentence with an inflected verb. This "null subject" parameter (sometimes called "PRO-drop") is set to "off" in English and "on" in Spanish and Italian (Chomsky, 1981).”

    Considering the issue of parameter setting, the only way it could succeed is if a parameter, for example "null subject", starts from a hypothesized setting that can be changed if it turns out to be wrong.

    “But how would the parameter settings be ordered? One very general rationale comes from the fact that children have no systematic access to negative evidence. Thus for every case in which parameter setting A generates a subset of the sentences generated by setting B (as in diagrams (c) and (d) of Figure 1), the child must first hypothesize A, then abandon it for B only if a sentence generated by B but not by A was encountered in the input (Pinker, 1984; Berwick, 1985; Osherson, et al, 1985). The child would then have no need for negative evidence; he or she would never guess too large a language.”

    So when setting a parameter, one option is the default that the child uses as a first hypothesis. For this default to be the same for everyone, it has to be innate; and so, for language acquisition to succeed, we must have an innate set of rules that guide language: UG.
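    A minimal sketch of the subset logic in the passage quoted above (my own toy example; the 'languages' are just sets of schematic sentence patterns): the learner starts from the setting that generates the smaller language and retreats to the larger one only when the input contains a sentence the current setting cannot generate, so negative evidence is never required.

```python
# Toy sketch of the subset ordering described in the quoted passage.
# The parameter and the sentence patterns are invented for illustration.

LANGUAGE = {
    "no_null_subject": {"S V O"},            # setting A: subject required
    "null_subject":    {"S V O", "V O"},     # setting B: superset of A
}

def set_null_subject_parameter(input_patterns):
    setting = "no_null_subject"              # start with the subset language
    for pattern in input_patterns:
        if pattern not in LANGUAGE[setting]:
            setting = "null_subject"         # positive evidence forces the switch
    return setting

print(set_null_subject_parameter(["S V O", "S V O"]))   # English-like: stays at A
print(set_null_subject_parameter(["S V O", "V O"]))     # Spanish-like: moves to B
# Because the first hypothesis is the subset, the child never guesses
# too large a language and so never needs negative evidence.
```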

    ReplyDelete
  32. I took a class on language acquisition which was based on many of the assertions made in this paper. It is strange to be taught something like UG as fact, and to accept it as such, and then to go back and look at it critically.
    That being said, I've always found it to be a very strange sort of....thing? Entity? Module is the word they use. The problem is just that it's so abstract that it's hard for me to wrap my head around. It's almost easier just to accept it and not have to go through the complexities of breaking down what it actually means. I suppose it's best to think about it like machinery, as purely computational, as they do in the paper. One of the assertions made by Chomsky is that language (i.e. UG) is a separate module from general intelligence. The main evidence for this is that many people, after an injury or a stroke, may suffer impaired intelligence while their language remains intact (and vice versa), which is attributed to language-specific areas of the brain. Intuitively, I don't feel as if language is a separate module from intelligence, especially because categorization appears very similar to the UG machinery.

    ReplyDelete
  33. “Is language simply grafted on top of cognition as a way of sticking communicable labels onto thoughts (Fodor, 1975; Piaget, 1926)? Or does learning a language somehow mean learning to think in that language?”

    The topic of language and thought has always interested me. I find it very interesting how language helps us express thoughts: how thoughts can be broken down into language, and how language can create thoughts. Can we say that language is a discrete way of expressing thoughts, which themselves could be viewed as analog?



    Additionally, the parts of the article I liked the most were the evidence and innate parts of language. Here’s a summary:

    Negative evidence is received every time children are corrected when they speak ungrammatically. It would seem that language acquisition should be more difficult without these corrections, but many studies have shown that children come to understand the target language's grammar with or without negative evidence (Pinker, p. 154). Language-learning programs, however, do need built-in linguistic constraints. Children can also learn that a hypothesized grammar is incorrect from positive evidence alone, by hearing sentences of the target language that are not in the hypothesized one.

    Across all languages, there exists UG. The first piece of evidence is that derivational suffixes appear inside inflectional ones, and not vice versa, a rule people can easily follow without ever being taught it explicitly. The second piece of evidence is that an irregular plural sounds natural inside a compound word, but a regular one does not. For example, children who think "mouses" is the correct plural will still not say "mouses-eater", even though they were never taught not to. Universal grammar specifies the allowable mental representations and operations that all languages are confined to use. Because of the subtlety and abstractness of the principles of universal grammar, Chomsky has claimed that the overall structure of language must be innate.
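    As a toy illustration of the compound observation (my own sketch, loosely following the 'mice-eater' example as summarized above, with invented function names): compounding can take a bare stem or a stored irregular plural, but never a regular '-s' plural, because the regular rule applies only after compounding.

```python
# Toy sketch of the compound observation summarized above: compounding
# accepts a bare stem or a memorized irregular plural, but a regular
# '-s' plural never makes it inside the compound.

IRREGULAR_PLURALS = {"mouse": "mice", "goose": "geese"}

def compound(noun, head, pluralize_inside=False):
    if pluralize_inside and noun in IRREGULAR_PLURALS:
        return f"{IRREGULAR_PLURALS[noun]}-{head}"   # 'mice-eater' sounds fine
    return f"{noun}-{head}"                          # bare stem otherwise

print(compound("mouse", "eater", pluralize_inside=True))   # mice-eater
print(compound("rat", "eater", pluralize_inside=True))     # rat-eater, never 'rats-eater'
print(compound("mouse", "eater"))                          # mouse-eater
```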

    ReplyDelete