Saturday 2 January 2016

8b. Blondin-Massé et al. (2013) Symbol Grounding and the Origin of Language: From Show to Tell

Blondin-Massé, Alexandre; Harnad, Stevan; Picard, Olivier; and St-Louis, Bernard (2013) Symbol Grounding and the Origin of Language: From Show to Tell. In Lefebvre, Claire; Cohen, Henri; and Comrie, Bernard (eds.) New Perspectives on the Origins of Language. John Benjamins.


Organisms’ adaptive success depends on being able to do the right thing with the right kind of thing. This is categorization. Most species can learn categories by direct experience (induction). Only human beings can acquire categories by word of mouth (instruction). Artificial-life simulations show the evolutionary advantage of instruction over induction, human electrophysiology experiments show that the two ways of acquiring categories still share some common features, and graph-theoretic analyses show that dictionaries consist of a core of more concrete words that are learned earlier, from direct experience, and the meanings of the rest of the dictionary can be learned from definition alone, by combining the core words into subject/predicate propositions with truth values. Language began when purposive miming became conventionalized into arbitrary sequences of shared category names describing and defining new categories via propositions.

55 comments:


  1. “It has to be understood that our test pertains to natural language, such as English, French, American Sign Language, or Proto-Indo-European. It is not about artificial formal languages such as math, logic, computer programming languages… What is our test of whether something is a natural language?”
    Katz’s version: A natural language is a symbol system in which one can express any and every proposition.
    Harnad’s version: “translatability” thesis: Anything you can say in any natural language can also be said in any other natural language – though it is important to add, not necessarily said in the same number of words.

    Katz’s definition of a natural language does not sit well with me. He treats 2 + 2 = 4 as a proposition, just like “the cat is on the mat” is a proposition: both have predicates and a truth value. However, it was earlier stated that the test is not about mathematics, which is an artificial language. So how can he use 2 + 2 = 4 as an example if this is mathematics?
    Second, Harnad’s (this paper’s) definition does not sit well with me either. If we are stuck defining what a natural language IS, how can we define it using other natural languages (for which we also have no definition)? On top of that, suppose we create a symbol system with instructions for how to translate (rather like the instructions in Searle’s Chinese Room Argument); then here is a perfect example of something said in a natural language being said in another symbolic language. Meanwhile, the definition says anything said in any natural language can also be said in any other natural language (and says nothing about being said in any other symbolic language). Seems a little tricky and contradictory to me.

    Replies
    1. Neither Katz's "effability" nor intertranslatability is a definition of language. They are criteria (rather like the TT) for whether or not something is a (natural) language.

      Formal languages like arithmetic, logic or C++ are not natural languages; they are parts of natural language. "2 + 2 = 4" is English (and French, etc.).

      (What follows from this, for those quick at putting things together, is that the Strong C/T Thesis, because it applies to computability, also applies to natural language. What is computable is effable and what is ineffable is uncomputable. It's less clear whether everything that is effable is computable. Natural language is, after all, at least as strong as computation and, being grounded, may also be stronger.)

      No problem with defining language using language (as long as the language is grounded...)

      The T2-passing algorithm is not a translation (of anything into anything -- and certainly not of Chinese into English or vice versa). (If you thought so, then you misunderstood Searle's Chinese Room Argument!)

      (Jordana, please reply to this so I can see whether you have understood all these points and how they reply to your commentary, or I've not been Kid-Sib enough.)

  2. As much as I found the Pinker article painful to read (and endless), I found 8b really interesting! Not only does it give us summaries of previously covered class material, it integrates them into the problem of language and evolution. The questions raised were clearly presented and actually invited the reader to actively think about them: “we have now reduced the question of what is the origin of language to the question of what is the origin of a symbol system in which you can say anything that can be said in any natural language. When did it happen? Where did it happen? Why did it happen? How did it happen?” On the other hand, Pinker et al. profoundly lost me. I don’t enjoy linguistics, and unfortunately this article confirmed that belief.

    Also, I already had a theory of my own concerning the adaptation of humans to a more social environment. For me, reading the portion on motivation made a lot of sense within my existing mental schema. Yes, we are intelligent, but so are other primates and animals. What makes us unique is our endless interest in inferring things and sharing that knowledge. Language might have emerged in a clement time and environment where teaching was made possible. That passing on of knowledge must have been beneficial, and thus selected for by natural selection. Had there not been profound motivation and interest, maybe language and our other capacities would never have emerged. For example, a chimp seems to understand associations and contingencies; but does it get them as propositions with truth values? Most of the time, we consider statements to be either true or false. That is really intrinsic to language and to thought (and I guess to the language of thought, then?). That is one of the questions raised in this article that I find so innovative.

    Replies
    1. (1) Neither Pinker nor Bloom is a linguist! Both Montrealers, Pinker did Honours Psych at McGill and then a PhD with Steve Kosslyn at Harvard. Paul Bloom is a developmental cognitive psychologist (student of Sue Carey at MIT).

      But you're right that their paper is rather long and uninspiring (though not for everyone). They confirm the obvious -- that all aspects of language evolved and that many aspects are learned (so for that we only needed to evolve the capacity to learn it). But the paper begs one huge nontrivial question about the evolution of language: How did the rules of UG evolve, since they cannot be learned by the child from the evidence it has available (poverty of the stimulus)? -- We will discuss that next week too.

      (2) The other non-trivial question about language is how and why it evolved. Motivation may explain how and why -- given the adaptive advantage of language -- children learn it so quickly (and obsessively). But inborn motivation to communicate linguistically (propositionally) cannot have preceded language itself! It can only have been a consequence of the adaptive advantage of language.

      In other words, the proposition -- and its benefits -- had to come before the Darwinian selection (actually Baldwinian evolution, since it is selection for learning it more and more readily) that facilitated and accelerated the capacity to learn language.

      That still leaves the question of what is the adaptive advantage of being able (and inclined) to produce and understand propositions (if that's what language is).

      Yes, it's true that every proposition P -- whether it's "the cat is on the mat," "a zebra is a striped horse," "2 + 2 is 4" or this very sentence itself -- is implicitly making the statement "It is true that P."

      (We lie or make mistakes too, but the default assumption behind the power of language is that what we are saying is true. Lying arose because there was language (and saying that it's true that NOT-P)-- and there may even be congenital liars, e.g., psychopaths -- but what people say must mostly be more likely true than false, otherwise language would be anti-adaptive. But beware of the Web, where anyone can say anything, anonymously, with no consequences for themselves. There we have to consciously over-ride our evolved inclination to make the default assumption that what we're being told is true.)

      The default truth assumption embedded in language motivation may also be behind the power of hypnosis...

  3. So far it seems like there are two major perspectives on how language evolved in humans.

    1. Pinker and Bloom: a series of calls/gestures turned into combinations of complex language. Over time, these combinations became adaptational, advantageous for survival.
    2. Chomsky and Gould: language evolved as a byproduct of evolution, not a direct adaptation (what they call a "spandrel"). So perhaps language evolved as a byproduct of the increasing complexity of the brain.

    I am more in the Chomsky camp. I believe that language is far too complex to have been its own independent event; it must have been supported, or grounded, in some broader development (brain development).

    The Harnad et al. position is that over time, pantomimed gestures became pantomimed propositions which eventually became vocal propositions. I can see how this fits into the Chomsky/Gould camp considering that the development of pantomimed gestures gave way to pantomimed propositions and eventually to vocal ones, and all these sequential developments rely on the underlying brain developments occurring at the same time.

    An interesting brain area I studied in one of my classes is the "new motor cortex" (caudal to the central sulcus/caudal part of area 4), which is only present in some higher apes and humans, and allows for highly skilled, individuated movements (such as fine finger movements when gripping a glass, as opposed to the sweeping hand movements seen in lower primates). Perhaps the evolution of this area would have been important for the use of tools, gestural communication, and eventually pantomimed propositions and vocal propositions.

    Replies
    1. Hi Maya. I agree that language must have developed as a result of increasing complexity in brain development, but I’m not sure I’m sold on the idea that it was a byproduct of something else. In other words, I think this increased brain complexity occurred specifically for language and not for some other function/reason that happened to allow for language. For me, it seems as though language is too complex (and too specific) to have evolved as a byproduct. Especially the brain regions for language, which are so discrete, point, for me, to the idea that they developed for language and not as a result of general brain development. I think the development of language areas in the brain and the use of increasingly complex communication skills went hand in hand - that brain areas developed specialization for language over time as communication skills increased and were beneficial. Since humans are necessarily social, it’s only natural that communication attempts would occur over time, probably beginning with gestures, then calls. Perhaps over time, with repeated use of gestures and calls, the brain areas responsible grew more complex and allowed for more complexity in the communication, since this was so useful. I could see this eventually developing into language. Not sure exactly what “camp” this puts me in, but this is the most natural way I see it - perhaps a combination of what you’ve summarized above from the articles.

    2. The spandrel notion is an (unsuccessful) attempt to explain the evolution of Universal Grammar, not the rest of language. (The guess that language had something to do with tool use is not much better! -- What is the connection?)

      Yes, language is a form of communication (and all social animals communicate). But what sort of "complexification" turns communication into language?

  4. From my understanding, this article is proposing that language began by acquiring categories through direct experience and teaching these categories to others. The authors called this “proposition learning”: understanding new categories by observing, and then intentionally teaching them to others. Chimps cannot do this. They have the observational capacity and the ability to categorize, but they lack the motivation to pass the categories on. Learning categories through gestures and pantomimes, by instruction, was faster and more adaptive than learning them through induction. So maybe the “propositional attitude”, the motivation to pass on categories through instruction, is what gave humans the capacity to have a language.

    The authors also mentioned Kernel words, Core words, and Minimal Grounding Set (MGS). This is where they lost me. Kernel words are words that can be used to define all other words, but cannot be defined by other kernel words without using a core word. The article did not go into much detail about what core words are, but I understood that core words are words like “not”, which allow you to define kernel words. What other words are core words? Would “and” and conjunctions like “but” be core words too? Also, I am having trouble understanding the difference between Kernel words and a Minimal Grounding Set. The article defines kernel words as words that define all others, but they are not the smallest number of words that can define all others. Does this mean that creating a MGS involves narrowing down Kernel words into those that are closest to the core? How then would you be able to define the other kernel words if the definitions are circular?

    Replies
    1. Hi Amanda, I was also confused by the relationship between the kernel, core, and MGS. You're totally right about the kernel words being able to define all other words, and core words are needed to define the kernel words. They are the “strongly connected components” (page 13) that make the circularity of definition possible. What confuses me the most is where function words like “and” or “but” fit into all this. Do we even consider these words when discussing the core and the MGS, or do we just take them for granted so we can focus on the content words that correspond to discrete category names? The authors also make it seem like “the all-important power of predication” (page 14) works the same way. So do words like “is” and “are” have to be explicitly included in the core or MGS? Are function or predication words in the core/MGS at all? Or are these groups reserved for distinct content words alone?

      As far as the difference between the kernel and the MGS goes, I think it's important to note that the kernel is unique to the specific dictionary, while the MGS is shared amongst all dictionaries. The way I see it, this implies that the kernel of a dictionary is determined by the authors as a direct result of their style and rhetorical choices. Meanwhile, the MGS is determined by the fundamentals of the language shared between all the dictionaries. It is the smallest possible set of words that can define all others based on the way the language itself works. So the kernel is an internal constraint on a dictionary, while the MGS is an external constraint.

      Where does this leave us with the relationship between the kernel, core, and MGS then? I think you're on the right track with “creating a MGS involves narrowing down kernel words into those that are closest to the core.” It all depends on what “closest” to the core really means. But to me, it seems like there is a hierarchical relationship between the three, such that the core is the smallest set, the MGS is the core plus some necessary kernel words, and the kernel is the MGS plus whatever other words the authors favored when writing the dictionary. So I would say you definitely have the right idea, but I could be way off here too.

    2. Gestures and pantomime are not language. And you can't learn a category such as which mushrooms are edible or inedible through gestures or pantomime. The gestures can warn you if a mushroom is or isn't edible, but they can't tell you how to tell apart the members from the non-members. For that you need grounded category-names that can be combined into propositions describing the features that distinguish the members from the non-members of a category, thereby defining the word that names the category.

      The words in the dictionary analysis are all content words -- nouns, verbs, adjectives, adverbs -- not function words like "not" or "and" or "is." Content words are the names of categories, and almost all words are content words. The function words are for logic and grammar; they are purely formal. They have no referent, hence need not be grounded.

      Kernel words are the 10% of content words that are left when you remove any word that does not define any further word and can itself be reached by definition out of the words that are left.

      A dictionary has only one Kernel; it is unique. But the Kernel is not the Minimal Grounding Set (MinSet or MGS) -- the smallest number of words out of which all the rest of the words in the dictionary (the 90% outside the Kernel, plus all of the Kernel other than the MGS itself) can be defined.

      The MGS is not unique: There are very many MGSs in the Kernel -- many different sets of words out of which all the rest can be defined. The differences between the MGSs are sometimes very small, just a word or two difference, based on a choice between two near-synonyms such as "big" in one and "large" in the other.

      The Kernel contains another structure, the Core, which is defined as the biggest "completely interconnected" subset of words in the Kernel: one in which there is a definitional path from any word to any other word within that same subset. The path can be one or many steps. The Core makes up about 3/4 of the Kernel. The remaining quarter of the Kernel is the "Satellites" -- very small completely connected subsets like "big = not small, small = not big."

      Every MGS is a little smaller than the Core, about 2/3 of the Kernel. And each MGS is part-Core and part-Satellite.

      The words in the 10% Kernel are learned at a younger age than the 90% of words in the rest of the dictionary, and are also more frequently used. The words in the Core are learned at a still younger age, and are even more frequently used. The Satellite words seem to be more abstract -- but what is meant here by "abstract" is a bit fuzzy. (Pick 20 words at random from the dictionary and then rank them on a continuum from concrete to abstract: what is going on there?)

      The picture of it all is here.
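
      To make the Kernel/Core/MinSet machinery concrete, here is a minimal sketch in Python. It is a toy, not our actual analysis code: the dictionary entries are invented, and the Kernel procedure is simplified to "recursively drop words used in no remaining definition."

      from itertools import combinations

      # Toy dictionary of content words only: word -> words used in its definition.
      DEFS = {
          "big":    {"large"},
          "large":  {"big"},
          "small":  {"big"},
          "animal": {"living", "big"},
          "living": {"thing"},
          "thing":  {"animal"},
          "cat":    {"animal", "small"},
          "pet":    {"cat", "animal"},
      }
      WORDS = set(DEFS)

      def closure(grounded):
          """All words definable from `grounded`: keep adding any word whose
          entire definition is already available."""
          known = set(grounded)
          changed = True
          while changed:
              changed = False
              for word, definition in DEFS.items():
                  if word not in known and definition <= known:
                      known.add(word)
                      changed = True
          return known

      def kernel():
          """Simplified Kernel: recursively drop words used in no remaining definition."""
          remaining = set(WORDS)
          while True:
              used = set().union(*(DEFS[w] for w in remaining))
              removable = remaining - used
              if not removable:
                  return remaining
              remaining -= removable

      def reachable(word):
          """Words reachable from `word` by following definitional links."""
          seen, stack = set(), [word]
          while stack:
              for nxt in DEFS[stack.pop()]:
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append(nxt)
          return seen

      def core(kernel_words):
          """Core = largest strongly connected subset: each word reaches every
          other word by some definitional path."""
          reach = {w: reachable(w) for w in kernel_words}
          best, left = set(), set(kernel_words)
          while left:
              w = left.pop()
              scc = {w} | {v for v in left if w in reach[v] and v in reach[w]}
              left -= scc
              best = max(best, scc, key=len)
          return best

      def minimal_grounding_sets():
          """All smallest sets from which the whole dictionary is definable
          (brute force: the real problem is NP-hard, so toys only)."""
          for size in range(1, len(WORDS) + 1):
              hits = [set(c) for c in combinations(sorted(WORDS), size)
                      if closure(c) == WORDS]
              if hits:
                  return hits

      K = kernel()
      print("Kernel: ", sorted(K))                 # the 5 mutually defining words
      print("Core:   ", sorted(core(K)))           # ['animal', 'living', 'thing']
      print("MinSets:", minimal_grounding_sets())  # six sets, e.g. {'big', 'animal'}

      On this toy, every MinSet pairs one word from the "big"/"large" Satellite with one Core word -- the part-Core, part-Satellite pattern just described.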

    3. Hi Professor Harnad,

      If 'not' is just a function word, why is it in the core of the diagram of your example toy language? 'Not' is defined as 'no,' and vice versa. Is 'no' a content word?

    4. Timothy, the sample dictionary was not a real dictionary. It was just a toy example to illustrate what a dictionary graph looks like and how you can reduce it to the Kernel, the Satellites, the Core and the MinSets. The definitions are not real, and hence neither are the words. But you're right that function words should have been excluded from the example! We've made better ones since -- in fact, real toy dictionaries from the dictionary game (try it -- but if you start a dictionary, finish it too!): http://lexis.uqam.ca:8080/dictGame/

  5. This article discusses symbols, symbol systems, formal symbol systems/formal languages, and natural language. We know symbols are objects with arbitrary shapes. A symbol system is a set of symbols paired with a set of rules for manipulating the symbols. A formal symbol system is everything that a symbol system is, with the added condition that the rules are based only on the shape, not the meaning, of the symbols. A formal language is a formal symbol system that can also be interpreted as meaning something, and therefore it has semantics. A natural language is defined as one that can express any and every proposition. The authors discuss that anything said in one natural language can be said in another.
    Is there truly a difference between formal symbol system and formal language?
    What is the definitional difference between formal languages and natural language?
    Is it true that natural languages constitute a subset of formal languages?
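
    As a toy illustration of "rules based only on shape" (the rewrite rule below is invented, not from the paper):

    # A toy formal symbol system: one rewrite rule keyed purely to symbol shapes.
    # Nothing here appeals to what "2", "+" or "4" mean.
    RULES = {("2", "+", "2"): ("4",)}

    def rewrite(tokens):
        """Apply a shape-matching rule if one fits; otherwise leave tokens alone."""
        return RULES.get(tuple(tokens), tuple(tokens))

    print(rewrite(["2", "+", "2"]))  # ('4',) -- correct, yet meaning played no role

    A natural-language user, by contrast, also has the grounded meanings of the symbols to go on.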

    Replies
    1. Translatability is a property of language, not a definition of it.

      The difference is not between a formal symbol system and a formal language (they're the same thing) but between a formal symbol system and a natural language.

      The difference is that in a natural language the (grounded) words are manipulated on the basis of both their form and their meaning, whereas in a formal language they are manipulated only on the basis of their form (as in a Turing Machine).

      It is formal languages that are a subset of natural language. (See my earlier replies above.)

    2. I quite enjoyed this reading because many concepts discussed in the previous paper by Pinker and Bloom were clarified and re-emphasized here. More emphasis was placed on why language had evolved in the first place and I found it interesting to differentiate between a formal language and a natural language.

      Is it not possible, though, for a programming language such as COBOL or VB.NET to get near to natural English, and if not, why not?

      Example of COBOL code:
      000100 IDENTIFICATION DIVISION.
      000200 PROGRAM-ID. HELLOWORLD.
      000300
      000400*
      000500 ENVIRONMENT DIVISION.
      000600 CONFIGURATION SECTION.
      000700 SOURCE-COMPUTER. RM-COBOL.
      000800 OBJECT-COMPUTER. RM-COBOL.
      000900
      001000 DATA DIVISION.
      001100 FILE SECTION.
      001200
      100000 PROCEDURE DIVISION.
      100100
      100200 MAIN-LOGIC SECTION.
      100300 BEGIN.
      100400 DISPLAY " " LINE 1 POSITION 1 ERASE EOS.
      100500 DISPLAY "Hello world!" LINE 15 POSITION 10.
      100600 STOP RUN.
      100700 MAIN-LOGIC-EXIT.
      100800 EXIT.


      Yes, programming languages are very specific and take rigid input, so it is difficult to handle the different kinds of interpretation/abstraction found in English, but you can still create complex combinations of predicates that are understandable in the way natural language is.

      If you define an ontology of concepts using natural language (i.e., in English words) in a programming language, will you not be able to create a 'natural' programming language (NPL)? NPLs could really change the way we interact with computers (although at the cost of increasing the burden on the machine in terms of accuracy, design, etc.) and are probably more efficient, since they have less of the ambiguity and complexity that you find in English. This actually leads me to wonder whether we are being less efficient in using our own natural languages. Just take a look at all the meaningless verbiage we come across when reading different research articles... are we really improving in our ability to master language, or have we hit a plateau? Is figuring out countless ways to express abstract ideas effective and efficient, or is the kid-sib way of communicating more effective and efficient?

    3. The formal languages of maths, logic and computer programming are all subsets of natural language. They are all English (and French, etc.). But being formal, they don't have a symbol grounding problem. They are pure symbol systems, manipulated on the basis of their shapes, not their meanings. If we interpret them, we move out of the formal language into the natural language of which they are a subset. ("2 + 2 = 4" is English. But in formal maths, the meanings of 2, 4, + and = have nothing to do with the rules you are using to manipulate them to get correct results, nor with what they can be interpreted as meaning in English.)

  6. “What seems missing [within chimps] is not the intelligence but the motivation, indeed the compulsion, to name and describe… Because we were more social, more cooperative and collaborative, more kin-dependent… The tendency to acquire and convey categories by instruction thus grew stronger and stronger in the genomes and brains of the offspring of those who were more motivated and disposed to learn to do so. And that became our species’ “language-biased” brain” (11).

    The article suggests that the primary factor separating humans, who possess language capacity, from chimps, who do not, is that the latter lack the motivation for it. I am not fully convinced that the main reason we evolved the ability to communicate through language is that over 250,000 years ago we as a species were more social/cooperative and perceived a greater benefit from learning by instruction. I feel that the causality could go the other way, i.e., that the capacity for language has made us more social. There is no doubt that expression through language rather than pantomimes/gestures allows us as humans to be more cooperative and collaborative. In terms of the how/why of language evolution, I feel the authors never fully answered how language evolved over time in terms of reverse-engineering the specific components or mechanisms in the brain required for language. The answers to these questions could help distinguish between those species that possess language and those that do not. The article did a good job of once again clarifying category learning; however, is learning categories all there is to language? In other words, how did we form phonology, syntax, pragmatics and meaning? Or are these already accounted for?

    Replies
    1. Yes, no convincing (let alone confirmed) explanation of how and why language evolved exists yet.

      But, yes, acquiring categories ( = learning to do the right thing with the right kind of thing) is a very general and powerful cognitive capacity, covering much if not all of cognition. (There are still continuous sensorimotor skills too.)

      For example, every sentence (that is not a command or a request) is a subject/predicate proposition. The famous syllogism: "All humans (H) are mortal (M); Socrates (S) is human (H); therefore Socrates (S) is mortal (M)" is really three statements about category membership: the subject category is a member (or a subset) of the predicate category: H is a member of M, S is a member of H, so S is a member of M.
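
      In code, with toy sets standing in for the categories (the memberships are invented):

      # H = humans, M = mortals: category membership as set membership.
      humans  = {"Socrates", "Plato"}
      mortals = humans | {"Fido"}      # "All humans are mortal": H is a subset of M

      assert humans <= mortals         # premise 1: H is contained in M
      assert "Socrates" in humans      # premise 2: S is a member of H
      assert "Socrates" in mortals     # conclusion: S is a member of M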

      Phonology is partly innate, partly learned; conventional grammar is learned, UG seems to be innate; meaning is grounding + feeling; pragmatics, I assume, is learned... (It has a lot to do with the context of confusable alternatives when you have to do the right thing with the right kind of thing, hence, uncertainty-reduction.)

  7. While this paper was definitely more interesting to read, and I agree with most of it, it has left me with quite a few questions.

    I am still wondering whether this paper has answered the question about the evolution of language. While I think it definitely touches upon key components, I think it is still incomplete…
    Now I wouldn’t be surprised if I am completely off, but I was not entirely convinced by the mushroom example when discussing the symbol grounding problem. More specifically, I don’t understand how this example shows how we link the sense to the referent. What I understood from this example is that it explains that we learn categories through “induction”, but I don’t feel it explains how we link our learning of categories to the learning of words as symbols.

    However, now that I think about it, I think I am trying to explain something else. When I think of explaining the evolution of language, I think of how we came to create this arbitrary set of symbols and their rules, and how we came to have composition, productivity and systematicity. How were these symbols chosen? How did we come to have semantics and syntax? I think maybe we need to look at those to get a sense of the evolution of language. Understanding the evolution of language is difficult – it doesn’t fossilize. We can’t find any remnants of language; it dies with the species. I know there are some projects looking at the remnants of articulatory-perceptual mechanisms, looking at bone anatomy and at whether the vocal tract could account for the types of sounds that we make – but all we get from that is that they were capable of certain sounds, which still doesn’t tell us much about what they were doing with those sounds. But if natural language is a symbol system in which one can express any and every proposition – maybe looking at where semantics and syntax came into play will give us crucial insight?
    I confess, I don’t know where I am going with this anymore.
    Hectic message and weird questions, sorry!

    Replies
    1. The idea of the mushroom (toy) world was to show that if you ground some categories directly ("edible," "markable") through direct trial and error experience then you can get further new categories for free by combining the grounded categories in a proposition: "returnable" = "edible" and "markable." That's like a definition of a new category using the grounded categories of a dictionary's minimal grounding set.

      (But this tiny toy simulation does not distinguish the trivial case of name-compounding that you (I think) asked about in class -- "on-mat cat" (like "edible-markable-returnable") -- from a subject/predicate proposition with a truth value: "cat" [is] "on mat" ("returnable" = "edible" and "markable"). That's a much harder problem.)
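
      Here is a minimal sketch of that compositional step (a toy, not the actual simulation; the feature tests are invented stand-ins for directly grounded detectors):

      def edible(mushroom):
          return mushroom["spots"] < 3       # grounded by trial and error (stubbed)

      def markable(mushroom):
          return mushroom["cap"] == "flat"   # likewise grounded directly

      # "returnable = edible AND markable": a new category acquired for free,
      # by combining already-grounded category names in a proposition.
      def returnable(mushroom):
          return edible(mushroom) and markable(mushroom)

      print(returnable({"spots": 1, "cap": "flat"}))  # True -- no new trial and error needed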

      Where does a language's vocabulary come from? Same place our categories come from. We keep making it up and adding to it generation after generation, based on what's the right thing to do with the right kind of thing. Then we just need to agree on what name to give the category.

      Where does grammar come from? Conventional grammar is just that: conventions we agree on so we can talk to one another, rather like agreeing to stop at red lights and go on green lights.

      But UG (Universal Grammar) is another matter (and we'll be talking about that next week).

      The evolutionary questions in all this (apart from the evolution of UG, which is a very hard problem) are about the evolution of the capacity to learn language (vocabulary, phonology, conventional grammar) rather than the evolution of vocabulary, phonology, and conventional grammar. (Don't mix up Darwinian evolution with "cultural evolution," i.e., learning.)

  8. I am not really familiar with the literature on the subject. However, after reading the article, I still have trouble understanding why language appeared from “show”. It seems much more logical to me that language appeared from the observation of the grunts and sounds other animals made. Animals would listen and be able to correlate the sounds another animal produced with the action or event that had just happened. By observing several situations, animals would have been able to establish a correlation between these sounds and those situations, and would start employing the sounds in similar situations. With the model proposed, I also find it difficult to visualize how the animal watching the instructor animal perform an action would know when to keep its attention focused on the instruction. The instructor animal, like all other living beings, presumably performed many actions, not all of them of instructive value to the learning animal. Furthermore, dogs in training, for example, tend to react to and learn the command their owner says rather than what they see their owner do (this is also true for animals that do not have a lengthy history of living with humans; chinchillas, for example, show similar patterns).

  9. - I’m confused as to how explaining the origin of language as conventionalized communication explains how UG is inborn, since the kernel is learned. And wouldn’t the requirement for induction differ from language to language, rather than generalizing across natural languages? If a language depends more on context rules than on symbol-manipulation rules, or if the system is limited in symbols, wouldn’t the induction needed before instruction differ in these cases?

    - As to why language began, I’m having a little trouble understanding the explanation given. So at first, we found out that we can acquire categories, kinds of things, verbally/through instruction, without being predisposed to it, and people became motivated to share their knowledge, which led to evolution creating a bias in which we are disposed to learn categories. This language-biased brain is the reason why we have language??

  10. "Because we were more social, more cooperative and collaborative, more kin-dependent—and not necessarily because we were that much smarter—some of us discovered the power of acquiring categories by instruction instead of just induction, first passively, by chance, without the help of any genetic predisposition. But then, as those who happened to be more motivated to learn to acquire and share categories in this all-powerful new way began to profit from the considerable advantages it conferred…" (Massé et al. 2013)

    This is a very interesting account of language. Usually, in science, we focus on humans as more intelligent creatures, and treat that intelligence as the cause of many of our higher executive functions and behaviors. Approaching the issue from a social perspective instead, by stating that the cooperative and collaborative nature of humans resulted in a stronger desire to instruct, is far more novel in the field. This desire eventually became a substrate for natural selection to act on because of its adaptiveness, through its ability to make learning more efficient, eventually resulting in a genome modified to acquire and convey categories more efficiently. As a student more interested in social psychology than in other fields of psychology, I find it fascinating to see how social interactions contributed to the evolution of language. I am still struggling with the concept of a cognitive faculty capable of establishing propositions. Because propositions seem to be the crux of the origin of language, I can’t rest easy thinking humans just already had a skill that needed to be amplified, or attained a mutation that gave them the capacity to articulate propositions. However, I understand that we shouldn’t dismiss such hypotheses immediately in Cognitive Science, because it is extremely unrealistic to have all the answers immediately.

  11. I’m a little confused as to the point of this paper. It explores how language came to be among early humans, but in my opinion it doesn’t offer anything interesting in its explanation. We know language is used for categorization, so the idea that it came from categorization isn’t anything particularly novel or interesting, nor do I see how it helps further any knowledge in the field of linguistics. The paper also discusses the evolutionary advantage of language, another point that isn’t particularly interesting, as it’s pretty hard to argue it doesn’t offer such an advantage. Though I did find the simulation from Cangelosi and Harnad a little interesting, I can’t imagine it would hold up too well under scrutiny in terms of the timeline that it offers. So I come to my question of why write this paper? I don’t see it offering any insight to further either linguistics or anthropology. It doesn’t propose further areas of study or potential experiments or lines of thought to look into. It does serve as something of a summary of the topic of the beginnings of language. I would have liked some harder evidence as to why this theory might be true, other than what seems to me to be common sense (particularly to anyone in linguistics, and especially cognitive science). If there are no strong research papers to back the theories presented, then it doesn’t seem unreasonable to leave some sort of direction for the reader as to what could follow these theories or what they could accomplish. I don’t mean to be overly critical, but I wonder about these kinds of things when I read general-knowledge papers such as this. Maybe I look for too much direction in research, but in undergrad we get the idea drilled into our heads that we need a strong purpose for any research that we do. Also, side note: why the introduction about PowerPoint? It seemed unnecessary and irrelevant. I understand the connection to language, but it fell flat for me.

  12. "Chimps have categories. We keep training them to “name” their categories (whether with gestures, symbolic objects, computer keyboards, or words) — even to combine those names into quasi-propositional strings. And the chimps oblige us if they are hungry or they feel like it. But what is striking is that they never really pick up the linguistic ball and run with it. They just don’t seem to be motivated to do so, even if they sometimes seem to “get it,” locally, for individual cases."

    I posted a review article previously in my skywriting a few readings ago and I think that it has some interesting ties to these specific remarks on chimpanzee motivation vs intelligence in naming things.

    (https://ecourses.uprm.edu/pluginfile.php/68040/mod_resource/content/1/berwicketal.pdf).

    Within the review article, they spend some time looking at the pros and cons of animal models for understanding language evolution, with a particular emphasis on the fact that language is a species-specific phenomenon.

    One particular study they point out is one by Laura-Ann Petitto comparing human and chimpanzee language acquisition.

    "a chimpanzee uses the label for ‘apple’ to refer to ‘the action of eating apples, the location where apples are kept, events and locations of objects other than apples that happened to be stored with an apple (the knife used to cut it), and so on and so forth – all simultaneously, and without apparent recognition of the relevant differences or the advantages of being able to distinguish among them’"

    In contrast: "for human infants even the first words ‘are used in a kind-concept constrained way (a way that indicates that the child’s usage adheres to ‘‘natural kind’’ boundaries)’. Even after years of training, a chimpanzee’s usage ‘never displays this sensitivity to differences among natural kinds. Surprisingly, then, chimps do not really have ‘‘names for things’’ at all."

    I think that these findings are interesting for the conversation about how language specifically affects the way in which we categorize, and perhaps even perceive, the world around us.

    However, as mentioned in the Blondin Masse article, it isn't that chimpanzees have no categories or are not intelligent, it seems that the presence or absence of a capacity for language affects the ways in which each species categorizes (chimpanzees must have slightly broader and more associative categories since they do not directly and specifically name 'names').

    When I first read the review article that I posted, I thought that this study was really interesting. However, I find that it mainly seems (in that particular paper) to radicalize and expand the gap between human and chimpanzee modes of categorization due to the presence or absence of the capacity for language. It looks at these findings as a major hindrance to understanding how human language could have evolved. So when I read the Blondin-Massé article, I thought that this notion of motivation was a really interesting way of describing how these disparities between chimpanzees and humans, in our language-dependent and language-independent ways of categorizing, might have evolved: because chimpanzees were perhaps less social than us, they continued to categorize by induction, while we also began to learn to categorize through instruction - thus eventually shaping our capacity to 'propositionalize' (paragraph 1, pg. 11 of Blondin-Massé).

    Replies
    1. I agree with you, Kaitlin, that it isn't that chimps have no categories or are not intelligent. I'm not convinced that it's because they are unmotivated/lazy - they just do not need to use language in the same sense that we do in our everyday lives (e.g., to explain abstract concepts or theorems). Although bonobos like Kanzi from Georgia State University or gorillas like Koko have shown some kind of primitive semantic ability, it does not mean that they are less intelligent than us, but rather that they have a different kind of intelligence.

      It would be super interesting if we could one day disprove the innateness hypothesis by teaching a primate to communicate like us (not by speaking a natural language, of course, since they have different vocal anatomy and whatnot, but through some other audible or visual symbol manipulation).

  13. What I found most compelling about this article was the timeline for how language evolved. I mentioned in my 8a Skywriting that a key aspect of the Pinker article that dissatisfied me was their concept of a language-endowing mutation occurring in a single individual, who suddenly had this completely unique capacity. I definitely find it much more feasible that language could have developed as the result of a change in 'motivation' - building on "cognitive components... [that] were already available before language began," as this would mean that the 'motivated' individual's communications could actually be successfully understood by 'non-motivated' individuals (using 'motivated' in an over-simplified way here for clarity).

    I found it especially attractive because, in addition to resolving my qualms about initial reproductive benefits, it helped soothe an issue I had regarding apes and language ability. While I definitely agree that chimpanzees or Koko the gorilla don't have language, I'm partial (maybe because I've watched about 100 Koko videos and have quite a soft spot for her) to the possibility that Koko's lack of language doesn't come from a lack of intelligence, but instead a lack of "the compulsion to name and describe." I certainly might just be "projecting a propositional gloss", and definitely appreciate the likelihood that she is in fact learning complex categories as opposed to actual propositions (i.e. an on-the-mat-cat, and not a cat being on a mat) - but something about her truly intricate interactions with people and rapid responses to new propositional information (e.g. her immediate, genuine grief at the news of Robin Williams' death) leads me to hope/believe that she may really be understanding propositions - just lacking in the drive to extend this learning to true language.

    Replies
    1. Hi Adrienne,

      I too am a fan of Koko and have watched my fair share of videos of her. For a while, I thought that Koko was just a case of the Clever Hans Effect, which Julia brought up in the 8a skywriting. However, after seeing Koko respond appropriately to sad news (Robin Williams’s death, and the death of Koko’s kitten), create words (Koko was never taught the sign for ‘ring’, but combined the signs for ‘finger’ and ‘bracelet’), and sign about signing itself, I too hope/believe that she might be understanding propositions but just lacks the motive for language.

  14. "Our version of the very same criterion as Katz’s Glossability Thesis was the “translatability” thesis: Anything you can say in any natural language can also be said in any other natural language"

    In an attempt to find a counterexample to this argument, I tried thinking of Russian words I have heard in my life which have no direct translation in English. The word Тоска (tas-'ka) is one that I could not directly translate into English. In Russian, it describes intense melancholy or pain, but its true sense cannot be captured by these English words alone, and thus it is inherently specific to the Russian language/culture. The famous Russian writer Vladimir Nabokov said:

    "No single word in English renders all the shades of 'toska'. At its deepest and most painful, it is a sensation of great spiritual anguish, often without any specific cause. At less morbid levels it is a dull ache of the soul, a longing with nothing to long for, a sick pining, a vague restlessness, mental throes, yearning. In particular cases it may be the desire for somebody of something specific, nostalgia, love-sickness. At the lowest level it grades into ennui, boredom."

    I understand that the translatability thesis states that it does not have to be a 1-to-1 word translation but this particular word seems to encompass an entire English explanation to account for its significance - so could this be considered a counterexample?

    Replies
    1. Hey Linda - I like your example here and thought the passage really beautiful.

      That being said, I think you actually answered your own question! I feel like the fact that Nabokov was able to capture the meaning of the word in English - albeit in a considerably more long-winded way - rules it out as a counter-example by definition of the translatability thesis. In other words, just because we lack the term for it in English, does not mean we are unable to express it, even if it takes 50 words instead of one.

  15. I thought this article was very interesting; it shed light on a layer of language that is fairly new to me. There are several comments I would like to make after reading it.

    Firstly, the article recognizes that there are numerous ways to define language, both broad and narrow. However, the article mentions: "We will try to avoid the pitfalls of both the over-wide and the over-narrow view of language by proposing not a definition of language but a test to determine whether or not something is a language at all.”
    Although I think this is commendable, I think the article should still provide a definition of language that best represents it. Language is the basis of the article, and a definition would enhance its clarity and precision.

    Secondly, this is probably a bit nit-picky, but I don’t find the way the article introduces categorization, as “doing the right thing with the right kind of thing”, to be very powerful or provoking. This phrase is very general and could apply to many things, not specifically categorization. I don’t see the connection between categorization and that sentence.

    Thirdly, how are we sure that only human beings can learn categories by word of mouth (“instruction”)? It seems like a lot of the evidence in previous articles stems from research done on animals; why hasn’t this article considered doing the same? Although it is evident that animals don’t speak the same language as humans, perhaps with their own communication systems they are able to learn by instruction as well.

    Next, the article mentions that speakers construct new categories by defining and describing them via instruction. How are we sure that the two parties understand the same information and cognize the same category? Perhaps this is all by chance, and they happen to be understanding parts of the same information. The definitive framework for a “category” seems to be lacking here.

    Lastly, the article mentions that Universal Grammar is inborn. I am having trouble grasping this concept. It seems like this could be an argument of nature vs. nurture. There are many cases, such as the famous one of Genie, who lacked any form of language and grammar, and this could have been due to her being held captive and deprived of the input needed for learning.

    Replies
    1. Hi, I would like to address a few of the things you have said here (but not all of them).

      Firstly, 'how are we sure that only human beings can learn categories by word of mouth “instruction”? It seems like a lot of the evidence in previous articles stems from research done on animals, why hasn’t this article considered doing such? Although it is evident that animals don’t speak the same language as humans, perhaps with their own language they are able to learn by instruction as well.'
      I'm not quite sure where this is coming from, but animals don't have language. Humans are the only organisms who have language. Almost every organism on the planet has communication, and many of them have vocal communication. But that is not language, and it does not allow them to convey verbal instruction. The closest thing an animal might have to verbal instruction would be mimicry, but of course this is quite different.

      Secondly, 'the article mentions that universal grammar is inborn. I am having trouble grasping this concept. Firstly, it seems like this could be an argument of nature vs. nurture. There are many cases, such as the famous one with genie, who was lacking any form of language and grammar, and this could have been due to her being held captive and deprived of her learning abilities.' Universal Grammar is inborn. Language is not. There is no argument of nature vs. nurture here, because Universal Grammar is 'nature.' Experiments with week-old babies have shown that they are able to distinguish between different languages; this ability goes away after the baby matures to a certain age. So all babies are born with the ability to acquire any language. And if a baby is not exposed to language, the baby will not acquire language; this is certain. So you are entirely correct that the reason Genie had no grammar or language was that she was held captive, away from exposure to language. But this has nothing to do with putting the inborn nature of Universal Grammar into question: Genie was born with innate Universal Grammar but was unable to implement it, since she was not exposed to language.

  16. "So the origins of language, we suggest, do not correspond to the origins of vocal language; nor do they correspond to the origins of vocal communication. Language itself began earlier than vocal language (though not earlier than vocal communication, obviously)."

    I was wondering what distinction is being made here between “vocal communication” and “vocal language”? Obviously language precedes vocal language, but how does vocal language come before vocal communication? Wouldn’t vocal communication be a subset of vocal language? Vocal language would include all the formal symbol manipulation of language plus all the extra sound properties that speaking allows, such as phonology and vocal inflection, all of which can be used to communicate vocally. Lastly, why would vocal language be developed independently of its main use, which would be vocal communication? Without communication, there is little that vocal language allows us to do.

    With the chronology of language in mind, is there evidence that animal communication began as a non-vocal form of communication? Or could some other species have developed vocal communication before they adopted gestures and other non-vocal languages? Take for example birds and their courtship calls, which determine their fitness for sexual selection. It would seem that these vocalizations would have developed before any form of gesture, as they determined whether or not the organism could pass on its genes. With this in mind, does the chronology of vocal vs. non-vocal language production affect the characteristics of the language?

  17. "That advantage must have been enormous, to have become encoded in our genotypes and encephalized in our language-prepared brains as it did."

    First of all, I really appreciated this sentence right off the bat, because it spoke to my distaste for the last article: the phrase "encephalized in our language-prepared brains" is already a much better picture of language development. This obviously refers to UG, but I was just really happy with that phrase.

    I'm a bit confused about the claim that chimps have categories: "Chimps have categories. We keep training them to “name” their categories (whether with gestures, symbolic objects, computer keyboards, or words) — even to combine those names into quasi-propositional strings." This seems to contradict the earlier point that pantomime alone cannot convey new categories. Wouldn't chimps need pantomime "gestures" to convey the categories they have?

    Replies
    1. Also, I think this is finally a fantastic explanation of the way our human brains became biased toward language: "The tendency to acquire and convey categories by instruction thus grew stronger and stronger in the genomes and brains of the offspring of those who were more motivated and disposed to learn to do so. And that became our species’ “language-biased” brain." I've always felt that a big part of language involves motivation, and the idea that our primate ancestors may have had the capacity but not the motivation completely makes sense to me.

    2. Hi Alba,
      Nice quote! I have also always thought that language and motivation go hand in hand. For example, when someone learns a second language, a lot of motivation is needed. Learning a language takes a lot of effort, time and dedication. That could not be done without motivation. AND learning a second language can open many doors to different jobs/new friends etc., which in itself is motivating.

    3. Hey Alba!

      "I'm a bit confused how chimps having categories "Chimps have categories. We keep training them to “name” their categories (whether with gestures, symbolic objects, computer keyboards, or words) — even to combine those names into quasi-propositional strings. " but this contradicts what was said that pantomime alone cannot convey new categories. Wouldn't chimps need pantomime "gestures" to convey the categories they have?"

      I think I can clarify a little bit. Chimps, just like us, have categories, and are capable of category learning. They are able to learn how to do the right thing with the right kind of thing and label these kinds using arbitrary symbols (gestures, objects, keyboards, words...). This is unsurprising as categories are useful for knowing what to do and what not to do in the world. Chimps learn these categories via firsthand experience, or via induction (observation of descriptive pantomime in other members of the species who already possess the given category). As a result, chimps are able to use categories in a purely descriptive manner.

      What chimps do not possess is a capacity for propositions, or veridical statements about category combinations and the relationships between categories. This is the quality of language which allows for a third, additional method of category learning in humans: instruction (learning propositions about composite categories from someone who has already experienced them).

      Maybe that clears things up a little?


  18. “For those readers who have doubts, and think there is something one can say…. You will find you have generated your own counterexample (4).”

    I loved this quote.

    Sections 3.3 and 3.4 of this paper cleared up some of my confusion about what computation is (from way back in 1a): because symbols are arbitrary, it is just the things we attribute to them that matter (like truth for the statement 2+2=4).
    This is the autonomy of syntax, it seems: the rules that decide how we can manipulate symbols are independent and formal – they don't have anything to do with the meaning of the symbols. They are interpretable because of how we have learned to use symbols. This is pretty amazing and makes me put more stock in universal grammar as a theory – at least for things that are not natural languages, like coding languages and arithmetic – because there are so many combinations of ways you can put things together, and it often seems that learning a rule isn't enough to account for the amount of knowledge and recombination we can do.

    In terms of categorization this is really interesting – it makes me wonder whether the universal grammar system is not really just a universal categorization system built on instruction (maybe this is what the abstract was saying, but I am not totally sure). Since most of the words in our natural languages are content/category words, it seems like there is an innate UG/C (universal grammar with a specialization for categories). The categories themselves are not inborn, but the capacity for them might be….
    (A quick aside: aren’t there some categories that are inherent, like smiles and frowns, or the tastes of bitter versus sweet things? Does this have any bearing on the evolution of language – things like the moral emotions that are cross-cultural? Or is this just an example of the weak Whorf hypothesis?)

    ReplyDelete
  19. “When we put the induction-only learners in competition with the instruction learners, within a few generations the instruction learners had out-survived and out reproduced the induction learners, of which there were no longer any survivors left.”

    The instruction learners were able to learn from others as well as from direct experience: they could watch others and ‘steal’ their categories and understanding. Observation and imitation are key to quick learning and survival, as the paper points out, and communication enhances this ability as well as creating the possibility of teaching kin in order to keep them alive too. This example also suggests the importance of mirror neurons. When only non-verbal communication was at play, it is easy to imagine that the need for mirror neurons was tenfold: without the ability to explicitly say what action you are carrying out, mirror neurons would have provided the recognition of understanding when a knower purposely eats only mushroom C and avoids mushroom A.
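    To make the simulation concrete, here is a minimal sketch of the kind of artificial-life setup the paper describes (the names, parameters, and numbers are my own assumptions, not the authors’ actual model): induction learners pay a trial-and-error cost to acquire the mushroom category, instruction learners can acquire it cheaply from a knower, and reproduction is proportional to the foraging time left over.

        import random

        # Toy induction-vs-instruction simulation (hypothetical parameters,
        # not the authors' model). Each agent must learn to eat mushroom
        # kind C and avoid kind A before it can forage effectively.

        LIFETIME = 30         # total foraging/learning opportunities
        INDUCTION_COST = 10   # trials spent on trial-and-error sampling
        INSTRUCTION_COST = 1  # trials needed to learn from a knower

        def fitness(strategy, knowers_available):
            """Foraging time left over after acquiring the category."""
            if strategy == "instruction" and knowers_available:
                return LIFETIME - INSTRUCTION_COST
            return LIFETIME - INDUCTION_COST  # otherwise fall back on induction

        def next_generation(population):
            """Agents reproduce in proportion to their fitness."""
            knowers = any(s == "instruction" for s in population)
            weights = [fitness(s, knowers) for s in population]
            return random.choices(population, weights=weights, k=len(population))

        pop = ["induction"] * 50 + ["instruction"] * 50
        for _ in range(20):
            pop = next_generation(pop)
        print(pop.count("instruction"), "instruction learners out of", len(pop))

    Run repeatedly, the instruction learners typically take over the population within a handful of generations, which is the qualitative result the quote reports.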

    “Despite its dramatic benefits, however, learning categories through language, by symbolic instruction, is not completely equivalent to learning them by sensorimotor induction.”

    I feel like this is a really important point. Although, as the previous quote points out, the instruction learners may have out-survived and out-reproduced the others, that cannot take away from the value of sensorimotor induction: without it in the first place, no learning could take place at all. The referent would be lost, and understanding would have no ground to stand on.

    ReplyDelete
  20. This paper touches on the phenomenon that is language, and its possible origins. It begins by explaining the relationship between show and tell: before linguistic ability “existed”, the only way to learn and acquire categories was through actions, showing in a way that gives feedback to the learner. Somewhere along the way, an ability to verbally transmit this information emerged. Then an attempt is made to explain what language is. It emerges that language itself is not what is most important to study, as it is a product of UG (Universal Grammar), an innate ability to acquire language without the need for feedback. Therefore, rather than studying the origin of language, it might be more interesting to study the origin of UG. UG is quite a powerful tool: it gives the ability to generate any possible and imaginable well-formed sentence with a truth value (although the actual truth value is of little importance). A protolanguage is a hypothetical system used to test whether or not it is truly possible to express anything in a given language. A symbol is an arbitrary shape/object used to refer to an object: the shape of the symbol does not actually point to what its referent is. Categories are kinds of objects, and categorization is doing the right thing with the right kind of thing. This leads to the symbol grounding problem: how do we know what to do with the symbols referring to an object, and how do we ground a symbol by linking it to its meaning? A simulation of the origin of language was attempted, but it is still unclear how language emerged in the first place (because even in the simulation there were preexisting entities able to “voice”). Language is powerful in that it can be used to teach and acquire categories, but also to combine them into bigger categories and communicate those to other thinking organisms.

    ReplyDelete
  21. "We are not sure whether chimps really do get it. They get the associations and the contingencies; but do they get them as propositions, with truth values? It’s It is hard to say. What’s most puzzling is why they don’t seem to pick up on the power that is within their hands when they start systematically combining symbols to define new categories. What seems missing is not the intelligence but the motivation, indeed the compulsion, to name and describe."

    In the discussion on evolution it was mentioned that traits can coexist in multiple species because of shared ancestry, and also because the general adaptiveness of a trait makes evolution select for it in several more distantly related species! It seems curious that this has not been the case for something as adaptive as language, and as such I find the discussion of the 'disposition to propose' particularly interesting.

    ReplyDelete
  22. On the discussion of chimps:

    On the topic of whether chimps learn language, Harnad argues that they never really learn language: they have categories, and we train them to name these categories. Moreover, they never progress from this learned categorization to making new categories.

    Harnad is right that "we are really not sure whether chimps really do get it." They may or may not understand, when they are taught to communicate "banana," that it has a truth value. But how important is the power of the proposition when determining whether chimps learn language? For example, honeybees have a waggle dance to communicate information about patches of flowers for nectar and pollen. Obviously, the waggle dance is not a language, with symbols and grammar, but the action in itself is a proposition. So if propositions and language can come apart, can the proposition be a determining factor in whether chimps really do get it or not?

    Harnad also suggests that the reason that chimps do not go on to name and describe new things is that they lack the motivation to do so. This to me is very unsatisfying and does not seem to answer the question.

    A question that I had from in class discussion and this paper is how neural nets are determined.

    ReplyDelete
  23. ‘’Baldwinian evolution began to favor this disposition to learn and to use symbols to name categories and then to recombine their names in order to predicate and propositionalize about further categories, because of the adaptive benefits that category description and sharing conferred’’

    In the history of language evolution, I wonder whether the fact that our hands, facial muscles and vocal systems are so much more adapted to encompassing a complex language system was the reason that we were able to first develop language, or whether we developed these systems as the adaptive advantage of intricate communication began to show through the passing down of genes.

    Humans have a much more agile vocal system than many animals, and a finely tuned auditory perceptual system. When did these appear: before language, or due to the evolution of language?

    If categorical description and sharing these descriptions was the initial aim of language, then how did language develop into such a rich and complex system? Surely, for evolutionary advantage, simplicity would be best? Why bother to evolve a Universal Grammar? When and why did language stop being merely an adaptive system and become such an important cultural tool and such a nuanced mechanism of communication? And if language is such an advantage, why haven’t other species also evolved a similar system?

    ReplyDelete
  24. “Those artificial languages do have formal definitions, but we suggest that in reality all of them are merely parts or subsets of natural language. “

    You mentioned this in class as well, and at the time it seemed intuitive to me. But now that I think about it a little more, I realize I have trouble coming up with an explanation of why this is the case. My understanding is that the reason is that an artificial language like mathematics or C++ cannot make “any and every proposition,” whereas natural languages can. And furthermore, because natural languages can make any and every proposition, it follows that they can make all of the propositions that can be made by an artificial language, rendering that artificial language a subset of the natural one.

    Where I get stuck, though, is thinking about propositional logic. With all of the operators (∧, ∨, &, ||, etc.), isn’t it the case that we can make any and all propositions? If not, what would be an example of a proposition it could not make? If so, does propositional logic constitute an artificial or a natural language (or perhaps neither)? It seems as though any natural language proposition could simply be captured by, say, “A”, where A represents "the cat is on the mat," "the Pinker paper was long," or any other conceivable proposition.

    ReplyDelete
    Replies
    1. To put a finer point on my questions, the part I’m actually having trouble understanding is this:

      “when we manipulate natural language symbols (in other words, words) in order to say all the things we mean to say, our symbol manipulations are governed by a further constraint, over and above the syntax that determines whether or not they are well-formed: that further constraint is the “shape” of the words’ meanings.”

      If we tie propositional logic variables (of which we have an infinite supply: A, A’, A’’ and so on) to meaningful and grounded natural language, does this cause the formal language to become a natural one?
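      A concrete way to see the gap (my own example, not from the paper): let A = "the cat is on the mat" and B = "the mat is on the floor". Propositional logic can then express A ∧ B, ¬A, or A → B, but it cannot express "the cat is on something", because the internal subject/predicate structure of A is invisible to it: A is an unanalyzed atom. Predicate logic recovers some of that structure (On(cat, mat)), but only for the relations we explicitly formalize, and the grounding of "cat" and "on" still has to come from outside the formal system. So tying the variables to grounded words would not seem to make the formal language natural; it just borrows natural language's grounding, which I take to be the sense in which formal languages are subsets of natural language.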

      Delete
  25. So, if I understand the claim of the paper, the main difference between an ape’s ability to sign categories — and even combine words into novel categories, like pizza = bread tomato cheese — and our fully developed propositional speech is our drive to do it more?
    This makes some sense to me, but fundamentally I find it difficult to bridge the behavioral gap between naming compound objects and predicating. And to claim that one is merely repetition of the other with increasing complexity… I’m not sure.
    But then again, it is helpful to remind myself that languages without the ‘is’ predicate, wherein “apple red” is a grammatical proposition, show great resemblance to the compound objects of the “bread tomato cheese” variety. Perhaps it did just come down to our compulsive obsession with naming things that eventually generated our speech.

    ReplyDelete
  26. Protolanguage:

    This part of the paper left me slightly confused: I did not quite understand the concept of a protolanguage. What is a protolanguage? It seems like the homunculus argument all over again to me. If a protolanguage can express everything you need it to, then shouldn’t it be considered a language (and not a protolanguage)?

    ——

    Combining and communicating categories:

    The question “Was the power of language invented or discovered?” got my attention. It got me thinking: did categories come first, or did language? Did we create language because we started categorizing things, or did we start categorizing things because we could express them through language?

    ——

    Response to Oliver H:

    “I’m confused as to how explaining the origin of language as conventionalized communication explains how UG is inborn, since the kernel is learned”

    By conventionalized communication are you implying that there could have been other ways to communicate other than language? I may be misinterpreting your point.

    The fact that UG is an inborn framework helps us to develop our communicative/linguistic skills. When we talk about language conventions, we get different languages, but all languages have many common aspects (which we could term UG). Additionally, the kernel is learned, and it is what the UG framework operates over for us to get ‘language’.

    ReplyDelete
  27. Is this article saying that gestural – or pantomime – communication was truly the forebear of all other forms of communication? I take it the idea is to show the evolutionary progression of language from “grunting” to Virginia Woolf. I think the paper is very successful in showing the how and why of propositions, as well as producing empirical data in support of the advantage of instruction over induction. I would definitely say that Stevan’s “show to tell” concept is worth investigating further, especially the advantage of instruction (given that Stevan at least proved it can be studied).

    I would like to propose that Stevan’s concept of the “pantomime” as the essential start of language might be incomplete. Pointing and uttering sounds are not the only ways to communicate non-verbally; I’m thinking of cave drawings. First of all, does Stevan think that drawing can only come after the development of language? That seems unlikely to me. Drawing, just like grunting and pointing, is meaningful and non-arbitrary. What’s remarkable about cave drawings is that we can look at well-preserved examples and pull something out of them: these people were drawing things, and those things resemble kinds of things in the world. I don’t think it’s a stretch to say that someone could learn instructively from pictures. If someone already possesses inductive knowledge of the categories represented in a picture (for instance, a mammoth, a person, and blood), the drawing could possibly convey new categories to the learner (mammoth belongs to the category of dangerous things). Is it that syntactic structure for uttered sounds is what makes instruction possible, or merely that some induced categories have already been learned, in whatever way?

    ReplyDelete
  28. “Many of these cognitive components (and probably other ones too) were already available before language began (though some of them may also be effects rather than causes of language). It could well be that our ancestors had the power of communication by pantomime before the advent of language; but you can’t convey new categories by pantomime alone.”

    This quote confused me as to whether it is really an argument for natural selection, or an argument for language as a by-product of some other mechanism. Stating that the cognitive components for language already existed prior to the existence of language itself seems to suggest some process or mechanism of discovery (possibly the other mechanism referred to by Chomsky when refuting the possibility of natural selection in language creation, or the non-adaptationist mechanisms of neo-Darwinian evolutionary theory) by which language is produced. While the 8a reading convinced me otherwise, this reading leaves me confused over whether the process labelled natural selection here is simply a discovery process for a more effective or different use of an existing complex system. As in the example of wings being created for flying and then used as a shading mechanism, there would still have to be some mechanism of discovery by which this change of function is determined and selected for.

    We use the ideas of ‘natural selection’ and ‘adaptation’ exclusively to describe a selection that is against the odds in terms of increasing complexity, but with the odds in terms of increasing chances of survival. This makes sense, and it makes sense that we think of a change differently when the odds of its occurrence are statistically favourable both to be discovered and to be repeated (thus arise the non-adaptationist mechanisms of change). The argument in this paper, based largely on the ease with which something so complex could come about in our species given so many pre-existing structures and an existing high level of intelligence, seems to be that language came about because its existence was statistically favourable, as were the odds of its creation given the existing structures. Perhaps this argument, while showing that language’s creation was adaptive, is better suited as evidence for the neo-Darwinian theory of evolution, showing that non-adaptationist mechanisms are far more relevant than we initially believed.

    It also seems that this paper has divided the evolution of language and language structures into a different time frame so as to argue for its natural selection. Stating that the existence of complex structures is what allowed for its natural selection may be providing evidence for a new understanding of natural selectionist and evolutionary theory altogether. Depending on the time frame that is denoted, one can say that language is a by-product of the evolutionary process selecting for intelligence and the other neural mechanisms that language relies on, or one can say (as in this article) that language was specifically selected for in light of the existence of these complex neurological mechanisms. “Natural selection” may in fact be more of a combination of non-adaptationist and adaptationist approaches than we previously considered, allowing us to bypass the idea that there always has to be some intermediate state.
    Ironically, the argument seen in the rhetoric of Chomsky and others, that language is so unbelievably complex that it cannot be a product of natural selection at all, doesn’t seem to fit anywhere when mapping the possibilities of its origin. No matter how complex a mechanism, with a strong enough adaptive potential (which clearly exists here), it can be created. The complexity of language further attests to the adaptive advantages that clearly resulted from its use.

    ReplyDelete
  29. “We suggest that this is how the proposition was born. Learners may have begun picking up new composite categories through passive observational learning. But once the knower—motivated to help kin or collaborators learn by sharing categories—became actively involved in the communication of the composite categories, it became intentional rather than incidental instruction, with the teacher actively miming the new category descriptions to the learner. The category “names” would also become shorter and faster— less iconic and more arbitrary—once their propositional intent (the “propositional attitude”) was being construed and adopted mutually by both the teacher and the learner.”

    I’m confused about the process of teachers teaching new category descriptions by miming those descriptions to learners. The authors write in section 4.3: “It could be that our ancestors had the power of communication by pantomime before the advent of language; but you can’t convey new categories by pantomime alone. All you can do is mime objects, events, and actions, and to a certain extent, properties and states (including requests and threats).” If you can communicate objects, events, etc. through pantomime, why can these not constitute explanations of new categories that can be passed from teacher to learner without language? If category names, once they become arbitrary, are represented by their “shape” alone, and thus their sensory modality no longer matters, why can this “shape” not be expressed through pantomime as a form of symbol grounding (versus language)? I understand the advantages/adaptive value of being able to use your hands to do something else while communicating, rather than relying on gestures. But I don’t understand how new categories cannot be communicated by pantomime alone; for example, does sign language not do exactly that?

    ReplyDelete
  30. One part of this article states that the natural candidate for learning categories is direct sensorimotor experience, and then goes on to explain how explaining using language (ie through dictionaries or another human) develop as another method for learning categories (and facilitated the development of language in the process). This makes me wonder about any other potential candidates for category-learning. Many people mention to me (when they learn that I’m a cognitive science student) that they have a fantasy of hooking their brains up to an information super-cable and learning all that there is. I usually respond skeptically, citing lack of technology and the fact that we would not be acquiring this information by experience. Reading this, however, made me go back and question these assumptions. Perhaps there will be a way to simulate experience, but speeded up, so that categories can be acquired (and learning can occur). Or perhaps a fast-forwarded experience will not be necessary; we can simulate the necessary neural arrangements and create false memories and associations to them so as to facilitate recall. If these associations were not created would we still possess the categories? I think not. This is likely all fantasy, but not necessarily untenable.

    ReplyDelete
  31. After reading this article, my biggest questions lie with UG, the Kernel, and the MGS. In addressing the symbol grounding problem with language, the paper claims UG is "a complex set of rules that cannot be acquired explicitly from the child’s experience because the child does not produce or receive anywhere near enough information or corrective feedback to learn the rules of UG inductively, by trial and error. So UG must be inborn." While my intuitive question is whether UG therefore IS language, I have given it a bit more thought and decided to explore why exactly UG must be inborn. First of all, what exactly is UG, and how can we be more specific about what it is, other than "a complex set of rules"? Perhaps this is better explained elsewhere, but the majority of the definition of UG is in terms of what it is not; so what exactly is it, or are we still somewhat unsure?

    In addition, the Kernel and the MGS, while very interesting to learn about, seem to me more specific to 'natural language' as we know it in the present day, and do not necessarily apply to the thousands and thousands of years over which language has evolved through founder effects, drift, hybridization, and adaptation. For example, putting UG aside for now, the Kernel is said to be about 10% of the dictionary, or ~500 words (in an English dictionary, I'm assuming). What would the Kernel have looked like when we first started using what we call 'natural language'? Surely it must have been vastly different in many facets, so while it may be unlikely that a child can learn 500 content words through induction (though not impossible), it may have been much easier for children 10,000 years ago to learn their version of the Kernel, or rather the MGS. And thus symbols could have been grounded through induction alone, without the presence of so-called UG. The article specifically claims that the MGS is "the smallest number of words from which all the rest can be reached by definition alone," and that it is not unique. If our current MGS can vary, then why couldn't those of a natural language long ago have varied enough that language could have been taught without an inborn UG, but rather just with something that evolved, through natural selection, as the capacity to learn any language (natural or otherwise)? So many animals have variations of language, none of which would be considered 'natural language,' so what makes UG so special to humans?
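    To make the Kernel/MGS idea concrete, here is a minimal sketch of the dictionary-graph analysis (the toy dictionary and the function are my own illustration, not the authors’ data or code): a word becomes learnable by definition alone once every word in its definition is already known, so a grounding set is any set of words whose closure under that rule covers the whole dictionary.

        # Toy dictionary: word -> the words used in its definition.
        # Self-defined entries are circular, so they can only be
        # grounded directly, by sensorimotor induction.
        dictionary = {
            "red":   ["red"],
            "fruit": ["fruit"],
            "drink": ["drink"],
            "apple": ["red", "fruit"],
            "cider": ["apple", "drink"],
        }

        def learnable_from(grounded, dictionary):
            """Close a grounded set under 'all defining words are known'."""
            known = set(grounded)
            changed = True
            while changed:
                changed = False
                for word, definition in dictionary.items():
                    if word not in known and all(w in known for w in definition):
                        known.add(word)
                        changed = True
            return known

        print(learnable_from({"red", "fruit", "drink"}, dictionary))
        # {'red', 'fruit', 'drink', 'apple', 'cider'} -- the whole dictionary

    Here {"red", "fruit", "drink"} is a grounding set, and finding the smallest such set in a real dictionary graph is the hard minimization problem behind the MGS; as noted above, such a minimal solution need not be unique.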

    ReplyDelete
  32. "We suggest that this is how the proposition was born. Learners may have begun picking up new composite categories through passive observational learning. But once the knower - motivated to help the kin or collaborators learn by sharing categories, it became intentional rather than incidental instruction, with the teacher actively miming the new category descriptions to the learner."

    This is a concept I find fascinating. I've been consistently troubled by this question since it came up: how and why did humans make the transition from pantomime (simple category description) to proposition (making veridical statements about categories and how they are defined by other categories)? This article gave me something fairly substantial to chew on as far as the why, but I'm not as convinced by the how.

    As before, I'm still haunted by Noam Chomsky's acknowledgement that "[an innate language faculty] poses a problem for the biologist, since, if true, it is an example of true 'emergence' - the appearance of a qualitatively different phenomenon at a specific stage of complexity of organization."

    The passage from this week's article that I cited would claim that the transition from pantomime to proposition was the transition from incidental to intentional instruction. In my mind, I don't see how this solves the problem (which maybe the article wasn't even intending to do, but I'll make it my scapegoat anyways, so I can vent my troubles). Incidental to intentional seems as much a leap to me as description to proposition, almost by definition. Watching someone (your teacher) do the right thing with the right kind of thing is fine and dandy as a way of learning categories and the affordance of certain objects, but as soon as your teacher starts explaining things intentionally, and combining categories to designate new categories, we're in the realm of propositions. It would seem to me that this evolutionary explanation of how the proposition was born doesn't seem to close that wide gap between showing and telling.

    I still can't conceive of a way that a single favourable mutation could give us propositional language, or how propositional language could have arisen gradually as a result of an accumulation of favourable mutations (since we've agreed that protolanguages cannot exist).

    I feel that the Pinker article didn't even try to address this question, and while this paper was convincing, I was still left somewhat unsatisfied.

    ReplyDelete
  33. 1. “Notice that in a formal symbol system we have true “autonomy of syntax,” in the sense that the rules governing the symbol manipulations are completely independent and formal: they have no recourse to meaning.”
    I’ll try to make sense of “autonomy of syntax” in the context of language, to make sure that I get it right. So in the case of “linguistic” syntax, the rules would be word order (where the verb, the subject and the object go in the sentence) and things of the sort? These would be autonomous from meaning because the word order determines the shape of the sentence (in English: Subject, then verb, then object) without any meaning having to be assigned to each word.
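    That reading can be made concrete in code. Here is a minimal sketch of the point (my own toy grammar and lexicon, nothing from the paper): the well-formedness check consults only each word's formal category tag, never what the word means, which is why a semantically absurd string can still pass.

        # Toy "autonomy of syntax" check: symbols are classified by
        # arbitrary category tags, and the rule never consults meaning.
        LEXICON = {
            "the": "Det", "colorless": "Adj", "green": "Adj",
            "cat": "N", "mat": "N", "ideas": "N",
            "chases": "V", "sleep": "V",
        }

        def well_formed(words):
            """Crude SVO-ish rule: exactly one verb, with a noun before it."""
            cats = [LEXICON[w] for w in words]
            if cats.count("V") != 1:
                return False
            return "N" in cats[:cats.index("V")]

        print(well_formed("the cat chases the mat".split()))       # True
        print(well_formed("colorless green ideas sleep".split()))  # True: well-formed but meaningless
        print(well_formed("chases the mat".split()))               # False: no subject noun

    The second example (Chomsky's "colorless green ideas sleep furiously", shortened) passes the formal check despite having no sensible meaning, which is exactly the sense in which the syntax is autonomous.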

    2. “To construe A is B as a proposition—as asserting something true about the world, rather than just producing a string of associations for instrumental purposes—may have required a new cognitive capacity, perhaps even one with a genetic basis.”
    So, at the stage of pure showing, before the new “cognitive capacity” was developed, things like “A is B” would not have existed? At that stage, any “showing” would simply create an association between things in the world and the showing, not between elements within the showing? So if, for example, I am miming a cat, the mime is not propositional, because it only establishes an association between the real-world thing that is a cat and my actions, and does not create a proposition linking the cat to something else (another category)?

    ReplyDelete
  34. "We are not sure whether chimps really do get it. They get the associations and the contingencies; but do they get them as propositions, with truth values? It’s It is hard to say. What’s most puzzling is why they don’t seem to pick up on the power that is within their hands when they start systematically combining symbols to define new categories. What seems missing is not the intelligence but the motivation, indeed the compulsion, to name and describe"

    I never really thought about it like this before. Chimps and other primates are known to be extremely intelligent and extremely similar to humans, but perhaps what differentiates their intelligence from ours is the motivation. This leads me to wonder if there is any less feeling there. Motivation does stem from some sort of feeling... so perhaps what differentiates the extremely intelligent apes from humans is the same thing that differentiates us from robots that (we assume) don't feel. I know that apes are thought to have feelings like jealousy, empathy, etc., but their lack of motivation to explore and use their knowledge to produce further results really does make me wonder whether robots would have any motivation to innovate either.

    ReplyDelete