tag:blogger.com,1999:blog-8019050899542469366.post8520878835704504671..comments2021-01-07T07:56:36.212-08:00Comments on CatComConM2016: 5. Harnad, S. (2003) The Symbol Grounding ProblemUnknownnoreply@blogger.comBlogger93125tag:blogger.com,1999:blog-8019050899542469366.post-17209753739551795622016-04-16T04:18:06.918-07:002016-04-16T04:18:06.918-07:00A tentative first estimate of the "minimal gr...A tentative first estimate of the "minimal grounding set" -- the smallest number of words that can define all the other words -- is made in 8b. We now think it's about 1500 words, but it's not unique. There can be many variations. And it's almost certain that it's not just matter of learning exactly 1500 directly and then spending the rest of one's life in a room doing T2! Categories surely continue to be learned both ways -- by sensorimotor induction as well as verbal instruction -- throughout life.Stevan Harnadhttps://www.blogger.com/profile/14374474060972737847noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-75152699284333522092016-04-16T04:13:10.707-07:002016-04-16T04:13:10.707-07:00This is more fully explained in the readings on ca...This is more fully explained in the readings on category learning and language. The members of the same category differ but share some common features (invariants), although the invariants may be described by a complex rule ("A or not B or if C then D..."<br /><br />Words with multiple meanings name different categories, with different invariants.<br /><br />None of this explains anything about feelings, because of the "hard problem."Stevan Harnadhttps://www.blogger.com/profile/14374474060972737847noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-12745968985957512392016-04-15T20:39:52.841-07:002016-04-15T20:39:52.841-07:00After re-reading this article and the comments on ...After re-reading this article and the comments on this thread; I had another question that I think deserves further research. After doing all the readings from the course, I am aware that most words do not have to be grounded directly; we can learn their meaning indirectly from things like explanations, as long as the word in the defenition is grounded (directly or indirectly). Since the symbol grounding problem is that there cannot be fully indirect grounding, my question is then how many words have to be grounded directly?Anonymoushttps://www.blogger.com/profile/10194345390540485438noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-31409037473610754732016-04-14T18:21:29.055-07:002016-04-14T18:21:29.055-07:00I just had one question that I wanted to address. ...I just had one question that I wanted to address. I like how the author explained the derivation of meaning of a word by linking it to the various representations and subsequent experiences. However, what would then be the common feature that would help unify all of these variations? Would it be a singular feeling? Is the author saying that ambiguous word’s meaning comes from sensorimotor capacities? Do we need to experience this common feeling to help understand the meaning of a word? Anonymoushttps://www.blogger.com/profile/10194345390540485438noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-17622375633721958622016-03-28T19:07:27.633-07:002016-03-28T19:07:27.633-07:00The symbol grounding problem deals with how words ...The symbol grounding problem deals with how words get their meanings and then raises questions related to what meaning actually is. From the examples of the Chinese Room and the Chinese Dictionary, it can be seen that the meanings of words are grounded in other words. Essentially, words are symbols and we see that these symbols have meanings grounded in other symbols. This is the symbol grounding problem. <br /><br />Obviously, the meaning of a word has a sense and reference. But this problem helps us conclude that there is something more than that; something more than just sensorimotor experience and computation grounded in meaning. Can we say that there is a sense of feeling grounded in words and meanings? <br /><br />Since the symbol grounding problem is essentially a symbol system, it is closely related to computation and this really interests me. The words we process, are they the dynamic in our heads or are they static as we see them on paper? The symbol grounding problem raises more questions against the view of computationalism.<br /><br />——— <br /><br />Another perspective: since word definitions lead to an infinite regress, there is an external factor that we are not accounting for that links words to meanings. In this case we assume it is “feeling”. But what if it’s not feeling and it’s senses? We could be using our senses/perception to link words we hear/read to referents in the physical world. <br /><br />Maybe AI will figure it out once we have reached a phase where AI perception is strong enough.Nivit Kochharhttps://www.blogger.com/profile/02862823112183046960noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-63459433836616367492016-03-26T11:34:47.607-07:002016-03-26T11:34:47.607-07:00Regarding your first reply above:
“Grounding does ...Regarding your first reply above:<br />“Grounding does not require consciousness (unless you can solve the hard problem and explain how and why!). Grounding requires connecting symbols to their referents using T3 (robotic) capacity.”<br /><br />I was hoping you could clarify this in contrast to the following point from your paper:<br />“Consciousness. Here is where the problem of consciousness rears its head. For there would be no connection at all between scratches on paper and any intended referents if there were no minds mediating those intentions, via their internal means of picking out those referents.<br />So the meaning of a word in a page is "ungrounded," whereas the meaning of a word in a head is "grounded" (by the means that cognitive neuroscience will eventually reveal to us), and thereby mediates between the word on the page and its referent.”<br /><br />Was it simply a little misleading to bring up consciousness here? Because it seems from your Skywriting comment that, in fact, it isn’t consciousness that “mediates between the word on the page and its referent,” rather just the internal mechanism of the mind (or the T3 robot). <br /><br />Regardless of whether or not my above point hits the mark, can you elaborate more on what you mean by “the problem of consciousness rear[ing] its head”? If the problem isn’t that grounding requires consciousness, as you point out in your comment, then I’ve missed the point.<br />Anonymoushttps://www.blogger.com/profile/15484890696945967505noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-54652448251953447732016-03-20T22:58:14.558-07:002016-03-20T22:58:14.558-07:00I think the most enjoyable aspect that I pulled ou...I think the most enjoyable aspect that I pulled out this article is how Stevan creates a distinction between grounding and meaning. Well, pointing out that meaning might actually be more than just grounding. All of meaning might not be captured by understanding what grounding is. This, I think, is where the paper becomes especially fascinating. This is because Stevan really can’t prove the distinction between grounding and meaning in the same way the rest of his paper focused on the distinction between computation (symbol-manipulation) and cognition (everything humans can do). I would like to ask what Stevan means when he is talking about some fascinating extra property that humans have which computers do not necessarily have. Stevan says that a T3-passing robot might not have what Searle has in his head: “It could be a zombie, with no one home, feeling feelings, meaning meanings.” What does that mean, exactly? What is this functional capacity? <br /><br />Anyways, sensorimotor grounding is super interesting to me. According to Stevan, words are not grounded in other words, but they can be grounded through sensory capacities and motor capacities. So when one smart-ass-undergrad asks another undergrad to explain the meaning of coffee, the explainer could explain by funneling hot coffee directly down the throat of the questioner. It is undoubtedly a powerful definition of coffee, which is grounded in the sensorimotor capacities of the meaning-making subjects that are involved. <br />Anonymoushttps://www.blogger.com/profile/13002631969647958913noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-87464425328585595342016-02-17T20:44:45.907-08:002016-02-17T20:44:45.907-08:00"If both tests are passed, then the semantic ..."If both tests are passed, then the semantic interpretation of its symbols is "fixed" by the behavioural capacity of the dedicated symbol system, as exercised on the objects and states of affairs in the world to which its symbols refer; the symbol meanings are accordingly not just parasitic on the meanings in the head of the interpreter, but intrinsic to the dedicated symbol system itself."<br /><br />I feel like this passage captures the essence of symbol grounding. I feel as if a lot of people have failed to grasp what the symbol grounding problem really tells us about cognition, so I would like to refer back to the opening statement (where Harnad refers to the Chinese room), to clarify what exactly the intrinsicity of the system implies: <br /><br />Talking about the Searle's Chinese room: "The symbols and the symbol manipulation, being all based on shape rather than meaning, are systematically interpretable as having meaning -- that, after all, is what it is to be a symbol system, according to our definition. But the interpretation will not be intrinsic to the symbol system itself: It will be parasitic on the fact that the symbols have meaning for us, in exactly the same way that the meanings of the symbols in a book are not intrinsic, but derive from the meanings in our heads. Hence, if the meanings of symbols in a symbol system are extrinsic, rather than intrinsic like the meanings in our heads, then they are not a viable model for the meanings in our heads: Cognition cannot be just symbol manipulation."<br /><br />So here Harnad is saying that in Searle's scenario, interpretation of the symbol system is parasitic on the meanings that symbols have in our heads and not intrinsic to the system itself (i.e. extrinsic). If meaning of symbols is extrinsic to the "system" then cognition cannot be just symbol manipulation because symbol manipulation alone would not produce intrinsic meaning. Going back to the passage at the end, Harnad suggests that symbol meanings are not only parasitic on the meaning inside our heads but are also intrinsic to the symbol system itself. Hopefully this makes sense, although with all this "intrinsic" and "extrinsic" business it isn't exactly kid sib. Anonymoushttps://www.blogger.com/profile/05119485457325429564noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-44206373654094642482016-02-11T14:37:24.133-08:002016-02-11T14:37:24.133-08:00From reading some of the other comments here, it s...From reading some of the other comments here, it seems that some individuals are confusing the hard problem and the easy problem with regards to the symbol grounding problem, and I can certainly see why. I myself was a little confused as to what 'solving' this problem would mean at first as well, since we talked about how it feels like something to 'understand,' and yet solving the symbol grounding problem apparently does not require such feelings.<br /><br />I think I better understand now the separability of the easy and hard problem now. <br /><br />"The present grounding scheme is still in the spirit of behaviorism in that the only tests proposed for whether a semantic interpretation will bear the semantic weight placed on it consist of one formal test (does it meet the eight criteria for being a symbol system?) and one behavioral test (can it discriminate, identify and describe all the objects and states of affairs to which its symbols refer?). If both tests are passed, then the semantic interpretation of its symbols is "fixed" by the behavioral capacity of the dedicated symbol system, as exercised on the objects and states of affairs in the world to which its symbols refer; the symbol meanings are accordingly not just parasitic on the meanings in the head of the interpreter, but intrinsic to the dedicated symbol system itself." (Harnad, 1990)<br /><br />These lines in the closing paragraph I found helpful in teasing apart just what sort of solution we are after for the symbol grounding problem. While feeling certainly accompanies the vast majority of our our conscious mental states, the easy problem is searching for why and how we have the capacity to behave and cognize the way we do, NOT why and how it feels like something to when we do. In order to have a T3 robot pass the Turing Test, it must be behaviourally equivalent, and that requires it possessing a grounded symbol system in the sense mentioned above. Without this capacity to reference the outside world as a link to the symbols it manipulates, it can never be considered behaviourally (or cognitively?) equivalent. <br /><br />In short, the easy problem here is understanding why and how we have the capacity to understand (language), which is what the symbol grounding problem posits, and the hard problem is why and how it feels like something to understand, which is is inessential to the question at hand. Anonymoushttps://www.blogger.com/profile/08829874383703767370noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-32667990344097490452016-02-11T13:52:55.412-08:002016-02-11T13:52:55.412-08:00"Discrimination is independent of identificat..."Discrimination is independent of identification. I could be discriminating things without knowing what they were. " (Harnad, 1990)<br /><br />Hi Freddy!<br /><br />I think you might have the Chinese room experiment and the Rosetta stone a bit confused!<br /><br />In Searle's thought experiment, he is receiving inputs of Chinese characters and has some rules in english that say "if you receive THIS string of characters, then send out THAT string of characters." You can think of it as being equivalent to him sitting in a room and having inputs and outputs of strings of shapes with rules like: "if you receive the string: square, triangle, circle, then send out star, octagon, circle" and other arbitrary rules for every type of string input you could possibly receive. Whether Searle is receiving strings of Chinese characters or strings of shapes, it remains that he cannot determine meaning from these arbitrary symbol manipulations without some reference to their real world meaning (thus why T2 is not adequate enough to pass the Turing Test, since interaction with the world (T3) is the only way that one can ground such symbols). <br /><br />The Rosetta Stone on the other hand was a decree written in 3 languages: Ancient Egyptian hieroglyphs, Demotic script, and Ancient Greek. While there had been no knowledge of how to read hieroglyphs, upon discovery of this stone, the Greek text, which was widely known by scholars, provided a source of reference for hieroglyphic text so that meaning could be understood from the hieroglyphs. <br /><br />To contrast this with Searle's Chinese room: <br /><br />If Searle's thought experiment followed a Rosetta Stone-esque template, then the rules he would for deciphering, say a made up shape language, would be something more like this:<br />"If you receive the string: 'square, triangle, circle' the person is asking 'how are you?', you should respond with, 'star, octagon, circle'."<br />After a while, of back and forth, you would in fact come to ascribe meaning from these words - however, as I mentioned before, this is not how Searle's actual thought experiment (nor how a T2 computer program with no outside references) functions.<br /><br />I think the quote I put at the top sums it up nicely:<br />In Searle's case he is just discriminating between symbols provided, on the other hand, your Rosetta Stone example is actually a case of identification.<br /><br />Hope this makes sense!Anonymoushttps://www.blogger.com/profile/08829874383703767370noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-52673252475067915382016-02-09T21:28:38.015-08:002016-02-09T21:28:38.015-08:00I find the effect Harnad (2003; 1990) terms the “h...I find the effect Harnad (2003; 1990) terms the “hermeneutic hall of mirrors” very interesting. Harnad (1990) defines the hermeneutic hall of mirrors as “[the] illusion one creates by first projecting an interpretation onto something (say, a cup of tea<br />leaves or a dream) and then, when challenged to justify that interpretation,<br />merely reading off more and more of it, as if it were answerable only to itself” (Harnad, 1990). I wonder how this may relate to the earliest interpretations of cryptologists, who are only able to decipher “ancient languages or secret codes” in modern day because they can ground their exploration in their first language and in their knowledge of the real world. In the footnotes it is explained that “cryptologists also use statistical information about word frequencies, inferences about what an ancient culture or an enemy government are likely to be writing about, decryption algorithms, etc.” (Harnad, 2003). Having access to this sort of relevant contextual information seems de facto to the task of interpreting any code or symbol system that is entirely foreign and initially appears very arbitrary. But what did cryptologist do before this type of information was available? Clearly it is possible to eventually “crack the code”, even without statistical information, algorithms, or the like. How would this be accomplished? In the same vein, without the help of modern technology/contemporary knowledge how did cryptologists not fall victim to the hermeneutic hall of mirrors? How did one recognize the presence or absence of a pattern/syntax in a series of symbols that may or may not be grounded? <br />Anonymoushttps://www.blogger.com/profile/07528898581889060359noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-79151149871842168082016-02-09T21:17:40.819-08:002016-02-09T21:17:40.819-08:00I found this article to be very insightful and inf...I found this article to be very insightful and informative. There are just a few things I had trouble grasping that I wanted to clarify. Firstly, if the meaning of a word is the rules/features that one must use to pick out of its referent and there is no explicit way to uniformly pick out rule, would we obtain some kind of uniform result if we were to work backwards? If we were given the meanings would we be able to identify the rules; and thus identify the referent. Would this help establish some kind of uniformity for the rules for referents and therefore meaning? Due to the large ambiguity; if we are unable to do so, then how can we even come up with the meaning of words? <br /><br />In addition, I just wanted to clarify the following statement. "So the meaning of a word in a page is "ungrounded," whereas the meaning of a word in a head is "grounded" (by the means that cognitive neuroscience will eventually reveal to us), and thereby mediates between the word on the page and its referent.”<br /><br />I don’t think there is enough evidence to support this statement and it seems like it is slightly flawed. It seems like they are both intertwined with the meaning on a page and in one’s head. In order to even have a word appear on a page; wouldn’t it have to go through a writer and therefore through one’s head? How can it be separated then if it goes through both the writers head and paper; how would the meaning change. Anonymoushttps://www.blogger.com/profile/10194345390540485438noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-30183633839699799562016-02-09T06:56:08.716-08:002016-02-09T06:56:08.716-08:00CL: From the look of it, sensorimotor category le...<strong>CL: </strong> From the look of it, sensorimotor category learning for symbol grounding looks to be largely trial-and-error learning with corrective feedback (reinforcement/supervised learning) though there is also some learning from passive exposure and correlations (unsupervised learning). These are discussed next week.<br /><br />Giving a neural net a huge set of "labelled" data is the computational way of doing supervised learning, but in real-life it is trial-and-error learning with corrective feedback.<br /><br />Grounding requires being able to identify the members of the category. Words are the names of the categories. We have to learn which kinds of things are called what. So we need to learn the kinds (categories) and their names. Categories are acuired by learning to abstract the features that distinguish the members from the non-members. Unless the features are very obvious, this can only be done via supervised (trial and error) learning, guided by the consequences of doing the right or wrong thing with the right or wrong kind of thing. Unsupervised learning is unlikely to be enough, not only when the relevant features are hard to find, but also when the same things can be sorted in many different ways, hence different categorizations.<br /><br />The transition from unsupervised to supervised learning comes from the consequences of mis-categorizing. <br /><br />A dictionary is mostly the names of categories (nouns, verbs, adjectives, adverbs). Take a random sample of words from a dictionary, say 100 of them.Count how many of them are likely to have been innate. (Very few.) Now how many of the learned ones are likely to have been learnable from passive exposure alone, from their shapes and correlations, without any feedback as to what's what and what to do with what. (Let me know the percentage of each -- innate, unsupervised, supervised -- that you come up with. But count also what percentage of the learned categories is likely to have been learned indirectly from grounded verbal definitions rather than from direct sensorimotor experience. That's the portion contributed by language.)<br /><br />T3 is a long, long way off as yet...Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-13038503490091348182016-02-09T06:51:13.453-08:002016-02-09T06:51:13.453-08:00CL: From the look of it, sensorimotor category le...<strong>CL: </strong> From the look of it, sensorimotor category learning for symbol grounding looks to be largely trial-and-error learning with corrective feedback (reinforcement/supervised learning) though there is also some learning from passive exposure and correlations (unsupervised learning). These are discussed next week.<br /><br />Giving a neural net a huge set of "labelled" data is the computational way of doing supervised learning, but in real-life it is trial-and-error learning with corrective feedback.<br /><br />Grounding requires being able to identify the members of the category. Words are the names of the categories. We have to learn which kinds of things are called what. So we need to learn the kinds (categories) and their names. Categories are acuired by learning to abstract the features that distinguish the members from the non-members. Unless the features are very obvious, this can only be done via supervised (trial and error) learning, guided by the consequences of doing the right or wrong thing with the right or wrong kind of thing. Unsupervised learning is unlikely to be enough, not only when the relevant features are hard to find, but also when the same things can be sorted in many different ways, hence different categorizations.<br /><br />The transition from unsupervised to supervised learning comes from the consequences of mis-categorizing. <br /><br />A dictionary is mostly the names of categories (nouns, verbs, adjectives, adverbs). Take a random sample of words from a dictionary, say 100 of them.Count how many of them are likely to have been innate. (Very few.) Now how many of the learned ones are likely to have been learnable from passive exposure alone, from their shapes and correlations, without any feedback as to what's what and what to do with what. (Let me know the percentage of each -- innate, unsupervised, supervised -- that you come up with. But count also what percentage of the learned categories is likely to have been learned indirectly from grounded verbal definitions rather than from direct sensorimotor experience. That's the portion contributed by language.)<br /><br />T3 is a long, long way off as yet...Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-88723642197920994682016-02-09T06:30:00.342-08:002016-02-09T06:30:00.342-08:00WB: I"m not sure what you mean by "knowl...<strong>WB:</strong> I"m not sure what you mean by "knowledge" (data? skill? felt data/skill?), "experience" (felt? unfelt?), "percept" (?). And we don't make symbols, but we do agree on words to name categories we have learned to identify.<br /><br />Blind people don't see things, but they sense them with other senses and with the help of analogy. Blind people can't see color but they can talk about it, just as we can talk about echolocation in bats.<br /><br />"Representation" is a weasel word. It refers to something-or-other in the brain, but we have no idea yet what, or how...<br /><br /><strong>LK:</strong> Yes, symbol grounding is related to what we call "semantic memory" -- but to "episodic memory" and "procedural memory" too..<br /><br />Yes, the same category can be identified using different features (where that is possible). It is another form of "<a href="http://plato.stanford.edu/entries/scientific-underdetermination/" rel="nofollow">underdetermination</a>."<br /><br />Yes, neural nets can play a role in feature extraction and category learning but, no, connectionism does not yet "explain" anywhere near as much through concrete causal models as it seems to in vague metaphors.Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-24572010856875133672016-02-09T06:09:52.723-08:002016-02-09T06:09:52.723-08:00RW: Before you can imagine shapes, you have to gro...<strong>RW:</strong> Before you can imagine shapes, you have to ground real shapes. <br /><br />(Robots are not "programmed." And computers execute algorithms, which, if they are algorithms for learning, can change as a result of practice or input. It is misleading to imagine that they are "programmed" in the movie sense, which means compelled to do this or that in advance. Learning changes them.)<br /><br />Neural nets may help in learning and feature abstraction, hence in grounding.<br /><br /><strong>AH:</strong> In the cruel experiment you describe, there is no "program" in the computational sense involved. They manipulated the little victim's brain dynamically -- and got a published article out of it. <br /><br />Yes, in principle, T3 could have "false memories" (a non-existent virtual past -- see reply to Anastasia above) but in practice there's no need for it in real reverse-engineering aimed at T3: Real learning capacity in real time is enough. (But neural nets, though they may help in some functions, like feature learning, certainly can't do the whole job.)<br /><br />You are also using the verb "to program" too loosely. It just refers to implementing an algorithm, not to dynamic processes, nor to "mind control."<br /><br />Kid-sib couldn't follow what you were saying about concepts...<br /><br />Meaning = T3 grounding + feeling.Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-20949699244508518012016-02-09T05:38:21.109-08:002016-02-09T05:38:21.109-08:00TA: This is how we have reduced dictionaries to th...<strong>TA:</strong> This is how we have reduced dictionaries to their minimal grounding sets: <br /><a href="https://www.newscientist.com/article/mg21929322-700-why-your-brain-may-work-like-a-dictionary/" rel="nofollow">https://www.newscientist.com/article/mg21929322-700-why-your-brain-may-work-like-a-dictionary/</a> <br />(Is this what you were referring to, or did you ask the question without knowing anything about this?)<br /><br />In a week or so, everyone in the class will get a chance to play the dictionary game at: <br /><a href="http://lexis.uqam.ca:8080/dictGame/" rel="nofollow">http://lexis.uqam.ca:8080/dictGame/</a> <br />(Not this week: it's being updated.)<br /><br /><strong>EJ:</strong> You couldn't ground T3 with just one word (category). You'd need to ground enough words directly so that all the rest can be grounded indirectly out of definitions made up of already grounded words. The question of how many words and which ones is a question for research. (We're working on it. We can already say that the minimal grounding set is under 2000 words but it is not unique: there are many possible grounding sets consisting of different combinations of words. It's also almost certain that we don't actually ground one minimal grounding set directly through sensorimotor experience and then do everything else verbally using those words. We probably keep grounding some categories directly throughout life; but the more verbal and bookish or textish ones among us might do more of it the indirect, "virtual" way...).<br /><br />Probably the right way to put it is not that "meaning does not necessarily mean understanding" but rather that <em>grounding does not necessarily mean meaning</em>. (Apart from that, "meaning," "understanding," "thinking," "intelligence," "intentionality" etc. all just refer to the fact that cognition is not just "done" [the easy problem] but also felt [the hard problem].)<br />Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-62698534609057500952016-02-09T05:04:27.371-08:002016-02-09T05:04:27.371-08:00Computers don't "recognize" symbols,...Computers don't "recognize" symbols, they manipulate them (by implementing a computer program). Computers are not robots or vice versa. Grounded T3 robots do recognize the referents of their symbols.<br /><br />"Intentionality" is a <a href="https://www.google.ca/?gfe_rd=cr&ei=r-K5VovxDaeC8QeXtJuIDA&gws_rd=ssl#safe=active&q=harnad+%22weasel+word%22" rel="nofollow">weasel-word</a> (a mixture of intending, meaning, and feeling: but intending and meaning are already felt states). It explains absolutely nothing.<br /><br />It is part of T3 to have the capacity to learn. Renuka & Riona can learn and "prepare." It is true, however, that if learning something changes the state of a learner (whether human or T3) and imparts new information across a period of time, then, in principle, the end result could also have been implanted directly, rather than in real time. That's why I say that R & R were built in MIT 2 years ago. They have a "history," but it is a virtual or fictional history. In practice, of course real experience and real time is the way brains are changed and information is acquired, but, in principle, there is nothing sacred about real time -- the past, that is. T3 does require real-time learning capacity in the future, however. (Turing also talked about making a child T3 and letting it "grow up" through real-time development and learning.) If and when a real T3 is developed, a fictional past will probably be completely unnecessary and it can be Turing-tested on its own terms: as an MIT-built robot whom we are interacting with for a lifetime to see whether it really does have the full capacity to behave and speak indistinguishably from a real person. To the extent that real-time experience is needed for certain capacities, the T3 must have it, and can use and develop it in real time, just as people do.<br /><br /><em>The Turing Test is not a trick.</em> It is a test of whether the real capacity to do what thinking people can do has been successfully reverse-engineered.<br /><br />I don't think <a href="http://users.ecs.soton.ac.uk/harnad/Temp/steelscom.pdf" rel="nofollow">Steels</a> has a very good grasp of the symbol grounding problem. ("Stevan Says.")Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-80417158717593123692016-02-08T18:53:31.652-08:002016-02-08T18:53:31.652-08:00JP: It's not that the referent has to be "...<strong>JP:</strong> It's not that the referent has to be "represented." The referent is an outside object, and the symbol has to be connected to it by the T3 capacity to recognize it, manipulate it, categorize it, name it.<br /><br />Neurons do whatever they do. The fact that they can be "represented" by 0/1 is just the fact that they (like almost any object and process) can be simulated by computation. That does not mean that real neurons are just implementing a computation. They could be part of a dynamical system.<br /><br />It is not clear that the projections of objects on our sensory surfaces are digitized and then processed computationally. The projection can be part of a dynamical system. (And the sensory surfaces and motor effectors have to be dynamic.)<br /><br />What's needed to ground a symbol that refers to an object or a category of objects is the sensorimotor feature detectors that can recognize, manipulate, and categorize the symbol's referent through sensorimotor (robotic, T3) interactions with it. These interactions are dynamic, not computational.<br /><br />It's not consciousness (feeling) that's needed for grounding, it's sensorimotor interactions and categorization. No one knows what feeling is for, just what doing is for.<br /><br /><strong>AH:</strong> No, computation alone cannot do symbol grounding at all. Symbol grounding is necessarily hybrid: sensorimotor/symbolic.<br />Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-66495797990185313922016-02-08T18:37:12.863-08:002016-02-08T18:37:12.863-08:00JP: The question is not whether Searle understands...<strong>JP:</strong> The question is not whether Searle understands that he is manipulating symbols but whether he understands Chinese.<br /><br /><strong>EJ:</strong> Not all word meanings need to be directly grounded. A definition or description is enough, as long as all of its words are grounded (directly, or via grounded definitions, etc.)<br /><br />The only other understander in Searle is the one speaking the Chinese, not all the outside pen-pals that are sending the letters. But Searle's point is that there is not other understander in his head, just Searle himself, doing exactly what he says he's doing: manipulating meaningless symbols based on their shapes according to rules he memorized.<br /><br />The symbol manipulation rules just are not translations of Chinese into English or vice versa. They are just rules for manipulating symbols based on their shapes: if squiggle then sqoggle.<br /><br />We don't know if grounding is enough for meaning, which is also something felt, not just "done."Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-81576360616265659812016-02-08T18:17:58.880-08:002016-02-08T18:17:58.880-08:00Yes, sometimes we don't know what a word or na...Yes, sometimes we don't know what a word or name refers to. So what? To find out, you ask some more questions, or do some more looking...<br /><br />When you read a sentence in a book, you know what it means, but the book doesn't. It doesn't mean anything to the book. Ditto for the words spoken by a T2 computer. It doesn't mean anything to the computer, So what's actually going on in your head can't just be T2 computation, like what's going on in the T2 computer, otherwise the words wouldn't mean anything to you either (the way the Chinese symbols don't mean anything to Searle).<br /><br />To ground the words in your head in their referents, you need to be a T3 robot able to detect and interact with those referents. But unlike for T2 computation, for a grounded T3 hybrid robot, there is no "Searle's Periscope" to check whether T3 really understands (i.e., whether the words mean anything to T3). So there's no way to know whether T3 has meaning, or just grounding.<br /><br />And even if there were a divine periscope, which a god used to check whether T3 really understands, and the god assured you that, yes, T3 really does understand and feel, that still wouldn't solve the "hard problem" of explaining <em>how and why</em> it feels, rather than just acts T3-indistinguishably from someone that feels. So although T3 (or T4) solves the "easy problem" of explaining how and why we can do what we can do, it cannot explain how and why we feel (hence why grounding either isn't enough for understanding or meaning, or it is enough, but we just can't explain how or why).<br /><br />Unconscious "understanding," until further notice, is just T3 grounding, whereas the question is whether (and how and why) grounding is understanding. Calling it "unconscious (unfelt) understanding" just begs the question (and, if you think about it, it actually says nothing at all: it just renames T3 grounding as "unconscious understanding."Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-56464033535585872016-02-08T16:15:30.908-08:002016-02-08T16:15:30.908-08:00Yes, sometimes we don't know what a word or na...Yes, sometimes we don't know what a word or name refers to. So what? To find out, you ask some more questions, or do some more looking...<br /><br />When you read a sentence in a book, you know what it means, but the book doesn't. It doesn't mean anything to the book. Ditto for the words spoken by a T2 computer. It doesn't mean anything to the computer, So what's actually going on in your head can't just be T2 computation, like what's going on in the T2 computer, otherwise the words wouldn't mean anything to you either (the way the Chinese symbols don't mean anything to Searle).<br /><br />To ground the words in your head in their referents, you need to be a T3 robot able to detect and interact with those referents. But unlike for T2 computation, for a grounded T3 hybrid robot, there is no "Searle's Periscope" to check whether T3 really understands (i.e., whether the words mean anything to T3). So there's no way to know whether T3 has meaning, or just grounding.<br /><br />And even if there were a divine periscope, which a god used to check whether T3 really understands, and the god assured you that, yes, T3 really does understand and feel, that still wouldn't solve the "hard problem" of explaining <em>how and why</em> it feels, rather than just acts T3-indistinguishably from someone that feels. So although T3 (or T4) solves the "easy problem" of explaining how and why we can do what we can do, it cannot explain how and why we feel (hence why grounding either isn't enough for understanding or meaning, or it is enough, but we just can't explain how or why).<br /><br />Unconscious "understanding," until further notice, is just T3 grounding, whereas the question is whether (and how and why) grounding is understanding. Calling it "unconscious (unfelt) understanding" just begs the question (and, if you think about it, it actually says nothing at all: it just renames T3 grounding as "unconscious understanding."Instructorhttps://www.blogger.com/profile/08246824164400922565noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-83028183945753948132016-02-08T12:35:55.093-08:002016-02-08T12:35:55.093-08:00Hey Chris, thanks for the response. I get confuse...Hey Chris, thanks for the response. I get confused over the exact meaning (ba-dum, tss) of certain words as well, especially since different authors tend to use them in different contexts. I'm glad someone is down in the trenches with me trying to hash it out though. Anyways, on the two different meanings of understanding, I think the first one you mention, understanding = “capacity to interpret symbols,” is simply symbol grounding. The two terms describe the same concept, which is best illustrated by another point you make:<br /><br /><i> It seems to me that the main difference between symbol manipulation (computation) and "understanding" is the addition of sensorimotor capacities, </i><br /><br />Sensorimotor capacities are what takes us from basic symbol manipulation to symbol grounding. Now this “understanding” (symbol grounding) is necessary for meaning, but not sufficient. What else is required then? Consciousness, which is where the second “understanding” (feeling of understanding) comes into play. So you're right about the T3 robot you mention at the end of your comment:<br /><br /><i> A T3 robot capable of doing what I've described, and passing the Turing Test would surely "understand" or be able to interpret symbols as they are related to their referents, but its "conscious understanding" would remain impenetrable to us because of the other-minds problem. </i><br /><br />A T3-passing robot is definitely “understanding” when using the first definition (symbol grounding), and we can't know if it is “understanding” when using the second definition (feeling of understanding). Because of this, we can't know if there is actually meaning going on there. As Dr. Harnad points out at the end of his article:<br /><br /><i> for it is possible that even a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a Zombie, with no one home, feeling feelings, meaning meanings. </i> (page 4)<br /><br />Piecing this all together, I'd say meaning requires both forms of “understanding.” There needs to be the “capacity to interpret symbols,” <b>as well as</b> the “feelings of understanding.” And because of this second requirement, meaning <b>is</b>, in fact, “necessarily contingent upon consciousness.”Alex Heberthttps://www.blogger.com/profile/01977588260839672797noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-6200947036544966542016-02-08T06:45:56.667-08:002016-02-08T06:45:56.667-08:00I agree... I think this might be confusing conscio...I agree... I think this might be confusing consciousness with the ability to detect sensory input, which clearly is not the same thingAnonymoushttps://www.blogger.com/profile/03541247890693616412noreply@blogger.comtag:blogger.com,1999:blog-8019050899542469366.post-4246943920793226052016-02-08T05:20:41.278-08:002016-02-08T05:20:41.278-08:00Hi Hillary,
I liked that you linked to symbol and ...Hi Hillary,<br />I liked that you linked to symbol and tree falling questions together, because they both follow the same logic:<br />When a tree falls down and there’s no one there to hear it, it does NOT make a sound. A sound is defined as a traveling air pressure fluctuation received by an eardrum. The tree falling does physically create a pressure fluctuation, but if there’s no one there to receive the result of the fluctuation then there’s no sound.<br />The same thing applies to symbols on a piece of paper: they need to not only contain potential for meaning, but that meaning is only realized once there is a mind present to receive/interpret it. So I would say that squiggles on a piece of paper have no meaning. Instead, meaning arise from a mind’s interpretation of those squiggles.<br />And this all ties in to consciousness once again. A digital recorder can store the “sound” of the tree falling, it still won’t be a sound until an eardrum perceives it. That does not mean a computer can’t perform measures on it. Likewise, a computer can analyze squiggles and output the correct squaggles, but there’s no meaning until a human mind receives said squiggles. Therefore I don’t think I would agree that there’s any understanding going on in the computer in the first place… (that is, I’m not quite sure I agree with your definition of understanding/meaning)<br />Anonymoushttps://www.blogger.com/profile/07258603697722601029noreply@blogger.com