Symbol grounding problem
The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words (symbols in general) get their meanings,[1] and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experience?

Definitions

The symbol grounding problem

In his 1990 paper, Stevan Harnad implicitly gives several definitions of the symbol grounding problem:[2]
To answer the question of whether or not groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required. The symbol grounding problem is the problem of how to make the "semantic interpretation of a formal symbol system" "intrinsic to the system, rather than just parasitic on the meanings in our heads", grounded in something other than just "other meaningless symbols".[2]

Symbol system

In the same 1990 paper, Harnad lays out the definition of a "symbol system" relative to the symbol grounding problem. As defined by Harnad, a "symbol system" is "a set of arbitrary 'physical tokens' (scratches on paper, holes on a tape, events in a digital computer, etc.) that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens."[2] A small illustrative sketch of such a rule-governed token system is given at the end of this section.

Formality of symbols

Harnad describes the symbol grounding problem as exemplified in John R. Searle's Chinese Room argument.[3] Accordingly, the sense in which the symbols of a formal symbol system are "formal" can be interpreted from Searle's 1980 article "Minds, brains, and programs", in which the Chinese Room argument is described.
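To make the notion of a purely formal symbol system concrete, the following minimal sketch (in Python, with tokens and rules invented purely for illustration; nothing here comes from Harnad's or Searle's texts) manipulates tokens solely on the basis of their shapes, according to explicit rules. Any semantic interpretation of the tokens exists only for an outside observer, not for the system itself.

```python
# Minimal sketch (illustrative only): a "symbol system" in Harnad's sense is just
# tokens manipulated by explicit, shape-based rules. The tokens and rule table
# below are invented; nothing in the system knows what the tokens mean.

# Rewrite rules operate purely on token shape (string identity), not meaning.
RULES = {
    ("ma", "li"): "hao",
    ("ni", "hao"): "ma",
}

def rewrite(tokens):
    """Apply the first matching rule to a pair of adjacent tokens."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in RULES:
            return tokens[:i] + [RULES[pair]] + tokens[i + 2:]
    return tokens

# The system produces well-formed outputs, but any interpretation of "hao" as
# meaning something exists only in the heads of outside observers.
print(rewrite(["ni", "ma", "li"]))   # ['ni', 'hao']
```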
Background

Referents

A referent is the thing that a word or phrase refers to, as distinguished from the word's meaning.[5] This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.

Referential process

In the 19th century, the philosopher Charles Sanders Peirce suggested what some[who?] think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis.[6] Some[who?] have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[7] In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem.[8]

Grounding process

There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: if one tried to look up the meaning of a word one did not understand in a dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another; one's search for meaning would be ungrounded (this regress is illustrated in the sketch below). In contrast, the meanings of the words in one's head—those words one does understand—are "grounded".[citation needed] That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.[9][10]
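As an illustration of that dictionary regress, the following sketch (with a made-up miniature "dictionary" of nonsense words; not drawn from any cited source) follows definitions from one unknown word to the next and simply detects that the lookup cycles without ever bottoming out in anything already understood.

```python
# Minimal sketch (illustrative only): looking up an unknown word in a dictionary
# written entirely in words one does not understand just leads to more lookups.
# The toy dictionary below is invented for illustration.
DICTIONARY = {
    "zork": "a gleebish frumple",
    "gleebish": "like a frumple, but zork-coloured",
    "frumple": "any zork that is gleebish",
}

def lookup_chain(word, understood=frozenset(), max_steps=10):
    """Follow definitions of unknown words, reporting whether the search is grounded."""
    seen = []
    current = word
    for _ in range(max_steps):
        if current in understood:
            return f"'{current}' is already grounded; the search stops here."
        if current in seen:
            return f"Cycle detected: {' -> '.join(seen + [current])} (ungrounded)."
        seen.append(current)
        definition = DICTIONARY.get(current, "")
        # Pick the next dictionary word appearing in the definition, if any.
        unknown = [w for w in definition.replace(",", "").split() if w in DICTIONARY]
        if not unknown:
            return f"No further dictionary words to follow from '{current}'."
        current = unknown[0]
    return "Gave up after too many steps (still ungrounded)."

print(lookup_chain("zork"))                          # cycles: zork -> gleebish -> frumple -> zork
print(lookup_chain("zork", understood={"frumple"}))  # reaches a grounded word and stops
```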
Requirements for symbol grounding

Another symbol system is natural language.[11] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has suggested two properties that might be required to make this difference:[citation needed]

Capacity to pick out referents
One property that the brain possesses, but that static paper or, usually, even a dynamic computer lacks, is the capacity to pick out symbols' referents. This is what was discussed earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property. To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations. The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. The symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts.[12]

Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "procedural semantics", described in a number of papers including "Procedural Semantics" by Philip N. Johnson-Laird[13] and expanded by William A. Woods in "Meaning and Links".[14] A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done." A sketch of this idea in code is given below.
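The following minimal sketch illustrates the procedural-semantics idea (in Python; the particular predicates and the toy "world" of sensed objects are invented for illustration and do not come from Johnson-Laird's or Woods' papers): the meaning of a noun is represented as a procedure for recognizing instances, and the meaning of a proposition as a procedure that evaluates to true or false.

```python
# Minimal sketch (illustrative only): in procedural semantics, the "meaning" of a
# noun is a recognizer procedure, and the meaning of a proposition is a procedure
# that evaluates to True or False. The toy world and predicates are invented.

# A toy perceptual "world": each object is a bundle of sensed features.
world = [
    {"name": "obj1", "shape": "round", "colour": "red", "edible": True},
    {"name": "obj2", "shape": "round", "colour": "grey", "edible": False},
    {"name": "obj3", "shape": "long", "colour": "yellow", "edible": True},
]

# Noun meanings: procedures that recognize instances from their sensed features.
def is_apple(obj):
    return obj["shape"] == "round" and obj["colour"] == "red" and obj["edible"]

def is_banana(obj):
    return obj["shape"] == "long" and obj["colour"] == "yellow" and obj["edible"]

# Proposition meaning: a procedure for determining whether it is true or false.
def there_is_an_apple(objects):
    return any(is_apple(o) for o in objects)

print([o["name"] for o in world if is_apple(o)])   # ['obj1']
print(there_is_an_apple(world))                    # True
```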
Consciousness

The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing test, which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor.[15][16] Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical perception). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1' that are directly connected to their electronic implementations, which a (any?) computer system can readily manipulate (and thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.

To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.[17] (Both routes, learning from feedback and grounding by definition, are sketched at the end of this section.) According to Harnad, grounding ultimately has to be sensorimotor, to avoid infinite regress.[18]

Harnad thus points to consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point.[19][20]
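Returning to the categorization mechanism described above, the following sketch (in Python, with an invented feature space and invented category names; not drawn from the cited sources) shows the two routes mentioned: learning a feature-detector from corrective feedback, and grounding a new category name through a definition composed only of already-grounded category names.

```python
# Minimal sketch (illustrative only): two routes to grounding a category name.
# 1) Trial-and-error induction: learn which feature distinguishes members from
#    nonmembers, guided by feedback on correct/incorrect categorization.
# 2) Verbal definition: compose a new detector out of already-grounded detectors.
# Features, samples, and category names are all invented for illustration.

def learn_detector(samples):
    """Find a (feature, value) test that perfectly separates members from nonmembers."""
    features = samples[0][0].keys()
    for f in features:
        for value in {obj[f] for obj, _ in samples}:
            rule = lambda obj, f=f, v=value: obj[f] == v
            if all(rule(obj) == label for obj, label in samples):
                return rule          # a learned sensorimotor feature-detector
    return None

# Feedback data: (sensed features, whether "doing the right thing" succeeded).
mushroom_samples = [
    ({"spots": True, "stalk": "thin"}, False),   # inedible -> negative feedback
    ({"spots": False, "stalk": "thick"}, True),  # edible   -> positive feedback
    ({"spots": False, "stalk": "thin"}, True),
]

grounded = {"edible_mushroom": learn_detector(mushroom_samples)}

# Route 2: a new category grounded purely by definition, but only because the
# term used in the definition ("edible_mushroom") is already grounded.
grounded["safe_dinner"] = lambda obj: grounded["edible_mushroom"](obj)

print(grounded["safe_dinner"]({"spots": False, "stalk": "thick"}))  # True
print(grounded["safe_dinner"]({"spots": True, "stalk": "thin"}))    # False
```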
See also

References

Works cited
Further reading