Meaning in Artificial Agents: The Symbol Grounding Problem Revisited

Minds and Machines 22 (1):25-34 (2012)

Abstract

The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have attempted to understand the problems posed by this thought experiment. Throughout this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding, as proposed by Harnad, as a solution to the Chinese room argument. The main thesis of this paper is that, although related, these two issues present different problems within the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle's notion of intentionality and Harnad's Symbol Grounding Problem.


Author's Profile

Dairon Rodríguez
Universidad Industrial de Santander

References found in this work

John Searle (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457.
Bertrand Russell (1905). On Denoting. Mind 14 (56): 479-493.

(2 of 14 references shown.)