The Flashcard Sorter - Applicability of the Chinese Room Argument to Large Language Models (Preprint)

Abstract

Does the Chinese Room Argument (CRA) apply to large language models (LLMs)? The thought experiment at the center of the CRA is tailored to Good Old-Fashioned Artificial Intelligence (GOFAI) systems. However, natural language processing has made significant progress, especially with the emergence of LLMs in recent years. LLMs differ from GOFAI systems in their design; they operate on vectors rather than symbols and do not follow a program but instead learn to map inputs to outputs. Consequently, some have suggested that the CRA is no longer relevant in discussions surrounding artificial language understanding. Contrary to these authors, I argue that if the CRA successfully demonstrates that implementing a symbolic computation is not sufficient for language understanding, then it also shows that implementing an LLM is not sufficient for language understanding. At the core of my argument lies a thought experiment called “the flashcard sorter”.
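The design contrast the abstract draws — a GOFAI system following an explicit symbolic program versus an LLM that learns a parameterised mapping on vectors — can be illustrated with a minimal sketch. The rule table, the toy one-dimensional "embeddings", and the single trainable weight below are all illustrative assumptions, not anything from the paper; they stand in for, respectively, a symbol-manipulation program and a model whose input–output behaviour is fitted from examples rather than programmed.

```python
# GOFAI-style: an explicit symbolic program -- a hand-written rule table
# mapping input symbols to output symbols, applied step by step.
RULES = {"ni hao": "hello", "zai jian": "goodbye"}

def gofai_respond(symbol: str) -> str:
    """Look the input symbol up in the rule table."""
    return RULES.get(symbol, "unknown symbol")

# LLM-style: no hand-written rules. A parameterised map on vectors
# (here a single weight on 1-D inputs) whose parameter is adjusted
# from example pairs by gradient descent on squared error.
def train(pairs, steps=1000, lr=0.1):
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])  # fit y = 2x from data alone

print(gofai_respond("ni hao"))  # output produced by following a rule
print(round(w * 3.0, 2))        # output produced by a learned mapping
```

The point of the sketch is only that the second system's behaviour comes from adjusted numerical parameters, not from rules anyone wrote down — the design difference the abstract says LLMs have over GOFAI systems.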


Links

PhilArchive

Analytics

Added to PP
2025-02-18

Author's Profile

Johannes Brinz
Universität Osnabrück

Citations of this work

No citations found.

References found in this work

John Searle (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417–457.
Stevan Harnad (1990). The symbol grounding problem. Physica D 42: 335–346.
Paul M. Churchland & Patricia S. Churchland (1990). Could a machine think? Scientific American 262 (1): 32–37.
