Large Language Models Demonstrate the Potential of Statistical Learning in Language

Cognitive Science 47 (3):e13256 (2023)

Abstract

To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language, especially those involving computational modeling, have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the computational tools to determine empirically how much of the human language ability can be acquired from linguistic experience. LLMs are sophisticated deep learning architectures trained on vast amounts of natural language data, enabling them to perform an impressive range of linguistic tasks. We argue that, despite their clear semantic and pragmatic limitations, LLMs have already demonstrated that human-like grammatical language can be acquired without the need for a built-in grammar. Thus, while there is still much to learn about how humans acquire and use language, LLMs provide full-fledged computational models for cognitive scientists to empirically evaluate just how far statistical learning might take us in explaining the full complexity of human language.
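To make the abstract's notion of statistical learning concrete, here is a minimal, purely illustrative sketch (not from the paper): a bigram model that learns word-to-word transition statistics from raw text and generates sequences with no built-in grammatical rules. The toy corpus and function names are invented for demonstration; actual LLMs are neural architectures trained on vastly larger data, but the underlying idea, predicting what comes next from distributional statistics alone, is the same.

```python
# Illustrative sketch only: a bigram language model learns
# word-transition counts from raw text, with no built-in grammar.
import random
from collections import defaultdict, Counter

# Toy corpus, invented for demonstration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat saw the dog ."
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start="the", length=8, seed=0):
    """Sample a word sequence from the learned transition statistics."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        counts = transitions.get(words[-1])
        if not counts:
            break  # no observed successor for this word
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())  # prints a sampled sequence, e.g. "the dog sat on the mat . the"
```

Even this trivial model produces locally well-formed word order learned entirely from input statistics, which is the property the paper argues LLMs exhibit at full human-language scale.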


Similar books and articles

Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
