Classifying Genetic Essentialist Biases using Large Language Models

Abstract

The rapid rise of generative AI, including LLMs, has prompted a great deal of concern, both within and beyond academia. One such concern is that generative models embed, reproduce, and thereby potentially perpetuate all manner of bias. The present study offers an alternative perspective: exploring the potential of LLMs to detect bias in human-generated text. Our target is genetic essentialism in obesity discourse in Australian print media. We develop and deploy an LLM-based classification model to evaluate a large sample of relevant articles (n=26,163). We show that our model detects genetic essentialist biases as reliably as human experts, and we find that, while genes figure less prominently in popular discussions of obesity than previous work might suggest, when genetic information is invoked, it is often presented in a biased way. Implications for future work are discussed.
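
The abstract does not describe how the classification model is implemented, but the general pattern of LLM-based content coding it refers to is straightforward. The following is a minimal, hypothetical sketch of such a pipeline in Python, assuming the OpenAI chat completions API; the model name, label set, and prompt wording are illustrative assumptions, not the authors' method.

    # Hypothetical sketch of an LLM-based genetic-essentialism classifier.
    # Prompt wording, labels, and model choice are illustrative assumptions;
    # the paper does not disclose its actual implementation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "You are a content analyst. Given a news article about obesity, "
        "answer with exactly one label:\n"
        "NO_GENETICS  - genes are not mentioned as a cause of obesity\n"
        "NEUTRAL      - genes are mentioned without essentialist framing\n"
        "ESSENTIALIST - genes are framed as fixed, deterministic causes\n"
    )

    def classify_article(text: str, model: str = "gpt-4o-mini") -> str:
        """Return NO_GENETICS, NEUTRAL, or ESSENTIALIST for one article."""
        response = client.chat.completions.create(
            model=model,
            temperature=0,  # deterministic output for reproducible coding
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text[:12000]},  # truncate long articles
            ],
        )
        return response.choices[0].message.content.strip()

Setting temperature to zero keeps the labelling deterministic, which matters when agreement with human expert coders is being measured, as the abstract reports; in practice one would also validate the model's labels against a hand-coded subsample before scaling to the full corpus.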

Links

PhilArchive

Similar books and articles

You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.

Analytics

Added to PP
2024-12-01

Author Profiles

Jack Chan
National University of Singapore
Ritsaart Reimann
Macquarie University
Kate Lynch
University of Melbourne
1 more
