Large Language Models and Biorisk

American Journal of Bioethics 23 (10):115-118 (2023)

Abstract

We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access controls on biomedical AI. We conclude with a suggestion about future research directions in bioethics.


Authors

Nathaniel Sharadin
University of Hong Kong
Harry R. Lloyd
Yale University
