Predicting and Preferring

Inquiry: An Interdisciplinary Journal of Philosophy (forthcoming)

Abstract

The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety.


Similar books and articles

AI Healthcare ChatBot Using Machine Learning.Brahmtej B. Bargali & Akash S. Shinde - 2024 - International Journal of Innovative Research in Science, Engineering and Technology 13 (12):20832-20837.
Will intelligent machines become moral patients?Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.


Author's Profile

Nathaniel Sharadin
University of Hong Kong
