Moral Engagement and Disengagement in Health Care AI Development

AJOB Empirical Bioethics 15 (4):291-300 (2024)

Abstract

Background: Machine learning (ML) is increasingly used in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility for mitigating potential harms onto ML developers. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about the perspectives of developers themselves regarding their obligations to mitigate harms.

Methods: We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.

Results: Participants varied widely in their perspectives on personal responsibility, offering examples of both moral engagement and disengagement in a variety of forms. While most participants (70%) made a statement indicative of moral engagement, the majority of these statements reflected only an awareness of moral issues; a subset included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement reflecting efforts to minimize potential harms or to deflect personal responsibility for preventing or mitigating them.

Conclusions: These findings suggest possible facilitators of and barriers to the development of ethical ML, which could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.
