The Ethics of Artificial Intelligence in Medicine: Preliminary Remarks

Global Philosophy 35 (1):1-17 (2024)

Abstract

The application of AI in medicine (AIM) is producing health practices that are more reliable, accurate and efficient than traditional medicine (TM) by partially or totally assisting medical decision-making, for example through the use of deep learning in diagnostic imaging, treatment planning or preliminary diagnosis. Yet most of these AI systems are pure “black boxes”: the practitioner understands the inputs and outputs of the system but has no access to what happens “inside” it and cannot offer an explanation, creating an opaque process that culminates in a “trust gap” on two levels: (a) between patients and medical experts; (b) between the medical expert and the medical process itself. The result is a “black-box medicine”, since the practitioner ought to rely (epistemically) on AI systems that are more accurate, fast and efficient than any human expert or group of experts but are not (epistemically) transparent and do not offer any kind of explanation. In this paper, we introduce some preliminary remarks on the pros and cons of three possible solutions for dealing with the “trust gap” in AIM.


Links

PhilArchive

Analytics

Added to PP
2025-01-05


Author's Profile

Steven Gouveia
University of Porto

