Abstract
The application of AI in medicine (AIM) is making health practices more reliable, accurate and efficient than traditional medicine (TM) by partially or fully assisting medical decision-making, for example through the use of deep learning in diagnostic imaging, treatment planning or preliminary diagnosis. Yet most of these AI systems are pure “black boxes”: the practitioner knows the inputs and outputs of the system but has no access to what happens “inside” it and cannot offer an explanation. This creates an opaque process that culminates in a “trust gap” at two levels: (a) between patients and medical experts; and (b) between the medical expert and the medical process itself. The result is a “black-box medicine”, since the practitioner must rely (epistemically) on AI systems that are more accurate, faster and more efficient than any human expert or group of experts, but that are not (epistemically) transparent and offer no explanation of their outputs. In this paper, we introduce some preliminary remarks on the pros and cons of three possible solutions for addressing the “trust gap” in AIM.