Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship

Philosophy and Technology 35 (1):1-20 (2022)

Abstract

In recent years, initial successes in harnessing machine learning (ML) technologies to improve medical practice and benefit patients have attracted attention across a wide range of healthcare fields. In particular, such improvement is expected to come from providing automated decision recommendations to the treating clinician. Some of the hopes placed in ML-based systems for healthcare, however, appear unwarranted, at least partly because of the systems' inherent lack of transparency, even where their results seem convincing in terms of accuracy and reliability. Skepticism arises when the physician, as the agent responsible for diagnosis, therapy, and care, cannot access how findings and recommendations are generated. There is widespread agreement that complete traceability is generally preferable to opaque recommendations; views differ, however, on how to deal with ML-based systems whose functioning remains opaque to some degree, even as so-called explainable or interpretable systems attract growing interest. This essay examines the epistemic foundations of ML-generated information in particular and of medical knowledge in general in order to argue for differentiating clinical decision-making situations according to the depth of insight they require into the process of information generation. Empirically accurate or reliable outcomes are sufficient for some decision situations in healthcare, whereas other clinical decisions require extensive insight into how ML-generated outcomes come about, because of their inherently normative implications.


Links

PhilArchive




Similar books and articles

Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.
