Of Opaque Oracles: Epistemic Dependence on AI in Science Poses No Novel Problems for Social Epistemology

Synthese (forthcoming)

Abstract

Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there is simply nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present a counterexample from scientific practice: AlphaFold2. I argue that trust is not necessary for epistemic reliance on an opaque system, but reliability is. What matters is whether, for a given context, the reliability of a DNN has been compellingly established by empirical means, and whether there exist trustworthy researchers who have performed such evaluations adequately.


Author's Profile

Jakob Ortmann
Universität Hannover
