Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour

British Journal for the Philosophy of Science 74 (3):681-712 (2023)

Abstract

The last five years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that (iv) merely reward-hack rather than learning real solutions to problems. This article rebuts these criticisms by exposing the comparative bias within them, in the process extracting some more general lessons that may also be useful in future debates.


Links

PhilArchive


Similar books and articles

Big Data and Deep Learning Models. Daniel Sander Hoffmann - 2022 - Principia: An International Journal of Epistemology 26 (3):597-614.
Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
Neural networks, AI, and the goals of modeling. Walter Veit & Heather Browning - 2023 - Behavioral and Brain Sciences 46:e411.
Classification of Real and Fake Human Faces Using Deep Learning. Fatima Maher Salman & Samy S. Abu-Naser - 2022 - International Journal of Academic Engineering Research (IJAER) 6 (3):1-14.

Analytics

Added to PP
2021-04-21
