The perfect technological storm: artificial intelligence and moral complacency

Ethics and Information Technology 26 (3):1-12 (2024)

Abstract

Artificially intelligent machines differ in kind from all previous machines and tools. While many are used for relatively benign purposes, the machines worth focusing on are those that purport to replace humans entirely and thereby engage in what Brian Cantwell Smith calls “judgment.” As impressive as artificially intelligent machines are, their abilities are still derived from humans and so lack the sort of normative commitments that humans have. While such machines possess a great capacity for “reckoning,” to use Smith’s terminology, i.e., a calculative prowess of extraordinary utility and importance, they still lack the kind of considered human judgment that accompanies the ethical commitment and responsible action to which we humans must ultimately aspire. But a perfect technological storm is brewing. Artificially intelligent machines are analogous to a perfect storm in that they involve the convergence of a number of factors that threaten our ability to behave ethically and to maintain meaningful human control over the outcomes of processes involving artificial intelligence. I argue that this storm makes us vulnerable to what I call “moral complacency”: it is capable of lulling people into a state in which they abdicate responsibility for the decision-making and behaviour precipitated by the use of artificially intelligent machines. I focus on three salient problems that converge to make us especially vulnerable to becoming morally complacent and to losing meaningful human control. The first is the problem of transparency/opacity; the second is overtrust in machines, often referred to as automation bias; the third is the problem of ascribing responsibility. I examine each of these problems and show how, together, they threaten to render us morally complacent.

Links

PhilArchive






Author's Profile

Marten H. L. Kaas
Charité – Universitätsmedizin Berlin
