What machines shouldn’t do

AI and Society:1-12 (forthcoming)

Abstract

Meaningful human control (MHC) is becoming an important topic in AI ethics beyond the domain of autonomous weapons systems. MHC has been conceptualized, analyzed, and applied. In this article, however, I show that current attempts at realizing MHC have fallen short because we have not taken the crucial first step of deciding what machines should and should not be doing in the first place. We must first ensure that the output we have delegated to a machine is appropriate; only then should we do the work required to realize MHC. Here, I argue that machines should not be evaluating, that is, we should not delegate evaluative outputs to machines. This is practically important because machines with evaluative outputs cannot be evaluated for efficacy: we cannot say how effective they are. It is ethically important because machines lack the moral agency required to make evaluations. Furthermore, machines should not be in a position to change our values, which they would be if they were evaluating. Finally, evaluations are judged by the considerations used to reach a particular judgment. Contemporary AI cannot provide these justifying considerations, so we have no way of evaluating its evaluative outputs.

