Abstract
Meaningful human control (MHC) is becoming an increasingly important topic in AI ethics beyond the domain of autonomous weapons systems. MHC has been conceptualized, analyzed, and applied. In this article, however, I show that current attempts at realizing MHC fall short because they skip an essential first step: deciding what machines should and should not be doing at all. We must first ensure that the output we have delegated to a machine is appropriate; only then can the work required to realize MHC begin. Here, I argue that machines should not be evaluating; that is, we should not be delegating evaluative outputs to machines. This is practically important because machines with evaluative outputs cannot themselves be evaluated for efficacy: we cannot say how effective they are. It is ethically important because machines lack the moral agency required to make evaluations. Furthermore, machines should not be in a position to change our values, which they would be if they were evaluating. Finally, evaluations are judged by the considerations used to reach a particular judgment; contemporary AI cannot provide these justifying considerations, so we have no way of evaluating its evaluative outputs.