Abstract
Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that the claim that machines cannot be held responsible for their actions has unacknowledged implications for the conditions under which the outputs of AI can serve as reasons for belief. Following Robert Brandom, we argue that, because the assertion of a claim is an action, moral agency is a necessary condition for the giving and evaluating of reasons in discourse. Thus, the same considerations that suggest machines cannot be held responsible for their actions also suggest that they cannot be held to account for the epistemic value, or lack of value, of their outputs. If there is a responsibility gap, there is also a “testimony gap”. An under-recognised problem with the use of AI, then, is to ensure that it does not block the attribution of testimony to human beings or lead to individuals being held responsible for claims that they have not asserted. More generally, the “assertions” of machines can serve as justifications for belief or action only where one or more people accept responsibility for them.