Abstract
Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns ‘design cases’. Design cases are theoretical examples of agents that appear to lack responsibility because they were designed; philosophers use these cases to explore the relationship between design and responsibility. This paper presents several replies to design cases from the responsibility literature and uses those replies to situate the corresponding positions on the design and responsibility of artificial agents in machine ethics. I argue that each reply can support the design of responsible agents. However, each reply also imposes constraints of differing severity on the design of responsible agents. I offer a brief discussion of the nature of those constraints, highlighting the challenges particular to each reply. I conclude that designing responsible agents is possible, with the caveat that the difficulty of doing so will vary according to one’s favoured reply to design cases.