Abstract
Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but also posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility, and privacy, but they appear in a new and challenging guise because of our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice because of the context sensitivity of such challenges: additional (ethical) values often emerge in specific applications, and the challenges of fraud detection, for example, are very different from those around language technologies. Second, methods to tackle these challenges are discussed. The main ethical theories (virtue ethics, consequentialism, and deontology) are shown to provide a starting point, but they often lack the detail needed for actionable AI ethics. Instead, mid-level philosophical theories coupled with design approaches such as Design for Values, together with interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated into the design of AI systems.