Abstract
Many of the benefits and burdens we might experience in our lives — from bank loans to bail terms — are increasingly decided by institutions relying on algorithms. In a sense, this is nothing new: algorithms — instructions whose steps can, in principle, be mechanically executed to solve a decision problem — are at least as old as allocative social institutions themselves. Algorithms, after all, help decision-makers to navigate the complexity and variation of whatever domains they are designed for. In another sense, however, this development is startlingly new: not only are algorithms being deployed in ever more social contexts, they are being mechanically executed not merely in principle, but pervasively in practice. Due to recent advances in computing technology, the benefits and burdens we experience in our lives are now increasingly decided by automata, rather than by each other. How are we to morally assess these technologies? In this chapter, I propose a preliminary conceptual schema for identifying and locating the various injustices of automated, algorithmic social decision systems, from a broadly liberal perspective.