Liberalism and Automated Injustice

Chad Lee-Stronach (Northeastern University)

In Duncan Ivison (ed.), Research Handbook on Liberalism. Cheltenham: Edward Elgar Publishing (2024).

Abstract

Many of the benefits and burdens we might experience in our lives — from bank loans to bail terms — are increasingly decided by institutions relying on algorithms. In a sense, this is nothing new: algorithms — instructions whose steps can, in principle, be mechanically executed to solve a decision problem — are at least as old as allocative social institutions themselves. Algorithms, after all, help decision-makers to navigate the complexity and variation of whatever domains they are designed for. In another sense, however, this development is startlingly new: not only are algorithms being deployed in ever more social contexts, they are being mechanically executed not merely in principle, but pervasively in practice. Due to recent advances in computing technology, the benefits and burdens we experience in our lives are now increasingly decided by automata, rather than each other. How are we to morally assess these technologies? In this chapter, I propose a preliminary conceptual schema for identifying and locating the various injustices of automated, algorithmic social decision systems, from a broadly liberal perspective.
