Abstract
Data-driven decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). Yet these technologies’ judgments, which reproduce patterns of wrongful discrimination embedded in the historical datasets on which they are trained, are nonetheless well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.