Abstract
As predictive risk algorithms become more commonplace, so do concerns about their use. The perceived bias, inaccuracy, and opacity of predictive risk algorithms have given rise to concerns about fairness when these algorithms are used in criminal justice contexts, especially in predicting an offender’s risk of re-offense. Opacity in these algorithms has negative consequences for citizens’ trust in their government, their ability to give informed consent, and their ability to exercise certain rights. Edwards and Veale have argued that making predictive risk algorithms transparent would go some way toward mitigating these concerns. However, transparency is not a panacea: it introduces the possibility of offenders “gaming the system”, thereby decreasing the potential accuracy and effectiveness of the system, and indeed its fairness. Complete transparency, then, does not guarantee fairness either. With this tension in mind, I explore the justifications for transparency and opacity in predictive risk algorithms. I argue that neither complete transparency nor complete opacity is desirable: certain parts of predictive risk algorithms are best kept opaque, while others should be transparent. I then propose a set of transparency conditions that should be met when operating such algorithms in the criminal justice system.