Abstract
This contribution starts from the assumption that producing algorithmic outcomes that disadvantage one or more stakeholder groups is not the only way a recommender system can be unfair; additional forms of structural injustice should be considered as well. After describing different ways in which digital labor is supplied, namely as waged labor or as consumer labor, the chapter shows that the current design of recommender systems necessarily requires digital labor for training and tuning, making its exploitation a structural issue. The chapter then presents several fairness concerns raised by the exploitation of digital labor. These regard, among other things, the unequal distribution of the value produced, the poor working conditions of digital laborers, and the fact that many individuals are unaware of their condition as laborers. To address this structural fairness issue, compensatory measures are not adequate; rather, a structural change in the way training data are collected is necessary.