Abstract
The development and implementation of Artificial Intelligence (AI) health systems represent a great power that comes with great responsibility. Their capacity to improve and transform healthcare involves inevitable risks. A major risk in this regard is the propagation of bias throughout the life cycle of the AI system, leading to harmful or discriminatory outcomes. This paper argues that the European medical device regulations may prove inadequate to address this challenge, which is not only technical but also social. With the advent of new regulatory remedies, it appears that European policymakers also intend to reinforce the current medical device legal framework. In this paper, we analyse the different policies for mitigating bias in AI health systems included in the Artificial Intelligence Act and in the proposed European Health Data Space. As we shall see, the remedies devised by European policymakers, which rely on processing sensitive data for this purpose, may have very different effects both on privacy and on protection against discrimination. We find the focus on bias mitigation during the pre-commercialisation stages rather weak, and believe that bias control once the system has been deployed in the real world would have merited greater ambition.