Abstract
In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement: Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts’, Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’ (AI-DSS), the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labour. Under the authors’ proposal, artificial decision support systems that produce automated diagnoses should serve chiefly as confirmatory tools: so long as the physician and AI agree, the matter is settled, and the physician’s initial judgement is considered epistemically justified. If, however, the AI-DSS and physician disagree, a second physician’s opinion is called upon to resolve the dispute. While the cognitive labour of the decision is shared between the physicians and the AI, the final decision remains at the discretion of the first physician, and with it the moral and legal culpability. The putative benefits of this approach are twofold: healthcare administration can improve diagnostic performance by introducing AI-DSS without the unintended by-product of a responsibility gap, and, assuming the physician and AI disagree less often than the general rate of requested second opinions, and the AI’s diagnostic accuracy supersedes or at least …
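To make the structure of the proposal concrete, the following is a minimal sketch of the rule of disagreement rendered as a decision procedure. It is an illustration, not Kempt and Nagel’s own formalisation; every name in it (`Diagnosis`, `rule_of_disagreement`, `get_second_opinion`, `reconsider`) is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Diagnosis:
    label: str   # the diagnostic category proposed
    source: str  # who produced it: 'physician', 'ai_dss', 'second_physician'

def rule_of_disagreement(
    physician_dx: Diagnosis,
    ai_dx: Diagnosis,
    get_second_opinion: Callable[[], Diagnosis],
    reconsider: Callable[[Diagnosis, Diagnosis, Diagnosis], Diagnosis],
) -> Diagnosis:
    """Illustrative sketch of the confirmatory workflow described above."""
    if physician_dx.label == ai_dx.label:
        # Agreement settles the matter: the physician's initial
        # judgement is treated as epistemically justified.
        return physician_dx
    # Disagreement triggers a second physician's opinion.
    second_dx = get_second_opinion()
    # The final decision, and with it the moral and legal culpability,
    # stays with the first physician: the second opinion is advisory
    # input to their reconsidered judgement, not a binding vote.
    return reconsider(physician_dx, ai_dx, second_dx)

# Hypothetical usage: the AI-DSS dissents, a second opinion is sought,
# and the first physician exercises discretion over the final call.
final = rule_of_disagreement(
    Diagnosis('pneumonia', 'physician'),
    Diagnosis('bronchitis', 'ai_dss'),
    get_second_opinion=lambda: Diagnosis('pneumonia', 'second_physician'),
    reconsider=lambda first, ai, second: first,  # physician stands by judgement
)
```

Note that `reconsider` is left as a free parameter precisely because the proposal leaves the resolution of disagreement to the first physician’s discretion rather than to any mechanical aggregation rule.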