Abstract
Artificial intelligence (AI) has become a transformative force in the legal domain, automating complex tasks such as contract analysis, compliance checks, and legal research. However, the intersection of AI and moral decision-making exposes significant limitations. Legal systems are not merely instruments for enforcing rules; they are forums where human morality, intent, and societal impact are weighed. This paper explores a critical question: can AI truly deliver justice, or does it merely replicate historical biases encoded in its training data? Drawing on the concept of the “Moral Turing Test,” the paper argues that AI lacks the capacity for ethical reasoning and moral discretion, both of which are fundamental to adjudicating complex legal disputes. Case studies, including Estonia’s proposed AI judge for small claims and the controversial COMPAS recidivism-risk algorithm in the United States, illustrate the risks of bias and the accountability gaps in AI-driven legal decision-making. The paper advocates a Human-in-the-Loop (HITL) framework in which AI assists but does not replace human judgment, ensuring that moral reasoning and accountability remain central to the legal process. It concludes that while AI holds promise for enhancing efficiency, it must be carefully regulated to preserve the ethical foundations of justice.