Abstract
This paper critically investigates the explainable artificial intelligence (XAI) project. I analyze the word “explain” as used in XAI against the theory of explanation and identify a discrepancy between the meaning of explanation claimed to be necessary and the one actually provided. After summarizing the history of AI as it relates to explainability, I argue that American philosophy of the 1900s operated in the background of that history. I then extract the meaning of explanation with respect to XAI in order to elucidate the relationship among AI, logic, and the theory of explanation. In so doing, I aim to reveal both the content of DARPA’s surreptitious definitional retreat and its formal fallacy of sophisma figurae dictionis, drawing on Kant’s account of paralogism. I conclude that this intentional fallacy preexists the XAI project and that the presumptuous use of reason that Kant criticizes underlies it.