Abstract
Distributed Constraint Optimization Problems (DCOPs) provide a framework for efficiently solving multi-agent
coordination tasks. However, their black-box nature often limits transparency and trust in decision-making
processes. This paper explores methods to enhance interpretability in DCOPs, leveraging explainable AI (XAI)
techniques. We introduce a novel approach incorporating heuristic explanations, constraint visualization, and model-agnostic methods to provide insights into DCOP solutions. Experimental results demonstrate that our method improves
human understanding and debugging of DCOP solutions while maintaining solution quality.