Abstract
The ongoing debate about reliance and trust in artificial intelligence (AI) systems continues to challenge our understanding and application of these concepts in human-AI interactions. In this work, we argue for a pragmatic approach to defining reliance and trust in AI. Our approach is grounded in three expectations that should guide human-AI interactions: appropriate reliance, efficiency, and motivation by objective reasons. By focusing on these expectations, we show that reliance can be reconciled with trust in a manner that is both theoretically sound and practically useful. Doing so requires reframing trust in AI as a quantitative property of reliance, one that varies inversely with the resources invested during interaction with the system. Our reliance-centered framework does not dismiss the concept of trust in AI but repositions it as a key property of reliance, offering a practical alternative to rational or motivational accounts. As AI continues to integrate into society, particularly in high-stakes environments such as healthcare, our pragmatic approach provides a meaningful framework for addressing the nuances of trust in AI.