AI Can Help Us Live More Deliberately

MIT Sloan Management Review 60 (4) (2019)

Abstract

Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation from false news, deceptive advertising, and political rhetoric. In this article, I consider the overarching features of this problem and provide a framework to help AI designers tackle it through system enhancements in smartphones and other products and services in the burgeoning internet of things (IoT) marketplace. The framework is informed by two ideas: psychologist Daniel Kahneman’s cognitive dual process theory and moral self-awareness theory, a four-level model of moral identity that I developed with Benjamin M. Cole.


Links

PhilArchive


Analytics

Added to PP
2019-08-31

Downloads
1,210 (#15,271)

6 months
204 (#15,366)


Author's Profile

Julian Friedland
Metropolitan State University of Denver