Personal AI, deception, and the problem of emotional bubbles

AI and Society:1-12 (forthcoming)

Abstract

Personal AI is a new type of AI companion, distinct from the prevailing forms of AI companionship. Instead of playing a narrow and well-defined social role, like friend, lover, caretaker, or colleague, with a set of pre-determined responses and behaviors, Personal AI is engineered to tailor itself to the user, including learning to mirror the user’s unique emotional language and attitudes. This paper identifies two issues with Personal AI. First, like other AI companions, it is deceptive about the presence of its emotions, which undermines the moral value of companionship. Second, Personal AI leads to a distinctly new form of deception concerning the origins of its emotions. Its emotional attitudes appear to belong to it, when in fact they are only reflections of the user. This results in what I dub “emotional bubbles”—the false impression that personal emotions are externally validated—which has at least two troubling implications. First, emotional bubbles prevent us from encountering emotional attitudes that differ from our own, which is likely to cripple emotional growth and the ability to form diverse social and emotional relationships. Second, if we assume, as some philosophers claim, that shared emotions are constitutive of shared values, it follows that Personal AI subverts joint moral deliberation: users believe their personal values are externally validated, when they are in fact validated only by themselves. In the absence of technovirtues able to handle this problem, I suggest that we proceed very cautiously with the development and marketing of Personal AI.


