Enhancing human agency through redress in Artificial Intelligence Systems

AI and Society 38 (2):537-547 (2023)

Abstract

Recently, scholars across disciplines have raised ethical, legal, and social concerns about the notion of human intervention, control, and oversight over Artificial Intelligence (AI) systems. This observation becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. We apply Nicholas Garnham's conceptual perspective on mediation to users who are challenged both individually and societally when interacting with AI-enabled systems. One way to increase user agency is to provide mechanisms for contesting faulty or flawed AI systems and their decisions, and for requesting redress. Currently, however, users structurally lack such mechanisms, which increases risks for vulnerable communities, for instance patients interacting with AI healthcare chatbots. To empower users in AI-mediated communication processes, this article introduces the concept of active human agency. We link our concept to examples of contestability and redress mechanisms and explain why these are necessary to strengthen active human agency. We argue that AI policy should introduce rights for users to swiftly contest or rectify an AI-enabled decision. Such a right would empower individual autonomy and strengthen fundamental rights in the digital age. We conclude by identifying routes for future theoretical and empirical research on active human agency in times of ubiquitous AI.

Analytics

Added to PP: 2022-06-05