How Could We Know When a Robot was a Moral Patient?

Cambridge Quarterly of Healthcare Ethics 30 (3):459-471 (2021)

Abstract

Within machine ethics, there is growing interest in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualifies as having this status. The paper surveys five possible criteria that might be applied: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the last of these offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.


Links

PhilArchive


Author's Profile

Henry Shevlin
Cambridge University
