Moral difference between humans and robots: paternalism and human-relative reason

AI and Society 37 (4):1533-1543 (2022)

Abstract

According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. On the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call this the _equivalence thesis_). However, such a difference exists: drawing on Strawson’s account of participant reactive attitudes and Scanlon’s relational account of blame, I argue that a distinct kind of reason available to humans, which I call _human-relative reason_, is not available to robots. This difference in available reasons entails that an action is sometimes morally permissible for humans but not for robots. Therefore, when developing moral robots, we cannot consider only what humans can or cannot do. I use examples of paternalism to illustrate my argument.

Analytics

Added to PP: 2021-05-24
Downloads: 1,171 total (#15,917); 280 in the past six months (#8,194)

Author's Profile

Tsung-Hsing Ho (何宗興)
National Chung Cheng University

Citations of this work

No citations found.

References found in this work

Moral Dimensions: Permissibility, Meaning, Blame. Thomas Scanlon - 2008 - Cambridge: Belknap Press of Harvard University Press.
Emotions, Value, and Agency. Christine Tappolet - 2016 - Oxford: Oxford University Press UK.
Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford: Oxford University Press.
Persons and Bodies: A Constitution View. Lynne Rudder Baker - 2000 - New York: Cambridge University Press.
On the Morality of Artificial Agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.

(Showing 5 of 31 references.)