Superintelligence as Moral Philosopher

Journal of Consciousness Studies 24 (5-6):128-149 (2017)

Abstract

Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying it were not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings, and of much of the rest of the biological world, in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is justified. I argue that, even if such an AI is developed and even if it is able to gain enormous knowledge, there are several reasons to believe that its motivation will be more complicated than most theorists have so far supposed. In particular, I explore the relationship between a superintelligent AI's intelligence and its moral reasoning, in an effort to show that there is a realistic possibility that the AI will be unable to act, owing to conflicts among the various goals it might adopt. Although no firm conclusions can be drawn at present, I seek to show that further work is needed and to provide a framework for future discussion.


Links

PhilArchive



Analytics

Added to PP
2019-06-22


Author's Profile

Joseph Corabi
Saint Joseph's University of Pennsylvania
