Is superintelligence necessarily moral?

Analysis 84 (4):730-738 (2024)

Abstract

Numerous authors have expressed concern that advanced artificial intelligence (AI) poses an existential risk to humanity. These authors argue that we might build AI which is vastly intellectually superior to humans (a ‘superintelligence’) and which optimizes for goals that strike us as morally bad, or even irrational. This argument thus assumes that a superintelligence might have morally bad goals. However, according to some views, a superintelligence necessarily has morally adequate goals. This might be the case either because the capacity for moral reasoning and intelligence are mutually dependent, or because moral realism and moral internalism are true. I argue that the former argument misconstrues the view that intelligence and goals are independent, and that the latter argument misunderstands the implications of moral internalism. Moreover, the current state of AI research provides additional reasons to think that a superintelligence could have bad goals.



Author's Profile

Leonard Dung
Ruhr-Universität Bochum
