A Sleight of Hand

Journal of Medical Ethics 50 (12):825-826 (2024)

Abstract

Jecker et al1 offer a valuable analysis of risk discussion in relation to Artificial Intelligence (AI) and in the context of longtermism generally, a philosophy prevalent among technocrats and tech billionaires who significantly shape the direction of technological progress in our world. Longtermists accomplish a significant justificatory win when they use a utilitarian calculation that pits all future humanity against concerns about current humans and societies. By making this argument, they are able to make abstract (and uncertain) benefits for an infinite group of people outweigh concrete harms for current people, a nifty trick given that it also often seems to align with their own personal benefits related to status, power and profit-making. In the AI space, specifically, this has worked well. As Jecker et al1 point out, while AI leaders have spoken openly and loudly about existential risk (X-risk), conjuring thoughts of sci-fi fantasies rather than realities (a strategy that also serves to hype their work well beyond its current capacity), this discourse fails to engage with concrete and less existential risks that have significant impacts. These risks are inconvenient to development in the Silicon Valley model, which requires people to ‘move fast and break things’ rather than proceed carefully and thoughtfully. Not only that but …






Author's Profile

Emma Tumilty
University of Texas Medical Branch