Abstract
> Future people count. There could be a lot of them. We can make their lives better.
>
> -- William MacAskill, What We Owe The Future

> [Longtermism is] quite possibly the most dangerous secular belief system in the world today.
>
> -- Émile P. Torres, Against Longtermism

Philosophers,1 2 psychologists,3 4 politicians5 and even some tech billionaires6 have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. Some believe it poses an existential risk (X-Risk)7 to our species, potentially causing our extinction or bringing about the collapse of human civilisation as we know it. The above quote from philosopher Will MacAskill captures the key tenets of ‘longtermism’, an ethical standpoint that places the onus on current generations to prevent AI-related—and other—X-Risks for the sake of people living in the future.1 2 Developing from an adjacent social movement commonly associated with utilitarian philosophy, ‘effective altruism’, longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty.

However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now.8 9 Indeed, according to ‘strong’ longtermism,10 future needs should arguably take precedence over present ones. In essence, the claim is that there is greater expected utility in allocating available resources to preventing human extinction in the future than in focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations, who will far outweigh existing populations.1 Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm or destroy large swathes of humanity as …