Abstract
Many leading intellectuals, technologists, commentators, and ordinary people have in recent weeks become embroiled in a fiery debate (yet to hit the pages of scholarly journals) on the alleged need to press pause on the development of generative artificial intelligence (AI). Spurred by an open letter from the Future of Life Institute (FLI) calling for just such a pause, the debate occasioned, at lightning speed, a large number of responses from diverse sources pursuing a variety of argumentative strategies. Not all respondents resist the call for a pause. For example, the Distributed AI Research Institute (DAIRI) has issued a statement claiming that while the pause is indeed desirable, the FLI’s focus on predicted existential risks to the exclusion of present-day concerns is misguided. The discussion shows no signs of abating.
I wish to raise an objection both to the FLI’s open letter and to the statement by DAIRI (I will refer to both collectively as “letters” or “open letters”). While this objection is, to my knowledge, novel in the context of the issues raised by the two Institutes, I pretend to little originality in voicing it. Rather, I offer a simple (yet unappreciated) application of some insights from political philosophy and economics to show that the recommendations contained in both missives are severely underargued.