Abstract
Jecker et al 1 offer a valuable analysis of risk discussion in relation to Artificial Intelligence (AI) and in the context of longtermism more generally, a philosophy prevalent among the technocrats and tech billionaires who significantly shape the direction of technological progress in our world. Longtermists achieve a significant justificatory win when they use a utilitarian calculation that pits all of future humanity against concerns about current humans and societies. By making this argument, they are able to let abstract (and uncertain) benefits for an infinite group of future people outweigh concrete harms to current people: a nifty trick, given that it also often seems to align with their own interests in status, power and profit-making. In the AI space specifically, this has worked well. As Jecker et al 1 point out, while AI leaders have spoken openly and loudly about existential risk (X-risk) in terms that conjure sci-fi fantasies rather than realities (a strategy that also serves to hype their work well beyond its current capacity), this discourse fails to engage with concrete, less existential risks that have significant impacts. Such risks are inconvenient to development in the Silicon Valley model, which requires people to ‘move fast and break things’ rather than proceed carefully and thoughtfully. Not only that but …