Abstract
Debates about the development of artificial superintelligence and its potential threats to humanity tend to assume that such a system would be historically unprecedented, and that its behavior must therefore be predicted from first principles. I argue that this is not true: we can analyze multiagent intelligent systems (the best candidates for practical superintelligence) by comparing them to states, which also unite heterogeneous intelligences to achieve superhuman goals. States provide a model for several problems discussed in the literature on superintelligence, such as principal-agent problems and instrumental convergence. Philosophical arguments about governance therefore offer possible solutions to these problems, or expose flaws in previously suggested solutions. In particular, the liberal concept of checks and balances and Hannah Arendt's concept of legitimacy describe mechanisms by which constituents' preferences constrain state behavior, mechanisms that could also apply to artificial systems. However, these same arguments show how present-day computational developments could destabilize the international order by reducing the number of decision-makers involved in state actions. Thus, interstate competition serves not only as a model for the behavior of dangerous computational intelligences but also as the impetus for their development.