Abstract
The idea of superintelligence is a source of mainly philosophical and ethical considerations. Those considerations are rooted in the idea that an entity more intelligent than humans may emerge at some point in the future. For obvious reasons, superintelligence is considered a kind of existential threat to humanity. In this essay, we discuss two ideas. The first concerns the putative nature of a future superintelligence, which does not necessarily need to be harmful to humanity. Our key claim is that a superintelligence need not regard its own survival as the highest value. As a non-biological form of intelligence, it is not clear what kind of attitude a superintelligent entity might develop towards living organisms. Our second idea refers to the possible revelation of superintelligence. We assume that the self-revelation of such an entity cannot be random. The metaphor of God as a superintelligence is introduced here as a helpful conceptual tool.