Singularitarianism and schizophrenia

AI and Society 32 (4):573-590 (2017)

Abstract

Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson’s core theories of the ecology of mind, schismogenesis, and the double bind are revisited here, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics with a twofold aim: the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and an explanatory analysis of the reasons behind such a polarized outcome of contradictory views regarding the future of robots. Firstly, the paper applies the Batesonian ecology of mind to construct a unified roboethical framework that endorses a flat ontology embracing multiple forms of agency, borrowing elements from Floridi’s information ethics, classic virtue ethics, Felix Guattari’s ecosophy, Braidotti’s posthumanism, and the Japanese animist doctrine of Rinri. The proposed framework is intended as a pragmatic solution to the endless dispute over the nature of consciousness and the natural/artificial dichotomy, and as a further argument against the recognition of future artificial agency as a potential existential threat. Secondly, schismogenic analysis is employed to describe the emergence of hostile human–robot cultural contact, tracing its origins from the early scientific discourse of man–machine symbiosis up to contemporary countermeasures against superintelligent agents. Thirdly, Bateson’s double bind theory is utilized as a methodological tool for analyzing humanity’s collective agency, leading to the hypothesis of a collective schizophrenic symptomatology arising from the constancy and intensity of conflicting messages emitted by both proponents and opponents of artificial intelligence. The treatment of the double bind is the mirroring “therapeutic double bind,” and the article concludes by proposing the conceptual pragmatic imperative necessary for such a therapeutic condition to follow: humanity’s conscious habituation to danger and familiarization with its possible future extinction, as the result of a progressive blurring of natural and artificial agency, succeeded by a wholly non-organic form of intelligent agency.

Similar books and articles

Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
The Problem of Moral Agency in Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW).
Artificial intelligence and the ethics of human extinction. T. Lorenc - 2015 - Journal of Consciousness Studies 22 (9-10):194-214.

Author's Profile

Vassilis Galanos
University of Edinburgh

References found in this work

The posthuman. Rosi Braidotti - 2013 - Malden, MA, USA: Polity Press.
The Question Concerning Technology and Other Essays. Martin Heidegger & William Lovitt - 1981 - International Journal for Philosophy of Religion 12 (3):186-188.
The will to power. Friedrich Wilhelm Nietzsche - 1968 - Mineola, New York: Dover Publications. Edited by Anthony M. Ludovici.
The fourth revolution. Luciano Floridi - 2012 - The Philosophers' Magazine 57 (57):96-101.
