Results for 'superintelligent artificial intelligence'

972 results found
  1. The ethics of artificial intelligence: superintelligence, life 3.0 and robot rights. Kati Tusinski Berg - 2018 - Journal of Media Ethics 33 (3):151-153.
  2. Artificial Intelligence versus Agape Love. Ted Peters - 2019 - Forum Philosophicum: International Journal for Philosophy 24 (2):259-278.
    As Artificial Intelligence researchers attempt to emulate human intelligence and transhumanists work toward superintelligence, philosophers and theologians confront a dilemma: we must either, on the one horn, (1) abandon the view that the defining feature of humanity is rationality and propose an account of spirituality that dissociates it from reason; or, on the other horn, (2) find a way to invalidate the growing faith in a posthuman future shaped by the enhancements of Intelligence Amplification (IA) or (...)
  3. (1 other version) Future progress in artificial intelligence: A survey of expert opinion. Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these (...)
    39 citations
  4. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made (...)
    35 citations
  5. AAAI: an Argument Against Artificial Intelligence. Sander Beckers - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to (...)
    3 citations
  6. Can Artificial Intelligence Lead Us to Genuine Virtue? A Confucian Perspective. Stephen C. Angle - 2021 - In Bing Song (ed.), Intelligence and Wisdom: Artificial Intelligence Meets Chinese Philosophers. Springer Singapore. pp. 49-64.
    Philosophers, technologists, and pundits are beginning to recognize the deep ethical questions raised by artificial intelligence. So far, attention has concentrated in three areas: how we are being damaged or controlled by profit-driven algorithms, and what to do about it; how to ensure that autonomous, intelligent machines make “good” decisions, and how to define what these decisions are; and how to think about the possibility of artificial superintelligence surpassing and perhaps controlling us. To the extent that theorists (...)
  7. Artificial Intelligence 2024 - 2034: What to expect in the next ten years. Demetrius Floudas - 2024 - 'Agi Talks' Series at Daniweb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty. - According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  8. Philosophy and theory of artificial intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4 - 5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and (...)
    2 citations
  9. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can (...)
    3 citations
  10. Future progress in artificial intelligence: A poll among experts. Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these (...)
    5 citations
  11. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era. Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence (...)
  12. Ethics of Artificial Intelligence. S. Matthew Liao (ed.) - 2020 - Oxford University Press.
    "Featuring seventeen original essays on the ethics of Artificial Intelligence by some of the most prominent AI scientists and academic philosophers today, this volume represents the state-of-the-art thinking in this fast-growing field and highlights some of the central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment as a result of automation, how to avoid designing AI systems that perpetuate existing biases, and how to determine whether an AI is (...)
    2 citations
  13. How Artificial Intelligence Affects School Education. Vesselina Kachakova - 2023 - Filosofiya-Philosophy 32 (4):430-439.
    The text examines how the integration of technology and artificial intelligence into school education fundamentally alters classical notions of the role and functions of education held by representatives of various scientific disciplines. The literature review is structured around the following research questions: 1) How does technology (including artificial intelligence) reshape the sociologist Emile Durkheim's thesis on the authority of the teacher and their role in the socialization of students?; 2) How does the presence of "superintelligent" (...)
  14. Rebooting AI: Building Artificial Intelligence We Can Trust. Gary Marcus & Ernest Davis - 2019 - Vintage.
    Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest (...)
    13 citations
  15. Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some (...)
    2 citations
  16. The Impact of Artificial Intelligence on Human Rights Legislation: A Plea for an AI Convention. John-Stewart Gordon - 2023 - Springer Nature Switzerland.
    The unmatched technological achievements in artificial intelligence (AI), robotics, computer science, and related fields over the last few decades can be considered a success story. The technological sophistication has been so groundbreaking in various types of applications that many experts believe that we will see, at some point or another, the emergence of general AI (AGI) and, eventually, superintelligence. This book examines the impact of AI on human rights by focusing on potential risks and human rights legislation and (...)
    1 citation
  17. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2021 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses (...)
    1 citation
  18. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient (...)
    46 citations
  19. Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Roman Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the 'leakproofing' of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay the potential negative effects of the technological singularity while allowing humanity to benefit from the superintelligence.
    5 citations
  20. Ethical issues in advanced artificial intelligence. Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
    30 citations
  21. Superintelligence: paths, dangers, strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate (...)
  22. On quantum computing for artificial superintelligence. Anna Grabowska & Artur Gunia - 2024 - European Journal for Philosophy of Science 14 (2):1-30.
    Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that (...)
  23. Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us, not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things (...)
    6 citations
  24. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó (...)
    3 citations
  25. Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation–state to war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even war on each other, in part due to the 1945 UN Charter, which states Member States should (...)
  26. Sam Harris and the Myth of Artificial Intelligence. Jobst Landgrebe & Barry Smith - 2023 - In Sandra Woien (ed.), Sam Harris: Critical Responses. Chicago: Carus Books. pp. 153-61.
    Sam Harris is a contemporary illustration of the difficulties standing in the way of coherent interdisciplinary thinking in an age where science and the humanities have drifted so far apart. We are concerned here with Harris’s views on AI, and specifically with his view according to which, with the advance of AI, there will evolve a machine superintelligence with powers that far exceed those of the human mind. This he sees as something that is not merely possible, but rather a matter (...)
  27. Artificial consciousness in AI: a posthuman fallacy. M. Prabhu & J. Anil Premraj - forthcoming - AI and Society:1-14.
    Obsession toward technology has a long background of parallel evolution between humans and machines. This obsession became irrevocable when AI began to be a part of our daily lives. However, this AI integration became a subject of controversy when the fear of AI advancement in acquiring consciousness crept among mankind. Artificial consciousness is a long-debated topic in the field of artificial intelligence and neuroscience which has many ethical challenges and threats in society ranging from daily chores to (...)
    1 citation
  28. Is superintelligence necessarily moral? Leonard Dung - forthcoming - Analysis.
    Numerous authors have expressed concern that advanced artificial intelligence (AI) poses an existential risk to humanity. These authors argue that we might build AI which is vastly intellectually superior to humans (a ‘superintelligence’), and which optimizes for goals that strike us as morally bad, or even irrational. Thus, this argument assumes that a superintelligence might have morally bad goals. However, according to some views, a superintelligence necessarily has morally adequate goals. This might be the case either because abilities (...)
    1 citation
  29. Risk management standards and the active management of malicious intent in artificial superintelligence. Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, (...)
  30. Are superintelligent robots entitled to human rights? John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  31. How long before superintelligence? Nick Bostrom - 1998 - International Journal of Futures Studies 2.
    This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach (...)
    36 citations
  32. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)
    2 citations
  33. Appearance in this list neither guarantees nor precludes a future review of the book. Abdoullaev, Azamat, Artificial Superintelligence, Moscow, Russia, EIS Encyclopedic Intelligent Systems, Ltd., 1999, pp. 184. Adams, Robert Merrihew, Finite and Infinite Goods, Oxford, UK, Oxford University Press, 1999, pp. 410, £35.00. [REVIEW] Theodor Adorno & Walter Benjamin - 1999 - Mind 108:432.
  34. Will Superintelligence Lead to Spiritual Enhancement? Ted Peters - 2022 - Religions 13 (5):399.
    If we human beings are successful at enhancing our intelligence through technology, will this count as spiritual advance? No. Intelligence alone, whether what we are born with or what is superseded by artificial intelligence or intelligence amplification, has no built-in moral compass. Christian spirituality values love more highly than intelligence, because love orients us toward God, toward the welfare of the neighbor, and toward the common good. Spiritual advance would require orienting our enhanced (...) toward loving God and neighbor with heart, mind, and soul.
  35. Hybrid collective intelligence in a human–AI society. Marieke M. M. Peeters, Jurriaan van Diggelen, Karel van den Bosch, Adelbert Bronkhorst, Mark A. Neerincx, Jan Maarten Schraagen & Stephan Raaijmakers - 2021 - AI and Society 36 (1):217-238.
    Within current debates about the future impact of Artificial Intelligence on human society, roughly three different perspectives can be recognised: the technology-centric perspective, claiming that AI will soon outperform humankind in all areas, and that the primary threat for humankind is superintelligence; the human-centric perspective, claiming that humans will always remain superior to AI when it comes to social and societal aspects, and that the main threat of AI is that humankind’s social nature is overlooked in technological designs; (...)
    11 citations
  36. Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that (...)
    18 citations
  37. Alien Minds. Susan Schneider - 2009 - In Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 225–242.
    This chapter first explains why it is likely that the alien civilizations we encounter will be forms of superintelligent artificial intelligence (SAI). Next, it turns to the question of whether superintelligent aliens can be conscious – whether it feels a certain way to be an alien, despite their non‐biological nature. The chapter draws from the literature in philosophy of AI, and urges that although we cannot be certain that superintelligent aliens can be conscious, it is (...)
  38. On the Prudential Irrationality of Mind Uploading. Nicholas Agar - 2014 - In Russell Blackford & Damien Broderick (eds.), Intelligence Unbound. Wiley. pp. 146–160.
    For Ray Kurzweil, artificial intelligence (AI) is not just about making artificial things intelligent; it's also about making humans artificially superintelligent. The author challenges Kurzweil's predictions about the destiny of the human mind. He argues that it is unlikely ever to be rational for human beings to upload their minds completely onto computers. The author uses the term “mind uploading” to describe two processes. Most straightforwardly, it describes the one‐off event when a fully biological being presses (...)
    2 citations
  39. Superintelligence: Paths, Dangers, Strategies vol. 1. Nick Bostrom - 2014 - Oxford University Press; 1st edition.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate (...)
    1 citation
  40. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume). Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  41. (1 other version) Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a (...)
  42. In Algorithms We Trust: Magical Thinking, Superintelligent AI and Quantum Computing. Nathan Schradle - 2020 - Zygon 55 (3):733-747.
    This article analyzes current attitudes toward artificial intelligence (AI) and quantum computing and argues that they represent a modern‐day form of magical thinking. It proposes that AI and quantum computing are thus excellent examples of the ways that traditional distinctions between religion, science, and magic fail to account for the vibrancy and energy that surround modern technologies.
    1 citation
  43. Answering Divine Love: Human Distinctiveness in the Light of Islam and Artificial Superintelligence. Yusuf Çelik - 2023 - Sophia 62 (4):679-696.
    In the Qur’an, human distinctiveness was first questioned by angels. These established denizens of the cosmos could not understand why God would create a seemingly pernicious human when immaculate devotees of God such as themselves existed. In other words, the angels asked the age-old question: what makes humans so special and different? Fast forward to our present age and this question is made relevant again in light of the encroaching arrival of an artificial superintelligence (ASI). Up to this point (...)
    1 citation
  44. The Singularity. David J. Chalmers - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 171–224.
    This chapter provides a rich philosophical discussion of superintelligence, a widely discussed piece that has encouraged philosophers of mind to take transhumanism, mind uploading, and the singularity more seriously. It starts with the argument for a singularity: is there good reason to believe that there will be an intelligence explosion? Next, the chapter considers how to negotiate the singularity: if it is possible that there will be a singularity, how can we maximize the chances of a good outcome? Finally, (...)
    8 citations
  45. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW] Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  46. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  47. Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers. Robert Sparrow - 2024 - AI and Society 39 (5):2439-2444.
    When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what (...)
    5 citations
  48. Existential Hope and Existential Despair in AI Apocalypticism and Transhumanism. Beth Singler - 2019 - Zygon 54 (1):156-176.
    Drawing on observations from on‐ and offline fieldwork among transhumanists and artificial superintelligence/singularity‐focused groups, this article will explore an anthropology of anxiety around the hoped for, or feared, posthuman future. It will lay out some of the varieties of existential hope and existential despair found in these discussions about predicted events such as the “end of the world” and place them within an anthropological theoretical framework. Two examples will be considered. First, the optimism observed at a transhumanist event will (...)
    4 citations
  49. What overarching ethical principle should a superintelligent AI follow? Atle Ottesen Søvik - 2022 - AI and Society 37 (4):1505-1518.
    What is the best overarching ethical principle to give a possible future superintelligent machine, given that we do not know what the best ethics are today or in the future? Eliezer Yudkowsky has suggested that a superintelligent AI should have as its goal to carry out the coherent extrapolated volition of humanity (CEV), the most coherent way of combining human goals. The article discusses some problems with this proposal and some alternatives suggested by Nick Bostrom. A slightly different (...)
  50. Two arguments against human-friendly AI. Ken Daley - 2021 - AI and Ethics 1 (1):435-444.
    The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI) which is an as-of-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an (...)
Results 1–50 of 972