Robot Ethics

Edited by Vincent C. Müller (Universität Erlangen-Nürnberg)
About this topic
Summary Robot ethics concerns the ethical problems raised by the use of robots, as well as the ethical status of robots themselves and the attempt to make them ethical (the latter is often called "machine ethics"). On PhilPapers, the long-term risk to humanity from AI and robotics is filed under "Ethics of Artificial Intelligence" and "Artificial Intelligence Safety".
Key works A classic discussion is Wallach & Allen 2008, and a recent textbook is Tzafestas 2016. Relevant papers are collected in Lin et al 2011 and Veruggio et al 2011 (and, earlier, in Capurro & Nagenborg 2009). Classic problems are the use of robots in war (see Di Nucci & Santoni de Sio 2016) and in healthcare, the responsibility for their actions, the need to adjust human ethical and legal norms to robotics, and the overall impact on humanity. Some sources on the field are collected at http://www.pt-ai.org/TG-ELS/
Introductions Consult the systematic survey Müller 2020 (for the Stanford Encyclopedia of Philosophy). A fine short introduction is Asaro 2006, along with the introductions in Lin et al 2011, Veruggio et al 2011 and Capurro & Nagenborg 2009. (See also the collection Capurro manuscript.)
Contents
503 found (showing 1–50)
  1. Can a robot lie? Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  2. Virtues, robots, and the enactive self. Anco Peeters - manuscript
    Virtue ethics enjoys new-found attention in philosophy of technology and philosophical psychology. This attention informs the growing realization that virtue has an important role to play in the ethical evaluation of human–technology relations. But it remains unclear which cognitive processes ground such interactions in both their regular and virtuous forms. This paper proposes that an embodied, enactive cognition approach aptly captures the various ways persons and artefacts interact, while at the same time avoiding the explanatory problems its functionalist alternative faces. (...)
  3. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  4. If robots are people, can they be made for profit? Commercial implications of robot personhood. Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  5. Deontology and Safe Artificial Intelligence. William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  6. Commentary: Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Geoff Keeling - forthcoming - Frontiers in Behavioral Neuroscience.
  7. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Ronald Leenes, Erica Palmerini, Bert-Jaap Koops, Andrea Bertolini, Pericle Salvini & Federica Lucivero - forthcoming - Law, Innovation and Technology.
    Robots are slowly, but certainly, entering people's professional and private lives. They require the attention of regulators due to the challenges they present to existing legal frameworks and the new legal and ethical questions they raise. This paper discusses four major regulatory dilemmas in the field of robotics: how to keep up with technological advances; how to strike a balance between stimulating innovation and the protection of fundamental rights and values; whether to affirm prevalent social norms or nudge social norms (...)
  8. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots. Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  9. Why AI May Undermine Phronesis and What to Do about It. Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. The critic questions: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge such objection but do not consider it seriously. In this (...)
  10. The Point of Blaming AI Systems. Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  11. How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  12. Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming. Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  13. Impossibility of Artificial Inventors. Matt Blaszczyk - 2024 - Hastings Sci. And Tech. L.J 16:73.
    Recently, the United Kingdom Supreme Court decided that only natural persons can be considered inventors. A year before, the United States Court of Appeals for the Federal Circuit issued a similar decision. In fact, so have many courts all over the world. This Article analyses these decisions, argues that the courts got it right, and finds that artificial inventorship is at odds with patent law doctrine, theory, and philosophy. The Article challenges the intellectual property (IP) post-humanists, exposing the analytical (...)
  14. The Ethics of Automating Therapy. Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - IEET White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  15. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  16. Artificial Intelligence and Universal Values. Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to maximize (...)
  17. Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these questions, the paper (...)
  18. The Many Meanings of Vulnerability in the AI Act and the One Missing. Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  19. A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  20. Patient Preferences Concerning Humanoid Features in Healthcare Robots. Dane Leigh Gogoshin - 2024 - Science and Engineering Ethics 30 (6):1-16.
    In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard (...)
  21. Engineered Wisdom for Learning Machines. Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
  22. Fiduciary requirements for virtual assistants. Leonie Koessler - 2024 - Ethics and Information Technology 26 (2):1-18.
    Virtual assistants (VAs), like Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, are on the rise. However, despite allegedly being ‘assistants’ to users, they ultimately help firms to maximise profits. With more and more tasks and leeway bestowed upon VAs, the severity as well as the extent of conflicts of interest between firms and users increase. This article builds on the common law field of fiduciary law to argue why and how regulators should address this phenomenon. First, the functions of VAs (...)
  23. Research Handbook on Meaningful Human Control of Artificial Intelligence Systems. Giulio Mecacci, D. Amoroso, L. Cavalcante Siebert, D. Abbink, J. van den Hoven & F. Santoni de Sio (eds.) - 2024 - Edward Elgar Publishing.
  24. The entangled human being – a new materialist approach to anthropology of technology. Anna Puzio - 2024 - AI Ethics.
    Technological advancements raise anthropological questions: How do humans differ from technology? Which human capabilities are unique? Is it possible for robots to exhibit consciousness or intelligence, capacities once taken to be exclusively human? Despite the evident need for an anthropological lens in both societal and research contexts, the philosophical anthropology of technology has not been established as a set discipline with a defined set of theories, especially concerning emerging technologies. In this paper, I will utilize a New Materialist approach, focusing (...)
  25. Towards an Eco-Relational Approach: Relational Approaches Must Be Applied in Ethics and Law. Anna Puzio - 2024 - Philosophy and Technology 37 (67):1-5.
    Relational approaches are gaining more and more importance in philosophy of technology. This brings up the critical question of how they can be implemented in applied ethics, law, and practice. In “Extremely Relational Robots: Implications for Law and Ethics”, Nancy S. Jecker (2024) comments on my article “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics” (Puzio, 2024), in which I present a deep relational, “eco-relational approach”. In this reply, I address two of Jecker’s criticisms: in section 3, I (...)
  26. IT & C, Volumul 3, Numărul 3, Septembrie 2024. Nicolae Sfetcu - 2024 - It and C 3 (3).
    The IT & C journal is a quarterly publication covering information technology and communications and related areas of study and practice. Contents: EDITORIAL: Tools Used in AI Development – The Turing Test. INFORMATION TECHNOLOGY: Trends in the Evolution of Artificial Intelligence – Intelligent Agents. TELECOMMUNICATIONS: Security in 5G Telecommunications Networks with (...)
  27. Reframing Deception for Human-Centered AI. Steven Umbrello & Simone Natale - 2024 - International Journal of Social Robotics 16 (11-12):2223–2241.
    The philosophical, legal, and HCI literature concerning artificial intelligence (AI) has explored the ethical implications and values that these systems will impact on. One aspect that has been only partially explored, however, is the role of deception. Due to the negative connotation of this term, research in AI and Human–Computer Interaction (HCI) has mainly considered deception to describe exceptional situations in which the technology either does not work or is used for malicious purposes. Recent theoretical and historical work, however, has (...)
  28. A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace. Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. -/- Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  29. Mental time-travel, semantic flexibility, and A.I. ethics. Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  30. Do You Follow?: A Fully Automated System for Adaptive Robot Presenters. Agnes Axelsson & Gabriel Skantze - 2023 - HRI '23: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction 23:102-111.
    An interesting application for social robots is to act as a presenter, for example as a museum guide. In this paper, we present a fully automated system architecture for building adaptive presentations for embodied agents. The presentation is generated from a knowledge graph, which is also used to track the grounding state of information, based on multimodal feedback from the user. We introduce a novel way to use large-scale language models (GPT-3 in our case) to lexicalise arbitrary knowledge graph triples, (...)
  31. Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication. Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  32. Robot Ethics. Mark Coeckelbergh (2022). Cambridge, MIT Press. vii + 191 pp, $16.95 (pb). [REVIEW] Nicholas Barrow - 2023 - Journal of Applied Philosophy (5):970-972.
  33. Artificial Dispositions: Investigating Ethical and Metaphysical Issues. William A. Bauer & Anna Marmodoro (eds.) - 2023 - New York: Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  34. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  35. Should the State Prohibit the Production of Artificial Persons? Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  36. The seven troubles with norm-compliant robots. Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  37. Reasons to Punish Autonomous Robots. Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
  38. Les revendications de droits pour les robots : constructions et conflits autour d’une éthique de la robotique. Charles Corval - 2023 - Implications Philosophiques.
    This work examines contemporary claims for robot rights. It presents the main lines of argument that have been developed in favour of granting ethical consideration or positive rights to these machines. It relates these arguments to an action-research project in order to offer a critical perspective on the idea of robot rights. Finally, it shows the complex relationship between the narratives of modernity and the claim of rights for robots. This article presents contemporary vindications (...)
  39. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of. Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider public (...)
  40. AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths? Mihailis E. Diamantis, Rebekah Cochran & Miranda Dam - 2023 - In Gregory Robson & Jonathan Y. Tsou (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York, NY, USA: Routledge.
    This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. -/- Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a (...)
  41. (1 other version) Robots, Rebukes, and Relationships: Confucian Ethics and the Study of Human-Robot Interactions. Alexis Elder - 2023 - Res Philosophica 100 (1):43-62.
    The status and functioning of shame is contested in moral psychology. In much of anglophone philosophy and psychology, it is presumed to be largely destructive, while in Confucian philosophy and many East Asian communities, it is positively associated with moral development. Recent work in human-robot interaction offers a unique opportunity to investigate how shame functions while controlling for confounding variables of interpersonal interaction. One research program suggests a Confucian strategy for using robots to rebuke participants, but results from experiments with (...)
  42. What Confucian Ethics Can Teach Us About Designing Caregiving Robots for Geriatric Patients. Alexis Elder - 2023 - Digital Society 2 (1).
    Caregiving robots are often lauded for their potential to assist with geriatric care. While seniors can be wise and mature, possessing valuable life experience, they can also present a variety of ethical challenges, from prevalence of racism and sexism, to troubled relationships, histories of abusive behavior, and aggression, mood swings and impulsive behavior associated with cognitive decline. I draw on Confucian ethics, especially the concept of filial piety, to address these issues. Confucian scholars have developed a rich set of theoretical (...)
  43. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  44. A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  45. Connected and Automated Vehicles: Integrating Engineering and Ethics. Fabio Fossa & Federico Cheli (eds.) - 2023 - Cham: Springer.
    This book reports on theoretical and practical analyses of the ethical challenges connected to driving automation. It also aims at discussing issues that have arisen from the European Commission 2020 report “Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility”. Gathering contributions by philosophers, social scientists, mechanical engineers, and UI designers, the book discusses key ethical concerns relating to responsibility and personal autonomy, privacy, safety, and cybersecurity, as well as explainability and human-machine interaction. On (...)
  46. Granting negative rights to humanoid robots. Cindy Friedman - 2023 - Frontiers in Artificial Intelligence and Applications 366:145-154.
    The paper argues that we should grant negative rights to humanoid robots. These are rights that relate to non-interference e.g., freedom from violence, or freedom from discrimination. Doing so will prevent moral degradation to our human society. The consideration of robot moral status has seen a progression towards the consideration of robot rights. This is a controversial debate, with many scholars seeing the consideration of robot rights in black and white. It is, however, valuable to take a nuanced approach. This (...)
  47. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US. Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  48. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing. Blake Hereth & Nicholas Evans - 2023 - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  49. Implementing AI Ethics in the Design of AI-assisted Rescue Robots. Désirée Martin, Michael W. Schmidt & Rafaela Hillerbrand - 2023 - IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS).
    For implementing ethics in AI technology, there are at least two major ethical challenges. First, there are various competing AI ethics guidelines and consequently there is a need for a systematic overview of the relevant values that should be considered. Second, if the relevant values have been identified, there is a need for an indicator system that helps assessing if certain design features are positively or negatively affecting their implementation. This indicator system will vary with regard to specific forms of (...)
  50. African Reasons Why Artificial Intelligence Should Not Maximize Utility (Repr.). Thaddeus Metz - 2023 - In Aribiah Attoe, Samuel Segun, Victor Nweke & John-Bosco Umezurike (eds.), Conversations on African Philosophy of Mind, Consciousness and AI. Springer. pp. 139-152.
    Reprint of a chapter first appearing in African Values, Ethics, and Technology: Questions, Issues, and Approaches (2021).