About this topic
Summary In the early 2000s, James Moor set out four classes of ethical machine, advising that the near-term focus of machine ethics research should be on "explicit ethical agents": agents designed from an understanding of human theoretical ethics to operate in accordance with its principles. Beyond this class, the ultimate aim of inquiry into machine ethics is to understand human morality and natural science well enough to engineer a fully autonomous moral machine. This sub-category focuses on supporting that inquiry. Work on other sorts of computer applications and their ethical impacts appears in other categories, including Ethics of Artificial Intelligence, Moral Status of Artificial Systems, Robot Ethics, Algorithmic Fairness, Computer Ethics, and others. Machine ethics is ethics, and it is also a study of machines. As ethicists, machine ethicists ask why people, human beings, and other organisms do what they do when they do it, and what makes those things the right things to do. In addition, machine ethicists work out how to articulate such processes in an independent artificial system (rather than by parenting a biological child or training a human minion, as on traditional alternatives). Machine ethics researchers therefore engage directly with rapidly advancing work in cognitive science and psychology alongside that in robotics and AI, applied ethics (such as medical ethics) and philosophy of mind, computer modeling and data science, and so on. Drawing from so many disciplines, each advancing rapidly and with impacts of its own, machine ethics sits in the middle of a maelstrom of current research activity: advances in materials science and physical chemistry leverage advances in cognitive science and neurology, which in turn feed advances in AI and robotics, for example with regard to their interpretability. Putting all of this together is the challenge for the machine ethics researcher.
This sub-category is intended to support efforts to meet this challenge.  
Key works: Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
Introductions: Anderson & Anderson 2007; Segun 2021; Powers 2011; Moor 2006
Contents
550 found
1 — 50 / 550
  1. Ethics and Safety of Artificial Intelligence: practical tools for creating "good" models [Ética e Segurança da Inteligência Artificial: ferramentas práticas para se criar "bons" modelos].Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui with the aim of promoting awareness of the importance of ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  2. Three tragedies and three shades of finitude that shape human life in the AI era.Manh-Tung Ho & Manh-Toan Ho - manuscript
    This essay seeks to understand what it means for the human collective when AI technologies have become a predominant force in each of our lives through identifying three moral dilemmas (i.e., tragedy of the commons, tragedy of commonsense morality, tragedy of apathy) that shape human choices. In the first part, we articulate AI-driven versions of the three moral dilemmas. Then, in the second part, drawing from evolutionary psychology, existentialism, and East Asian philosophies, we argue that a deep appreciation of three (...)
  3. Can a robot lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  4. (1 other version)Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems.Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  5. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  6. First human upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, yet it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to monitor and control all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  7. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several theories. A (...)
  8. (1 other version)Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  9. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models.Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  10. Does Predictive Sentencing Make Sense?Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  11. A qualified defense of top-down approaches in machine ethics.Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  12. I Contain Multitudes: A Typology of Digital Doppelgängers.William D'Alessandro, Trenton W. Ford & Michael Yankoski - forthcoming - American Journal of Bioethics.
    In "Digital Doppelgängers and Lifespan Extension: What Matters?", Iglesias et al. argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”. Since person-extension aims are deeply heterogeneous, however, no single type of doppelgänger system is likely to suffice to meet all such needs. We propose a partial typology of doppelgängers—the family heirloom, the research archive, the public legacy, the project surrogate—and suggest appropriate training methods, design features and (...)
  13. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  14. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of Acm Iui:1-4.
    There has been an increasing interest into how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correction. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer-up such (...)
  15. What makes full artificial agents morally different.Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs and presents the (...)
  16. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  17. The moral decision machine: a challenge for artificial moral agency based on moral deference.Z. Gudmunsen - forthcoming - AI and Ethics.
    Humans are responsible moral agents in part because they can competently respond to moral reasons. Several philosophers have argued that artificial agents cannot do this and therefore cannot be responsible moral agents. I present a counterexample to these arguments: the ‘Moral Decision Machine’. I argue that the ‘Moral Decision Machine’ responds to moral reasons just as competently as humans do. However, I suggest that, while a hopeful development, this does not warrant strong optimism about ‘artificial moral agency’. The ‘Moral Decision (...)
  18. Machine morality, moral progress, and the looming environmental disaster.Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  19. Disagreement, AI alignment, and bargaining.Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  20. Handbook of Research on Machine Ethics and Morality.Steven John Thompson (ed.) - forthcoming - Hershey, PA: IGI-Global.
    This book is dedicated to expert research topics, and analyses of ethics-related inquiry, at the machine ethics and morality level: key players, benefits, problems, policies, and strategies. Gathering some of the leading voices that recognize and understand the complexities and intricacies of human-machine ethics provides a resourceful compendium to be accessed by decision-makers and theorists concerned with identification and adoption of human-machine ethics initiatives, leading to needed policy adoption and reform for human-machine entities, their technologies, and their societal and legal (...)
  21. Existentialist risk and value misalignment.Ariela Tubert & Justin Tiehen - forthcoming - Philosophical Studies.
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
  22. Variable Value Alignment by Design; averting risks with robot religion.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Abstract: One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, to realize project ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different values as ideal, (...)
  23. And Then the Hammer Broke: Reflections on Machine Ethics from Feminist Philosophy of Science.Andre Ye - forthcoming - Pacific University Philosophy Conference.
    Vision is an important metaphor in ethical and political questions of knowledge. The feminist philosopher Donna Haraway points out the “perverse” nature of an intrusive, alienating, all-seeing vision (to which we might cry out “stop looking at me!”), but also encourages us to embrace the embodied nature of sight and its promises for genuinely situated knowledge. Current technologies of machine vision – surveillance cameras, drones (for war or recreation), iPhone cameras – are usually construed as instances of the former rather (...)
  24. Estimating weights of reasons using metaheuristics: A hybrid approach to machine ethics.Benoît Alcaraz, Aleks Knoks & David Streit - 2024 - In Sanmay Das, Brian Patrick Green, Kush Varshney, Marianna Ganapini & Andrea Renda (eds.), Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24). ACM Press. pp. 27-38.
    We present a new approach to representation and acquisition of normative information for machine ethics. It combines an influential philosophical account of the fundamental structure of morality with argumentation theory and machine learning. According to the philosophical account, the deontic status of an action – whether it is required, forbidden, or permissible – is determined through the interaction of “normative reasons” of varying strengths or weights. We first provide a formal characterization of this account, by modeling it in (weighted) argumentation graphs. (...)
  25. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements?Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect that, on the (...)
  26. Aligning artificial intelligence with moral intuitions: an intuitionist approach to the alignment problem.Dario Cecchini, Michael Pflanzer & Veljko Dubljevic - 2024 - AI and Ethics:1-11.
    As artificial intelligence (AI) continues to advance, one key challenge is ensuring that AI aligns with certain values. However, in the current diverse and democratic society, reaching a normative consensus is complex. This paper delves into the methodological aspect of how AI ethicists can effectively determine which values AI should uphold. After reviewing the most influential methodologies, we detail an intuitionist research agenda that offers guidelines for aligning AI applications with a limited set of reliable moral intuitions, each underlying a (...)
  27. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - Hicss 57: Hawaii International Conference on System Sciences, Honolulu, Hi.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  28. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  29. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to maximize (...)
  30. The Hazards of Putting Ethics on Autopilot.Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  31. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  32. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  33. Moral sensitivity and the limits of artificial moral agents.Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  34. Apprehending AI moral purpose in practical wisdom.Mark Graves - 2024 - AI and Society 39 (3):1335-1348.
    Practical wisdom enables moral decision-making and action by aligning one’s apprehension of proximate goods with a distal, socially embedded interpretation of a more ultimate Good. A focus on purpose within the overall process mutually informs human moral psychology and moral AI development in their examinations of practical wisdom. AI practical wisdom could ground an AI system’s apprehension of reality in a sociotechnical moral process committed to orienting AI development and action in light of a pluralistic, diverse interpretation of that Good. (...)
  35. Five premise factors of human-machine interaction in the age of artificial intelligence [Năm yếu tố tiền đề của tương tác giữa người và máy trong kỷ nguyên trí tuệ nhân tạo].Manh-Tung Ho & T. Hong-Kong Nguyen - 2024 - Tạp Chí Thông Tin Và Truyền Thông 4 (4/2024):84-91.
    This article introduces five premise factors with the aim of raising awareness of the relationship between humans and machines in a context where technology increasingly reshapes everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; (...)
  36. ‘Virtue gone nuts’: Machine Ethics in Ian McEwan’s Machines Like Me (2019).Anna Margaretha Horatschek - 2024 - In Prem Saran Satsangi, Anna Margaretha Horatschek & Anand Srivastav (eds.), Consciousness Studies in Sciences and Humanities: Eastern and Western Perspectives. Springer Verlag. pp. 125-132.
    According to a 2016 survey conducted by Müller and Bostrom, 30% of top experts on Artificial Intelligence (AI) expect bad or very bad consequences for humanity, if super-intelligent High/Human-Level Machine Intelligence (HLMI) can be developed. As a counterpoint, machine and robot ethics are being developed in the European Parliament and in international organisations like the Association for the Advancement of Artificial Intelligence (AAAI) and the Institute of Electrical and Electronics Engineers (IEEE) to ensure that AI will benefit mankind. Ian McEwan’s (...)
  37. Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons?Catrin Misselhorn - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):346-359.
    It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy since they might stay longer in their homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity (...)
  38. Spanning in and Spacing out? A Reply to Eva.Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-4.
    We reply to Eva's comment on our "New Possibilities for Fair Algorithms," comparing and contrasting our Spanning criterion with his suggested Spacing criterion.
  39. The entangled human being – a new materialist approach to anthropology of technology.Anna Puzio - 2024 - AI Ethics.
    Technological advancements raise anthropological questions: How do humans differ from technology? Which human capabilities are unique? Is it possible for robots to exhibit consciousness or intelligence, capacities once taken to be exclusively human? Despite the evident need for an anthropological lens in both societal and research contexts, the philosophical anthropology of technology has not been established as a set discipline with a defined set of theories, especially concerning emerging technologies. In this paper, I will utilize a New Materialist approach, focusing (...)
  40. Moral disagreement and artificial intelligence.Pamela Robinson - 2024 - AI and Society 39 (5):2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. _Moral solutions_ apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  41. Can artificial agents act? Conceptual constellation for a de-humanized theory of action.Francesco Striano - 2024 - Scienza E Filosofia 31:224-244.
    Can artificial agents act? Conceptual constellation for a de-humanised theory of action This paper embarks on an exploration of the concept of agency, traditionally ascribed to humans, in the context of artificial intelligence (AI). In the first two sections, it challenges the conventional dichotomy of human agency and non- human instrumentality, arguing that advancements in technology have blurred these boundaries. In the third section, the paper introduces the reader to the philosophical perspective of new materialism, which assigns causal power to (...)
  42. A Confucian Algorithm for Autonomous Vehicles.Tingting Sui & Sebastian Sunday Grève - 2024 - Science and Engineering Ethics 30 (52):1-22.
    Any moral algorithm for autonomous vehicles must provide a practical solution to moral problems of the trolley type, in which all possible courses of action will result in damage, injury, or death. This article discusses a hitherto neglected variety of this type of problem, based on a recent psychological study whose results are reported here. It argues that the most adequate solution to this problem will be achieved by a moral algorithm that is based on Confucian ethics. In addition to (...)
  43. Exploring Affinity-Based Reinforcement Learning for Designing Artificial Virtuous Agents in Stochastic Environments.Ajay Vishwanath & Christian Omlin - 2024 - In Mina Farmanbar, Maria Tzamtzi, Ajit Kumar Verma & Antorweep Chakravorty (eds.), Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Applications: 1st International Conference on Frontiers of AI, Ethics, and Multidisciplinary Applications (FAIEMA), Greece, 2023. Springer Nature Singapore. pp. 25-38.
    Artificial virtuous agents are artificial intelligence agents capable of virtuous behavior. Virtues are defined as an excellence in moral character, for example, compassion, honesty, etc. Developing virtues in AI comes under the umbrella of machine ethics research, which aims to embed ethical theories into artificial intelligence systems. We have recently suggested the use of affinity-based reinforcement learning to impart virtuous behavior. Such a technique uses policy regularization on reinforcement learning algorithms, and it has advantages such as interpretability and convergence properties. (...)
  44. Ethical Preferences in the Digital World: The EXOSOUL Questionnaire.Costanza Alfieri, Donatella Donati, Simone Gozzano, Lorenzo Greco & Marco Segala - 2023 - In Paul Lukowicz, Sven Mayer, Janin Koch, John Shawe-Taylor & Ilaria Tiddi (eds.), Ebook: HHAI 2023: Augmenting Human Intellect. IOS Press. pp. 290-99.
  45. Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
    9 citations
  46. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective.Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is not (...)
  47. Artificial Dispositions: Investigating Ethical and Metaphysical Issues.William A. Bauer & Anna Marmodoro (eds.) - 2023 - New York: Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  48. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  49. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making?Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
    6 citations
  50. A Comparative Defense of Self-Initiated Prospective Moral Answerability for Autonomous Robot Harm.Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
1 — 50 / 550