Results for 'Responsibility gap'

983 found
  1. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at (...)
    76 citations
  2. The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in (...)
    15 citations
  3. AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible (...)
    2 citations
  4. What responsibility gaps are and what they should be. Herman Veluwenkamp - 2025 - Ethics and Information Technology 27 (1):1-13.
    Responsibility gaps traditionally refer to scenarios in which no one is responsible for harm caused by artificial agents, such as autonomous machines or collective agents. By carefully examining the different ways this concept has been defined in the social ontology and ethics of technology literature, I argue that our current concept of responsibility gaps is defective. To address this conceptual flaw, I argue that the concept of responsibility gaps should be revised by distinguishing it into two more (...)
    1 citation
  5. Responsibility gaps and the reactive attitudes. Fabio Tollon - 2022 - AI and Ethics 1 (1).
    Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this (...)
    3 citations
  6. Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution. Benjamin H. Lang, Sven Nyholm & Jennifer Blumenthal-Barby - 2023 - Digital Society 2 (3):52.
    As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits (...)
    4 citations
  7. Collective Responsibility Gaps. Stephanie Collins - 2019 - Journal of Business Ethics 154 (4):943-954.
    Which kinds of responsibility can we attribute to which kinds of collective, and why? In contrast, which kinds of collective responsibility can we not attribute—which kinds are ‘gappy’? This study provides a framework for answering these questions. It begins by distinguishing between three kinds of collective and three kinds of responsibility. It then explains how gaps—i.e. cases where we cannot attribute the responsibility we might want to—appear to arise within each type of collective responsibility. It (...)
    23 citations
  8. The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
    228 citations
  9. Responsibility Gaps. Michael Da Silva - 2024 - Philosophy Compass 19 (9-10):e70002.
    Responsibility gaps arise when there is a mismatch between the amount of responsibility that can be attributed to any person or collection of persons on leading accounts of moral responsibility and the amount that robust intuitions suggest should be allocated to someone in a case. Claimed responsibility gaps arise in numerous philosophical debates, including those concerning government, corporate, and other forms of group agency and new technologies and those concerning theoretical issues in the philosophy of (...). This work is an opinionated introduction to and overview of recent work on responsibility gaps. It outlines and evaluates paradigmatic responsibility gap cases and ways of understanding the phenomenon as well as the existence conditions and moral status of and possible responses to responsibility gaps. It thereby contributes to ongoing work in the philosophy of responsibility and several applied domains.
  10. Responsibility Gaps and Technology: Old Wine in New Bottles? Ann-Katrien Oimann & Fabio Tollon - 2025 - Journal of Applied Philosophy 42 (1):337-356.
    Recent work in philosophy of technology has come to bear on the question of responsibility gaps. Some authors argue that the increase in the autonomous capabilities of decision-making systems makes it impossible to properly attribute responsibility for AI-based outcomes. In this article we argue that one important, and often neglected, feature of recent debates on responsibility gaps is how this debate maps on to old debates in responsibility theory. More specifically, we suggest that one of the (...)
  11. There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while (...)
    51 citations
  12. Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the (...)
    29 citations
  13. Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain-computer interfaces. Gerd Grübler - 2011 - AI and Society 26 (4):377-382.
    The article shows where the argument of responsibility-gap regarding brain-computer interfaces acquires its plausibility from, and suggests why the argument is not plausible. As a way of an explanation, a distinction between the descriptive third-person perspective and the interpretative first-person perspective is introduced. Several examples and metaphors are used to show that ascription of agency and responsibility does not, even in simple cases, require that people be in causal control of every individual detail involved in an event. Taking (...)
    12 citations
  14. Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of (...)
    34 citations
  15. Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Brandon Ferlito, Seppe Segers, Michiel De Proost & Heidi Mertes - 2024 - Science and Engineering Ethics 30 (4):1-14.
    Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a (...)
  16. The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the (...)
    7 citations
  17. (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - 2024 - Science and Engineering Ethics 30 (6):1-19.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness (...)
  18. The Responsibility Gap in Corporate Crime. Samuel W. Buell - 2018 - Criminal Law and Philosophy 12 (3):471-491.
    In many cases of criminality within large corporations, senior management does not commit the operative offense—or conspire or assist in it—but nonetheless bears serious responsibility for the crime. That responsibility can derive from, among other things, management’s role in cultivating corporate culture, in failing to police effectively within the firm, and in accepting lavish compensation for taking the firm’s reins. Criminal law does not include any doctrinal means for transposing that form of responsibility into punishment. Arguments for (...)
    4 citations
  19. Autonomous weapon systems and responsibility gaps: a taxonomy. Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous (...)
    5 citations
  20. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, (...)
    5 citations
  21. Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable (...)
    16 citations
  22. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
    1 citation
  23. Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in (...)
    1 citation
  24. The risks of autonomous machines: from responsibility gaps to control gaps. Frank Hindriks & Herman Veluwenkamp - 2023 - Synthese 201 (1):1-17.
    Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because they are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers or regulators. The (...)
    15 citations
  25. Can we Bridge AI’s responsibility gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some (...)
    14 citations
  26. Why Command Responsibility May (not) Be a Solution to Address Responsibility Gaps in LAWS. Ann-Katrien Oimann - 2024 - Criminal Law and Philosophy 18 (3):765-791.
    The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility leads to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the (...)
    1 citation
  27. When to Fill Responsibility Gaps: A Proposal. Michael Da Silva - forthcoming - Journal of Value Inquiry:1-26.
  28. Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems. Sadjad Soltanzadeh, Jai Galliott & Natalia Jevglevskaja - 2020 - Science and Engineering Ethics 26 (5):2693-2708.
    Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the (...)
    3 citations
  29. "Responsibility" Plus "Gap" Equals "Problem". Marc Champagne - 2025 - In Johanna Seibt, Peter Fazekas & Oliver Santiago Quick, Social Robots with AI: Prospects, Risks, and Responsible Methods. Amsterdam: IOS Press. pp. 244–252.
    Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense.
  30. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose (...)
    2 citations
  31. An Ethical Study on the Responsibility Gap of AI Robots. 김상득 - 2024 - Journal of Korean Philosophical Society 169:65-90.
    This paper aims to defend an account of shared responsibility as an ethical solution to the question of the responsibility gap created by autonomous AI robots, through a philosophical discussion of that question. To this end, it first clarifies the responsibility gap as raised by A. Matthias and then critically examines three approaches to it: the human-centered approach (instrumentalism 2.0), the technology-centered approach (machine ethics), and the constructivist approach (hybrid responsibility). This critical discussion shows that the concept of agency plays a central role in resolving the question of the responsibility gap. On the basis of an ‘extended theory of agency,’ I argue that autonomous AI robots also possess moral agency and are therefore subjects of moral (...)
  32. Uncovering the gap: challenging the agential nature of AI responsibility problems. Joan Llorca Albareda - 2025 - AI and Ethics:1-14.
    In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunctive between the two problems. Either we reduce individual responsibility gaps to the many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, (...)
  33. Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences (...)
    6 citations
  34. Reasonable partiality for compatriots and the global responsibility gap. Robert van der Veen - 2008 - Critical Review of International Social and Political Philosophy 11 (4):413-432.
    According to David Miller, duties of domestic national and global justice are of equal importance, given that nationhood is both intrinsically valuable and not inherently an unjust way of excluding outsiders. The consequence of this ‘split-level’ view is that it may be reasonable to prioritize domestic justice in some cases, while letting demands of global justice take precedence in others, depending on a weighting model which seeks to account for the relative urgency of domestic and global claims and the extent to (...)
    4 citations
  35. Lethal Autonomous Weapon Systems and Responsibility Gaps. Anne Gerdes - 2018 - Philosophy Study 8 (5).
    5 citations
  36. The Effect of Corporate Social Responsibility Compatibility and Authenticity on Brand Trust and Corporate Sustainability Management: For Korean Cosmetics Companies. Su-Hee Lee & Gap-Yeon Jeong - 2022 - Frontiers in Psychology 13.
    The purpose of this study is to examine whether corporate social responsibility activities perceived by consumers affect brand trust and corporate sustainability management. In other words, this study tried to examine whether the compatibility and authenticity of CSR influences brand trust, thereby affecting CSM including economic viability, environmental soundness, and social responsibility. To measure this, an empirical analysis was conducted on 479 consumers who had experience purchasing products from cosmetic companies that are carrying out CSR. As a result (...)
  37. Air Canada’s chatbot illustrates persistent agency and responsibility gap problems for AI. Joshua L. M. Brand - forthcoming - AI and Society:1-3.
  38. Gap: Social Responsibility Campaign or Window Dressing? Michelle Amazeen - 2011 - Journal of Business Ethics 99 (2):167-182.
    This study interrogates the Gap campaign from a political economic perspective to determine whether it goes beyond merely touting the virtuous line of social responsibility. Critics cite the irony of capitalist-based solutions that perpetuate the inequities they are trying to address. Others suggest the aid generated is problematic in and of itself because it keeps Africa from becoming self-sufficient. This research contends the purpose of the Gap’s participation is genuine, going beyond window dressing and the surface level benefit of (...)
    5 citations
  39. Reasonable partiality for compatriots and the global responsibility gap. Robert van der Veen - 2008 - Critical Review of International Social and Political Philosophy 11 (4):413-432.
  40. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in (...)
    18 citations
  41. The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge to human responsibility, as long as humans can control them and update them. He argues that the agency (...)
    22 citations
  42. Group Responsibility. Christian List - 2022 - In Dana Kay Nelkin & Derk Pereboom, The Oxford Handbook of Moral Responsibility. New York: Oxford University Press.
    Are groups ever capable of bearing responsibility, over and above their individual members? This chapter discusses and defends the view that certain organized collectives – namely, those that qualify as group moral agents – can be held responsible for their actions, and that group responsibility is not reducible to individual responsibility. The view has important implications. It supports the recognition of corporate civil and even criminal liability in our legal systems, and it suggests that, by recognizing group (...)
    2 citations
  43. John Dewey's Pragmatic Ethics and “Manipulation”: A Response to Walter Feinberg and Clarence Karier. Pyong Gap Min - 1979 - Educational Theory 29 (4):311-323.
  44. Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (...)
    1 citation
  45. Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it (...)
    50 citations
  46. AI and Responsibility: No Gap, but Abundance. Maximilian Kiener - 2025 - Journal of Applied Philosophy 42 (1):357-374.
    The best-performing AI systems, such as deep neural networks, tend to be the ones that are most difficult to control and understand. For this reason, scholars worry that the use of AI would lead to so-called responsibility gaps, that is, situations in which no one is morally responsible for the harm caused by AI, because no one satisfies the so-called control condition and epistemic condition of moral responsibility. In this article, I acknowledge that there is a significant challenge (...)
  47. Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, (...)
    42 citations
  48. Response latency as a function of size of gap in the elevated runway. David Birch, L. Thomas Clifford & Julie Butterfield - 1961 - Journal of Experimental Psychology 62 (2):179.
  49. Climate change and individual responsibility. Agency, moral disengagement and the motivational gap. Wouter Peeters, Andries De Smet, Lisa Diependaele, Sigrid Sterckx, R. H. McNeal & A. D. Smet - 2015 - Palgrave MacMillan.
    If climate change represents a severe threat to humankind, why then is response to it characterized by inaction at all levels? The authors argue there are two complementary explanations for the lack of motivation. First, our moral judgment system appears to be unable to identify climate change as an important moral problem and there are pervasive doubts about the agency of individuals. This explanation, however, is incomplete: Individual emitters can effectively be held morally responsible for their luxury emissions. Second, doubts (...)
    7 citations
  50. Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques (...)
    1 citation
Results 1–50 of 983