Results for 'Reactive attitudes Responsibility gaps Artificial intelligence'

963 found
  1. Responsibility gaps and the reactive attitudes. Fabio Tollon - 2022 - AI and Ethics 1 (1).
    Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of (...)
    3 citations
  2. Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We (...)
    3 citations
  3. Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration. Lee Hadlington, Maria Karanika-Murray, Jane Slater, Jens Binder, Sarah Gardner & Sarah Knight - forthcoming - AI and Society:1-14.
    There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. (...)
    1 citation
  4. Reactive Attitudes and AI-Agents – Making Sense of Responsibility and Control Gaps. Andrew P. Rebera - 2024 - Philosophy and Technology 37 (4):1-20.
    Responsibility gaps occur when autonomous machines cause harms for which nobody can be justifiably held morally responsible. The debate around responsibility gaps has focused primarily on the question of responsibility, but other approaches focus on the victims of the associated harms. In this paper I consider how the victims of ‘AI-harm’—by which I mean harms implicated in responsibility gap cases and caused by AI-agents—can make sense of what has happened to them. The reactive (...)
  5. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set (...)
    71 citations
  6. Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility (...). This has been the cause of much concern. In this article, I propose a more optimistic view on artificial intelligence, raising two challenges for responsibility gap pessimists. First, proponents of responsibility gaps must say more about when responsibility gaps occur. Once we accept a difficult-to-reject plausibility constraint on the emergence of such gaps, it becomes apparent that the situations in which responsibility gaps occur are unclear. Second, assuming that responsibility gaps occur, more must be said about why we should be concerned about such gaps in the first place. I proceed by defusing what I take to be the two most important concerns about responsibility gaps, one relating to the consequences of responsibility gaps and the other relating to violations of jus in bello.
    22 citations
  7. Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be (...)
  8. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility (...)
    18 citations
  9. Artificial intelligence and responsibility. Lode Lauwaert - 2021 - AI and Society 36 (3):1001-1009.
    In the debate on whether to ban LAWS, moral arguments are mainly used. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with the responsibility gap. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, Sparrow believes that this leads to the conclusion that LAWS should be prohibited. In this article, it will be shown that Sparrow’s argumentation for both (...)
    5 citations
  10. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach. Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of (...)
    29 citations
  11. An Attitude Towards an Artificial Soul? Responses to the “Nazi Chatbot”. Ondřej Beran - 2017 - Philosophical Investigations 41 (1):42-69.
    The article discusses the case of Microsoft's Twitter chatbot Tay that “turned into a Nazi” after less than 24 hours from its release on the Internet. The first section presents a brief recapitulation of Alan Turing's proposal for a test for artificial intelligence and the way it influenced subsequent discussions in the philosophy of mind. In the second section, I offer a few arguments appealing for caution regarding the identification of an accomplished chatbot as a thinking being. These (...)
    2 citations
  12. Responsibility assignment won’t solve the moral issues of artificial intelligence. Jan-Hendrik Heinrichs - 2022 - AI and Ethics 2 (4):727-736.
    Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article will draw this presupposition into doubt and show how reference to responsibility obscures the complexity of moral situations and (...)
     
    1 citation
  13. Organisational responses to the ethical issues of artificial intelligence. Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in (...)
    10 citations
  14. Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils. Pradeep Paraman & Sanmugam Anamalah - 2023 - AI and Society 38 (2):595-611.
    The justification and rationality of this paper is to present some fundamental principles, theories, and concepts that we believe moulds the nucleus of a good artificial intelligence (AI) society. The morally accepted significance and utilitarian concerns that stems from the inception and realisation of an AI’s structural foundation are displayed in this study. This paper scrutinises the structural foundation, fundamentals, and cardinal righteous remonstrations, as well as the gaps in mechanisms towards novel prospects and perils in determining (...)
    3 citations
  15. Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs (...)
  16. Intentionality gap and preter-intentionality in generative artificial intelligence. Roberto Redaelli - forthcoming - AI and Society:1-8.
    The emergence of generative artificial intelligence, such as large language models and text-to-image models, has had a profound impact on society. The ability of these systems to simulate human capabilities such as text writing and image creation is radically redefining a wide range of practices, from artistic production to education. While there is no doubt that these innovations are beneficial to our lives, the pervasiveness of these technologies should not be underestimated, as they raise increasingly pressing ethical questions that (...)
  17. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    35 citations
  18. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, (...)
    12 citations
  19. Truth, Objectivity, and Emotional Caring: Filling in the Gaps of Haugeland's Existentialist Ontology. Bennett W. Helm - 2017 - In Zed Adams (ed.), Truth & Understanding: Essays in Honor of John Haugeland. pp. 213-41.
    In a remarkable series of papers, Haugeland lays out what is both a striking interpretation of Heidegger and a compelling account of objectivity and truth. Central to his account is a notion of existential commitment: a commitment to insist that one's understanding of the world succeeds in making sense of the phenomena and so potentially to change or give up on that understanding in the face of apparently impossible phenomena. Although Haugeland never gives a clear account of existential commitment, he (...)
  20. Responsibility Gaps and Technology: Old Wine in New Bottles? Ann-Katrien Oimann & Fabio Tollon - forthcoming - Journal of Applied Philosophy.
    Recent work in philosophy of technology has come to bear on the question of responsibility gaps. Some authors argue that the increase in the autonomous capabilities of decision-making systems makes it impossible to properly attribute responsibility for AI-based outcomes. In this article we argue that one important, and often neglected, feature of recent debates on responsibility gaps is how this debate maps on to old debates in responsibility theory. More specifically, we suggest that one (...)
  21. Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution. Benjamin H. Lang, Sven Nyholm & Jennifer Blumenthal-Barby - 2023 - Digital Society 2 (3):52.
    As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human (...)
    3 citations
  22. The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
    207 citations
  23. Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists. Lasse Benzinger, Jelena Epping, Frank Ursin & Sabine Salloch - 2024 - BMC Medical Ethics 25 (1):1-10.
    Background Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing (...)
    1 citation
  24. Autonomous Artificial Intelligence and Liability: a Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill (...)
    1 citation
  25. The social and ethical impacts of artificial intelligence in agriculture: mapping the agricultural AI literature. Mark Ryan - 2023 - AI and Society 38 (6):2473-2485.
    This paper will examine the social and ethical impacts of using artificial intelligence (AI) in the agricultural sector. It will identify what are some of the most prevalent challenges and impacts identified in the literature, how this correlates with those discussed in the domain of AI ethics, and are being implemented into AI ethics guidelines. This will be achieved by examining published articles and conference proceedings that focus on societal or ethical impacts of AI in the agri-food sector, (...)
    7 citations
  26. The Ethical Responsibilities of Researchers in Light of the Technological Advancement and Artificial Intelligence Methods: A Case Study of Management Ph.D. Researchers at Midocean University. Ahmed Farouk Aly Mohammed, Sarah Homoud Al-Himali Al-Kahtani & Sarah Mubarak Mohammed Al-Dossary - forthcoming - Evolutionary Studies in Imaginative Culture:194-218.
    This study aimed to assess the integration, ethical considerations, and governance of artificial intelligence (AI) within the PhD programs at Midocean University. It specifically sought to understand PhD researchers' perceptions and attitudes towards AI and identify areas for enhancement in AI-related policies and educational initiatives. A descriptive analytical approach was adopted, utilizing an electronic questionnaire distributed to 105 PhD researchers, with 54 completing the survey. The questionnaire was designed to measure various aspects of AI usage, ethical concerns, (...)
  27. Artificial intelligence risks, attention allocation and priorities. Aorigele Bao & Yi Zeng - 2024 - Journal of Medical Ethics 50 (12):822-823.
    Jecker et al critically analysed the predominant focus on existential risk (X-Risk) in artificial intelligence (AI) ethics, advocating for a balanced communication of AI’s risks and benefits and urging serious consideration of other urgent ethical issues alongside X-Risk.1 Building on this analysis, we argue for the necessity of acknowledging the unique attention-grabbing attributes of X-Risk and leveraging these traits to foster a comprehensive focus on AI ethics. First, we need to consider a discontinuous situation that is overlooked in (...)
    2 citations
  28. What can science fiction tell us about the future of artificial intelligence policy? Andrew Dana Hudson, Ed Finn & Ruth Wylie - 2023 - AI and Society 38 (1):197-211.
    This paper addresses the gap between familiar popular narratives describing Artificial Intelligence (AI), such as the trope of the killer robot, and the realistic near-future implications of machine intelligence and automation for technology policy and society. The authors conducted a series of interviews with technologists, science fiction writers, and other experts, as well as a workshop, to identify a set of key themes relevant to the near future of AI. In parallel, they led the analysis of almost (...)
    2 citations
  29. AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one (...)
  30. Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. F. Funer, S. Tinnemeyer, W. Liedtke & S. Salloch - 2024 - BMC Medical Ethics 25 (1):1-13.
    Background Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relate to the levels of understanding and explicability needed for ensuring responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating (...)
  31. There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while (...)
    47 citations
  32. Investigating the role of artificial intelligence in the US criminal justice system. Ace Vo & Miloslava Plachkinova - 2023 - Journal of Information, Communication and Ethics in Society 21 (4):550-567.
    Purpose The purpose of this study is to examine public perceptions and attitudes toward using artificial intelligence (AI) in the US criminal justice system. Design/methodology/approach The authors took a quantitative approach and administered an online survey using the Amazon Mechanical Turk platform. The instrument was developed by integrating prior literature to create multiple scales for measuring public perceptions and attitudes. Findings The findings suggest that despite the various attempts, there are still significant perceptions of sociodemographic bias (...)
  33. Artists or art thieves? Media use, media messages, and public opinion about artificial intelligence image generators. Paul R. Brewer, Liam Cuddy, Wyatt Dawson & Robert Stise - forthcoming - AI and Society:1-11.
    This study investigates how patterns of media use and exposure to media messages are related to attitudes about artificial intelligence (AI) image generators. In doing so, it builds on theoretical accounts of media framing and public opinion about science and technology topics, including AI. The analyses draw on data from a survey of the US public (N = 1,035) that included an experimental manipulation of exposure to tweets framing AI image generators in terms of real art, artists’ (...)
  34. Can we Bridge AI’s responsibility gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at (...)
    11 citations
  35. Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Brandon Ferlito, Seppe Segers, Michiel De Proost & Heidi Mertes - 2024 - Science and Engineering Ethics 30 (4):1-14.
    Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome (...)
  36. Evaluation of Knowledge, Attitude, and Practice (KAP) of Artificial Intelligence in Endodontics and Implantology Education: A Cross-sectional Study. Majed Mohsen Alqahtani, Rakan Ibrahim Qutob, Bashayer Mansour Bukhari, Wiam Talal Sagr, Majed Abdulrahman Alshehri, Abeer Abdulrahman Alhano, Aqab Theyab S. Almutairi & Mohammed Hassan Ahmed Rizq - forthcoming - Evolutionary Studies in Imaginative Culture:53-60.
    Purpose: The purposes of this study were to assess the knowledge, attitude, and practice (KAP) of AI in Endodontics and implantology education among dental professionals and dental students in the Kingdom of Saudi Arabia. Materials and methods: The present study is a descriptive cross-sectional online survey carried out among dental students and dental professionals in the Kingdom of Saudi Arabia. A self-structured, close-ended questionnaire consisting of 17 questions was administered. (...)
  37. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design (...)
    4 citations
  38. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in (...)
    1 citation
  39. Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to (...)
    1 citation
  40. “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. Sigrid Sterckx, Tamara Leune, Johan Decruyenaere, Wim Van Biesen & Daan Van Cauwenberge - 2022 - BMC Medical Ethics 23 (1):1-14.
    Research regarding the drivers of acceptance of clinical decision support systems by physicians is still rather limited. The literature that does exist, however, tends to focus on problems regarding the user-friendliness of CDSS. We have performed a thematic analysis of 24 interviews with physicians concerning specific clinical case vignettes, in order to explore their underlying opinions and attitudes regarding the introduction of CDSS in clinical practice, to allow a more in-depth analysis of factors underlying acceptance of CDSS. We identified (...)
    4 citations
  41. The Epistemological Consequences of Artificial Intelligence, Precision Medicine, and Implantable Brain-Computer Interfaces. Ian Stevens - 2024 - Voices in Bioethics 10.
    ABSTRACT I argue that this examination and appreciation for the shift to abductive reasoning should be extended to the intersection of neuroscience and novel brain-computer interfaces too. This paper highlights the implications of applying abductive reasoning to personalized implantable neurotechnologies. Then, it explores whether abductive reasoning is sufficient to justify insurance coverage for devices absent widespread clinical trials, which are better applied to one-size-fits-all treatments. INTRODUCTION In contrast to the classic model of randomized-control trials, often with a large number of (...)
    1 citation
  42. The Human Roots of Artificial Intelligence: A Commentary on Susan Schneider's Artificial You. Inês Hipólito - 2024 - Philosophy East and West 74 (2):297-305.
    In lieu of an abstract, here is a brief excerpt of the content: The Human Roots of Artificial Intelligence: A Commentary on Susan Schneider's Artificial You, by Inês Hipólito. “Technologies are not mere tools waiting to be picked up and used by human agents, but rather are material-discursive practices that play a role in shaping and co-constituting the world in which we live.” (Karen Barad) Introduction: Susan Schneider's book Artificial You: AI and the Future of Your Mind presents a compelling and bold argument (...)
  43. Developments in Intellectual Property Strategy: The Impact of Artificial Intelligence, Robotics and New Technologies. Nadia Naim (ed.) - 2024 - Springer Verlag.
    Research in the area of intellectual property (IP) is increasingly relevant to the rapidly growing artificial intelligence (AI) and robotics industries, affecting the legal, business, manufacturing, and healthcare sectors. This contributed volume aims to develop our understanding of the legal and ethical challenges posed by artificial intelligence and robotics technologies and the appropriate intellectual property based legal and regulatory responses. It provides a philosophical and legal framework for considering concepts and principles that relate to the development (...)
  44. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called (...)
    17 citations
  45. Bernard Lonergan and a Nouvelle théologie for Artificial Intelligence. Steven Umbrello - forthcoming - The Lonergan Review.
    This paper explores the intersection of Bernard Lonergan’s philosophy of intentional human consciousness and the evolving discourse on artificial intelligence (AI). By understanding the distinctions between human cognition and AI capabilities, we can develop a Nouvelle théologie that addresses the ethical and theological dimensions of AI’s integration into society. This approach not only highlights the unique human capacities for self-reflection and moral reasoning but also guides the deliberate and responsible design of AI to promote human flourishing and the (...)
     
  46. Sinful AI? Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on PF Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside of the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues (...)
  47. Air Canada’s chatbot illustrates persistent agency and responsibility gap problems for AI. Joshua L. M. Brand - forthcoming - AI and Society:1-3.
  48. From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies. Bart Custers, Henning Lahmann & Benjamyn I. Scott - forthcoming - AI and Society:1-16.
    Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of (...)
  49. Doing versus saying: responsible AI among large firms. Jacques Bughin - forthcoming - AI and Society:1-13.
    Responsible Artificial Intelligence (RAI) is a subset of the ethics associated with the use of artificial intelligence, and its importance will only increase with the recent advent of new regulatory frameworks. However, while many firms have announced the establishment of AI governance rules, there is an important gap in understanding whether and why these announcements are being implemented or remain “decoupled” from operations. We assess how large global firms have so far implemented RAI, and the antecedents to (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
  50. Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences (...)
    4 citations
1 — 50 / 963