Results for 'Explainable artificial intelligence'

974 results found
  1. Is explainable artificial intelligence intrinsically valuable? Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are often asked and addressed simultaneously. The first and most obvious is the ‘how’ question: some version of ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth (...)
    9 citations
  2. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together (...)
    2 citations
  3. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), a human in the loop is often needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue (...)
    18 citations
  4. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model. Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black box. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In intrusion detection systems (IDS), the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. (...)
    1 citation
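    Illustrative note (not from the paper above): the entry describes using a decision tree so that human experts can inspect the evidence behind an intrusion alert. A minimal sketch of that idea, assuming scikit-learn and entirely synthetic data with invented feature names (packet_rate, failed_logins, bytes_out), is:

```python
# Hedged sketch, not the paper's model or data: fit a shallow decision tree on
# synthetic "network traffic" features and print its rules for human review.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["packet_rate", "failed_logins", "bytes_out"]  # hypothetical features

X = rng.normal(size=(500, 3))
y = ((X[:, 0] > 1.0) & (X[:, 1] > 0.5)).astype(int)  # synthetic "intrusion" label

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as if/then rules that a domain expert can
# audit, which is the kind of transparency the abstract appeals to.
print(export_text(clf, feature_names=feature_names))
```
    The readable rule list, rather than predictive accuracy, is the point here: each split names a feature and a threshold that an analyst can accept or challenge.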
  5. Explainable Artificial Intelligence in Data Science. Joaquín Borrego-Díaz & Juan Galán-Páez - 2022 - Minds and Machines 32 (3):485-531.
    A widespread need to explain the behavior and outcomes of AI-based systems has emerged due to their ubiquitous presence, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing transfer of decision-making control to this kind of system (or, at least, its use to assist executive stakeholders) already affects many sensitive realms (such as politics, the social sciences, or law). The decision-making power handover (...)
    1 citation
  6. Information-seeking dialogue for explainable artificial intelligence: Modelling and analytics. Ilia Stepin, Katarzyna Budzynska, Alejandro Catala, Martín Pereira-Fariña & Jose M. Alonso-Moral - 2024 - Argument and Computation 15 (1):49-107.
    Explainable artificial intelligence has become a vitally important research field aiming, among other tasks, to justify predictions made by intelligent classifiers automatically learned from data. Importantly, the efficiency of automated explanations may be undermined if the end user does not have sufficient domain knowledge or lacks information about the data used for training. To address the issue of effective explanation communication, we propose a novel information-seeking explanatory dialogue game following the most recent requirements for automatically generated explanations. Further, (...)
    1 citation
  7. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis (...)
    66 citations
  8. Explainable artificial intelligence and the social sciences: a plea for interdisciplinary research. Wim De Mulder - forthcoming - AI and Society:1-20.
    Recent research emphasizes the complexity of providing useful explanations of computer-generated output. In developing an explanation-generating tool, the computer scientist should take a user-centered perspective, while taking into account the user’s susceptibility to certain biases. The purpose of this paper is to expand the research results on explainability from the social sciences, and to indicate how these results are relevant to the field of XAI. This is done through the presentation of two surveys to university students. The analysis of the (...)
  9. Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting (...)
    16 citations
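    Illustrative note (not from the paper above): the entry refers to post-hoc analytic techniques from Explainable AI. As a generic sketch of one such technique, assuming scikit-learn and synthetic data (the paper's own methods and datasets are not shown here), permutation feature importance scores how much shuffling each input degrades an otherwise opaque model:

```python
# Hedged sketch of a generic post-hoc XAI technique (not the paper's method):
# permutation importance on an opaque model trained on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

# Features whose shuffling hurts accuracy most are the ones the model relies on.
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```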
  10. The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    39 citations
  11. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast and spread out across multiple, largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of (...)
    24 citations
  12. The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence. Alison Duncan Kerr & Kevin Scharp - 2022 - Minds and Machines 32 (3):585-611.
    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement, Explainable Artificial Intelligence (XAI), to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI: it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans (...)
    1 citation
  13. Subjectivity of Explainable Artificial Intelligence. Александр Николаевич Райков - 2022 - Russian Journal of Philosophical Sciences 65 (1):72-90.
    The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more (...)
    2 citations
  14. Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach. Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific (...)
  15. Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  16. A Means-End Account of Explainable Artificial Intelligence. Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means (...)
    3 citations
  17. Special issue on Explainable Artificial Intelligence. Tim Miller, Robert Hoffman, Ofra Amir & Andreas Holzinger - 2022 - Artificial Intelligence 307 (C):103705.
    1 citation
  18. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, (...)
    65 citations
  19. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
    90 citations
  20. Artificial intelligence & games: Should computational psychology be revalued? Marco Ernandes - 2005 - Topoi 24 (2):229-242.
    The aims of this paper are threefold: To show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. This paper (...)
    1 citation
  21. Why artificial intelligence needs sociology of knowledge: parts I and II. Harry Collins - forthcoming - AI and Society:1-15.
    Recent developments in artificial intelligence based on neural nets—deep learning and large language models which together I refer to as NEWAI—have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, examples such as ChatGPT, which is a ‘large language model’, have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I try to explain why (...)
    2 citations
  22. Can artificial intelligence explain age changes in literary creativity? Carolyn Adams-Price - 1994 - Behavioral and Brain Sciences 17 (3):532-532.
  23. Artificial intelligence: a “promising technology”. Hartmut Hirsch-Kreinsen - forthcoming - AI and Society:1-12.
    This paper addresses the question of how the ups and downs in the development of artificial intelligence (AI) since its inception can be explained. It focuses on the development of artificial intelligence in Germany since the 1970s, and particularly on its current dynamics. An assumption is made that a mere reference to rapid advances in information technologies and the various methods and concepts of artificial intelligence in recent decades cannot adequately explain these dynamics, because (...)
    1 citation
  24. Against explainability requirements for ethical artificial intelligence in health care. Suzanne Kawamleh - 2023 - AI and Ethics 3 (3):901-916.
    3 citations
  25. Enaction-Based Artificial Intelligence: Toward Co-evolution with Humans in the Loop. Pierre Loor, Kristen Manac’h & Jacques Tisseau - 2009 - Minds and Machines 19 (3):319-343.
    This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions which it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we shall review previous works regarding this issue in terms of artificial life and robotics. We shall focus on the lack of recognition of co-evolution at the heart of these approaches. (...)
  26. Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said (...)
    51 citations
  27. Can Artificial Intelligence Think Without the Unconscious? Derya Ölçener - 2020.
    Today, humanity is trying to turn the artificial intelligence that it produces into natural intelligence. Although this effort is technologically exciting, it often raises ethical concerns. Therefore, the intellectual ability of artificial intelligence will always bring new questions. Although there have been significant developments in the consciousness of artificial intelligence, the issue of consciousness must be fully explained in order to complete this development. When consciousness is fully understood by human beings, the subject (...)
  28. Artificial Intelligence as Philosophy. Giovanni Landi (ed.) - 2021 - Chișinău, Moldavia: Eliva Press.
    Artificial intelligence is not and has never been a technology. It began with Turing's famous "can machines think?", a philosophical question that too many were quick to transform into a more prosaic "can thought be mechanized?" Only in this perspective can the history and the technological success of AI be duly explained and understood, one of the tasks this book engages in. It is important for philosophers to take AI seriously, and for AI researchers to see their (...)
  29. Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in (...)
    4 citations
  30. Artificial Intelligence, Robots and the Ethics of the Future. Constantin Vica & Cristina Voinea - 2019 - Revue Roumaine de Philosophie 63 (2):223–234.
    The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into the history of (...)
  31. Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen (...)
    1 citation
  32. Deep learning models and the limits of explainable artificial intelligence. Jens Christian Bjerring, Jakob Mainz & Lauritz Munch - 2025 - Asian Journal of Philosophy 4 (1):1-26.
    It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models’ decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on which (...)
  33. Artificial intelligence assistants and risk: framing a connectivity risk narrative. Martin Cunneen, Martin Mullins & Finbarr Murphy - 2020 - AI and Society 35 (3):625-634.
    Our social relations are changing: we are no longer just talking to each other, but also to artificial intelligence (AI) assistants. We claim that AI assistants present a new form of digital connectivity risk, and a key aspect of this risk phenomenon is user risk awareness (or lack thereof) regarding AI assistant functionality. AI assistants present a significant societal risk phenomenon amplified by the global scale of the products and their increasing use in (...)
  34. Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems (...)
    1 citation
  35. Artificial intelligence in medicine and the disclosure of risks. Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI's implicit assumptions and an individual patient's background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient's informed consent or violates a more general obligation (...)
    10 citations
  36. (1 other version) Can Artificial Intelligence Make Art? Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interactions.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human painting as art to roughly the same extent. However, people are much less (...)
    5 citations
  37. A sustainable artificial intelligence facilities management outsourcing relationships system: Case studies. Ka Leung Lok, Albert So, Alex Opoku & Charles Chen - 2022 - Frontiers in Psychology 13.
    The purpose of this article is to validate the published artificial intelligence facilities management (FM) outsourcing relationships system against real business cases in the working environment. The research aims to inspire modern FM professionals in different industries with challenging and innovative concepts about FM outsourcing relationships between facilities owners and service providers. First, it briefly introduces the theory of the FM outsourcing relationships system and how it can help FM seniors and strategists to design their (...)
  38. Natural and Artificial Intelligence: A Comparative Analysis of Cognitive Aspects. Francesco Abbate - 2023 - Minds and Machines 33 (4):791-815.
    Moving from a behavioral definition of intelligence, which describes it as the ability to adapt to the surrounding environment and deal effectively with new situations (Anastasi, 1986), this paper explains to what extent the performance obtained by ChatGPT in the linguistic domain can be considered intelligent behavior and to what extent it cannot. It also explains in what sense the hypothesis of decoupling between cognitive and problem-solving abilities, proposed by Floridi (2017) and Floridi and Chiriatti (2020), should be (...)
    1 citation
  39. Heuristic evaluation functions in artificial intelligence search algorithms. Richard E. Korf - 1995 - Minds and Machines 5 (4):489-498.
    We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an (...)
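    Illustrative note (not from the paper above): the entry discusses numeric heuristic evaluation functions for single-agent path-finding problems such as the Fifteen Puzzle. A minimal sketch of the classic Manhattan-distance heuristic, the kind of function an A*-style search would use to order its frontier, is:

```python
# Hedged sketch: Manhattan-distance heuristic for the Fifteen Puzzle,
# a standard numeric evaluation function for single-agent search.
def manhattan_distance(state, size=4):
    """state: tuple of length size*size; 0 is the blank; goal is 1..15 then 0."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue  # the blank tile does not contribute to the estimate
        goal_index = tile - 1
        total += abs(index // size - goal_index // size)  # row distance
        total += abs(index % size - goal_index % size)    # column distance
    return total

# Example: tile 15 is one move out of place, so the heuristic returns 1.
almost_solved = tuple(range(1, 15)) + (0, 15)
print(manhattan_distance(almost_solved))
```
    Because this estimate never overestimates the true number of moves, an A* search using it finds optimal solutions, an admissibility property commonly discussed for such evaluation functions.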
  40. What Kind of Artificial Intelligence Should We Want for Use in Healthcare Decision-Making Applications? Jordan Wadden - 2021 - Canadian Journal of Bioethics / Revue canadienne de bioéthique 4 (1):94-100.
    The prospect of including artificial intelligence (AI) in clinical decision-making is an exciting next step for some areas of healthcare. This article provides an analysis of the available kinds of AI systems, focusing on macro-level characteristics. This includes examining the strengths and weaknesses of opaque systems and fully explainable systems. Ultimately, the article argues that “grey box” systems, which include some combination of opacity and transparency, ought to be used in healthcare settings.
    4 citations
  41. Artificial Intelligence and Creativity. Terry Dartnall (ed.) - 1993 - Springer.
    Creativity is one of the least understood aspects of intelligence and is often seen as 'intuitive' and not susceptible to rational enquiry. Recently, however, there has been a resurgence of interest in the area, principally in artificial intelligence and cognitive science, but also in psychology, philosophy, computer science, logic, mathematics, sociology, and architecture and design. This volume brings this work together and provides an overview of this rapidly developing field. It addresses a range of issues. Can computers (...)
    4 citations
  42. The naturalness of artificial intelligence from the evolutionary perspective. Vladimír Havlík - 2019 - AI and Society 34 (4):889-898.
    Current discussions on artificial intelligence, in both the theoretical and practical realms, contain a fundamental lack of clarity regarding the nature of artificial intelligence, perhaps due to the fact that the distinction between natural and artificial appears, at first sight, both intuitive and evident. Is AI something unnatural, non-human and therefore dangerous to humanity, or is it only a continuation of man’s natural tendency towards creativity? It is not surprising that from the philosophical point of (...)
  43. Influence of Artificial Intelligence and Robotics Awareness on Employee Creativity in the Hotel Industry. Hui Wang, Han Zhang, Zhezhi Chen, Jian Zhu & Yue Zhang - 2022 - Frontiers in Psychology 13.
    The current literature on artificial intelligence and robotics awareness (AIRA) has focused on its dark side. Accordingly, this study sheds light on the positive effect of AIRA on employee creativity by exploring how and when hotel employees may engage in proactive behavior when facing the threat of AI and robotics, and thereby stimulate creativity. Based on work adjustment theory and locus of control theory, this study constructs a moderating multiple mediation model to explain the influence of AIRA on (...)
  44. Artificial Intelligence Governance and the Blockchain Revolution. Qiqi Gao & Jiteng Zhang - 2024 - Springer Nature Singapore.
    This is the first professional academic work in China to discuss artificial intelligence and blockchain together. Artificial intelligence is a productivity revolution, and its development has a significant and profound impact on global changes. However, at the same time, its development also brings a series of challenges to human society, such as privacy, security, and fairness issues. Therefore, the significance of blockchain is even more prominent. Blockchain is a revolution in production relations, which will propose important (...)
  45. Artificial intelligence as law. [REVIEW] Bart Verheij - 2020 - Artificial Intelligence and Law 28 (2):181-206.
    Information technology is so ubiquitous and AI's progress so inspiring that legal professionals, too, experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promote a good society; in fact, they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, and ethical. In short: AI should be (...)
    9 citations
  46. Philosophy and distributed artificial intelligence: The case of joint intention. Raimo Tuomela - 1996 - In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley.
    In current philosophical research the term 'philosophy of social action' can be used - and has been used - in a broad sense to encompass the following central research topics: 1) action occurring in a social context; this includes multi-agent action; 2) joint attitudes (or "we-attitudes" such as joint intention, mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups such (...)
    2 citations
  47. Embodied human language models vs. Large Language Models, or why Artificial Intelligence cannot explain the modal be able to. Sergio Torres-Martínez - 2024 - Biosemiotics 17 (1):185-209.
    This paper explores the challenges posed by the rapid advancement of artificial intelligence, specifically Large Language Models (LLMs). I show that traditional linguistic theories and corpus studies are being outpaced by LLMs' computational sophistication and low perplexity levels. In order to address these challenges, I suggest a focus on language as a cognitive tool shaped by embodied-environmental imperatives in the context of Agentive Cognitive Construction Grammar. To that end, I introduce an Embodied Human Language Model (EHLM), inspired by (...)
  48. Artificial Intelligence in Extended Minds: Intrapersonal Diffusion of Responsibility and Legal Multiple Personality. Jan-Hendrik Heinrichs - 2020 - In Technology, Anthropology, and Dimensions of Responsibility. Stuttgart, Germany: pp. 159-176.
    Can an artificially intelligent tool be a part of a human's extended mind? There are two opposing streams of thought in this regard. One of them can be identified as the externalist perspective in the philosophy of mind, which tries to explain complex states and processes of an individual as co-constituted by elements of the individual's material and social environment. The other strand is normative and explanatory atomism, which insists that what is to be explained and evaluated is the behaviour (...)
  49. Making Artificial Intelligence Transparent: Fairness and the Problem of Proxy Variables. Richard Warner & Robert H. Sloan - 2021 - Criminal Justice Ethics 40 (1):23-39.
    AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if (...)
    1 citation
  50. Artificial Intelligence and Medical Humanities. Kirsten Ostherr - 2022 - Journal of Medical Humanities 43 (2):211-232.
    The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have (...)
    1 citation
Showing results 1–50 of 974