Results for 'AI Agency'

978 found
  1. AI, agency and responsibility: the VW fraud case and beyond. Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647.
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better (...)
    10 citations
  2. Civics and Moral Education in Singapore: lessons for citizenship education? Joy Ai - 1998 - Journal of Moral Education 27 (4):505-524.
    Civics and Moral Education was implemented as a new moral education programme in Singapore schools in 1992. This paper argues that the underlying theme is that of citizenship training and that new measures are under way to strengthen the capacity of the school system to transmit national values for economic and political socialisation. The motives and motivation for retaining a formal moral education programme have remained strong. A discussion of the structure and content of key modules in Civics and Moral Education (...)
  3. AI&Society: editorial volume 35.2: the trappings of AI Agency. Karamjit S. Gill - 2020 - AI and Society 35 (2):289-296.
  4. Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI). Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of (...)
    8 citations
  5. Varieties of transparency: exploring agency within AI systems. Gloria Andrada, Robert William Clowes & Paul Smart - 2023 - AI and Society 38 (4):1321-1331.
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater _transparency_ from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires _seeing through_ the artefact or device, widespread calls for transparency imply _seeing into_ different aspects of AI systems. These two notions are in apparent tension with (...)
    15 citations
  6. AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models. Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
  7. Philosophy, agency, and AI. James Mattingly - 2024 - Cambridge, Massachusetts: The MIT Press. Edited by Beba Cibralic.
    An accessible philosophy of technology textbook intended for interested students who don't necessarily have a background in philosophy of science.
  8. Agency in an AI Avalanche: Education for Citizen Empowerment. Harry C. Boyte & Marie-Louise Ström - 2020 - Eidos. A Journal for Philosophy of Culture 4 (2):142-161.
    Preview: In this essay, drawing on the case of Australia in particular, we develop the argument of “schools for democracy” as part of communities that prioritize developing people’s civic agency for human flourishing. We begin with the concept of social capital – norms, values, and practices of trust and reciprocity essential to vibrant civic life and healthy democratic society – and discuss social capital’s decline in recent years as well as its relationship to what we call public work. Declining (...)
  9. Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Danielle Swanepoel & Daniel Corks - 2024 - Science and Engineering Ethics 30 (2):1-16.
    Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with (...)
    1 citation
  10. Models of rational agency in human-centered AI: the realist and constructivist alternatives. Jacob Sparks & Ava Thomas Wright - 2025 - AI and Ethics 5.
    Recent proposals for human-centered AI (HCAI) help avoid the challenging task of specifying an objective for AI systems, since HCAI is designed to learn the objectives of the humans it is trying to assist. We think the move to HCAI is an important innovation but are concerned with how an instrumental, economic model of human rational agency has dominated research into HCAI. This paper brings the philosophical debate about human rational agency into the HCAI context, showing how more (...)
  11. Tool, Collaborator, or Participant: AI and Artistic Agency. Anthony Cross - forthcoming - British Journal of Aesthetics.
    Artificial intelligence is now capable of generating sophisticated and compelling images from simple text prompts. In this paper, I focus specifically on how artists might make use of AI to create art. Most existing discourse analogizes AI to a tool or collaborator; this focuses our attention on AI’s contribution to the production of an artistically significant output. I propose an alternative approach, the exploration paradigm, which suggests that artists instead relate to AI as a participant: artists create a space for (...)
  12. Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
    4 citations
  13. Connectionism about human agency: responsible AI and the social lifeworld. Jörg Noller - forthcoming - AI and Society:1-10.
    This paper analyzes responsible human–machine interaction concerning artificial neural networks (ANNs) and large language models (LLMs) by considering the extension of human agency and autonomy by means of artificial intelligence (AI). Thereby, the paper draws on the sociological concept of “interobjectivity,” first introduced by Bruno Latour, and applies it to technologically situated and interconnected agency. Drawing on Don Ihde’s phenomenology of human-technology relations, this interobjective account of AI allows us to understand human–machine interaction as embedded in the social lifeworld. (...)
  14. Reconsidering Agency in the Age of AI. Gerhard Schreiber - 2024 - Filozofia 79 (5):529-537.
  15. First-person representations and responsible agency in AI. Miguel Ángel Sebastián & Fernando Rudy-Hiller - 2021 - Synthese 199 (3-4):7061-7079.
    In this paper I investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind (...)
    5 citations
  16. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights (...)
    35 citations
  17. AI Films and the Fudging of Consciousness. Stephen Asma - 2023 - APA Blog.
    Films involving Artificial Intelligence often miss the disembodied nature of consciousness and promulgate an erroneous conceptualization of mind. This article explores agency, Spinoza's "conatus," and its neuro-chemical foundations in light of films like: "2001: A Space Odyssey," "After Yang," "Upgrade," "Ex Machina," and more.
  18. The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability. Zsófia Tóth, Robert Caruana, Thorsten Gruber & Claudia Loebbecke - 2022 - Journal of Business Ethics 178 (4):895-916.
    Business, management, and business ethics literature pay little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, and in the military, such as Lethal Autonomous Weapon Systems. However, there is a scarcity of in-depth theoretical, methodological, or empirical studies that address these ethical issues, for instance, the impact of morality and where accountability resides in AI robots’ use. To address this dearth, this study offers a (...)
    5 citations
  19. Grasping AI: experiential exercises for designers. Dave Murray-Rust, Maria Luce Lupetti, Iohanna Nicenboim & Wouter van der Hoog - 2024 - AI and Society 39 (6):2891-2911.
    Artificial intelligence (AI) and machine learning (ML) are increasingly integrated into the functioning of physical and digital products, creating unprecedented opportunities for interaction and functionality. However, there is a challenge for designers to ideate within this creative landscape, balancing the possibilities of technology with human interactional concerns. We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems. We introduced into an interaction design course (n = 100) nine (...)
    1 citation
  20. Navigating subjectivity in AI-generated photography: The quest for ethics and creative agency. Paula Gortázar - 2024 - Philosophy of Photography 15 (1):143-157.
    This study identifies alternative models for the production of AI-generated images to those currently used by mainstream AI platforms. Based on primitive computational art processes, these systems allow designers to gain greater control over the final visual result while avoiding potential issues with intellectual property theft and breach of privacy. The article starts by analysing the level of artificiality that might be effectively attributed to each part of the creative process involved in the development of AI-generated images. It then moves (...)
  21. Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. John Symons & Syed AbuMusab - 2024 - Digital Society 3:1-28.
    Ethically significant consequences of artificially intelligent artifacts will stem from their effects on existing social relations. Artifacts will serve in a variety of socially important roles—as personal companions, in the service of elderly and infirm people, in commercial, educational, and other socially sensitive contexts. The inevitable disruptions that these technologies will cause to social norms, institutions, and communities warrant careful consideration. As we begin to assess these effects, reflection on degrees and kinds of social agency will be required to (...)
  22. The AI doctor will see you now: assessing the framing of AI in news coverage. Mercedes Bunz & Marco Braghieri - 2022 - AI and Society 37 (1):9-22.
    One of the sectors for which Artificial Intelligence applications have been considered as exceptionally promising is the healthcare sector. As a public-facing sector, the introduction of AI applications has been subject to extended news coverage. This article conducts a quantitative and qualitative data analysis of English news media articles covering AI systems that allow the automation of tasks that so far needed to be done by a medical expert such as a doctor or a nurse thereby redistributing their agency. (...)
    6 citations
  23. The AI-Stance: Crossing the Terra Incognita of Human-Machine Interactions? Anna Strasser & Michael Wilby - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press. pp. 286-295.
    Although even very advanced artificial systems do not meet the demanding conditions which are required for humans to be a proper participant in a social interaction, we argue that not all human-machine interactions (HMIs) can appropriately be reduced to mere tool-use. By criticizing the far too demanding conditions of standard construals of intentional agency we suggest a minimal approach that ascribes minimal agency to some artificial systems resulting in the proposal of taking minimal joint actions as a case (...)
  24. Using AI Methods to Evaluate a Minimal Model for Perception. Chris Fields & Robert Prentner - 2019 - Open Philosophy 2 (1):503-524.
    The relationship between philosophy and research on artificial intelligence (AI) has been difficult since its beginning, with mutual misunderstanding and sometimes even hostility. By contrast, we show how an approach informed by both philosophy and AI can be productive. After reviewing some popular frameworks for computation and learning, we apply the AI methodology of “build it and see” to tackle the philosophical and psychological problem of characterizing perception as distinct from sensation. Our model comprises a network of very simple, but (...)
    1 citation
  25. Domesticating AI technology in public services. The case of the City of Espoo’s artificial intelligence experiment. Marja Alastalo, Jaana Parviainen & Marta Choroszewicz - 2022 - Yhteiskuntapolitiikka 87 (3):185–196.
    Public sector institutions are increasingly investing resources in data collection and data analytics to provide better public services at lower cost, to anticipate demand for services, to identify high-risk groups, and to develop targeted interventions. Prior research has shown that the media shape understanding of the possibilities of technology and create related expectations. In this article we explore how artificial intelligence and emerging data-driven technologies are made familiar and by whose voices they are talked about in the media. Empirically, we (...)
  26. Air Canada’s chatbot illustrates persistent agency and responsibility gap problems for AI. Joshua L. M. Brand - forthcoming - AI and Society:1-3.
  27. AI, doping and ethics: On why increasing the effectiveness of detecting doping fraud in sport may be morally wrong. Thomas Søbirk Petersen, Sebastian Jon Holmen & Jesper Ryberg - 2025 - Journal of Medical Ethics 51 (2):102-106.
    In this article, our aim is to show why increasing the effectiveness of detecting doping fraud in sport by the use of artificial intelligence (AI) may be morally wrong. The first argument in favour of this conclusion is that using AI to make a non-ideal antidoping policy even more effective can be morally wrong. Whether the increased effectiveness is morally wrong depends on whether you believe that the current antidoping system administrated by the World Anti-Doping Agency is already morally (...)
  28. Correction to: First-person representations and responsible agency in AI. Miguel Ángel Sebastián - 2021 - Synthese 199 (3):7081-7081.
    A correction to this paper has been published: https://doi.org/10.1007/s11229-021-03136-1.
  29. Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other (...)
    10 citations
  30. The agency in language agents. Patrick Butlin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Language agents are AI systems that combine large language models with other elements to facilitate interaction with an environment. They include LLM-based chatbots but can have a wide range of additional features to support learning, reasoning and decision-making. Goldstein and Kirk-Giannini (ms) [AI wellbeing] argue that some language agents have beliefs and desires, but it is not obvious that they are agents at all, since they select outputs by querying language models. This paper investigates agency and desires in language (...)
  31. AI-Aided Moral Enhancement – Exploring Opportunities and Challenges. Andrea Berber - forthcoming - In Martin Hähnel & Regina Müller (eds.), A Companion to Applied Philosophy of AI. Wiley-Blackwell (2025).
    In this chapter, I introduce three different types of AI-based moral enhancement proposals discussed in the literature – substitutive enhancement, value-driven enhancement, and value-open moral enhancement. I analyse them based on the following criteria: effectiveness, examining whether they bring about tangible moral changes; autonomy, assessing whether they infringe on human autonomy and agency; and developmental impact, considering whether they hinder the development of natural moral skills. This analysis demonstrates that no single approach to AI enhancement can satisfy all proposed (...)
  32. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains. Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and coherence promise to (...)
  33. Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - 2024 - Episteme 21 (3):1031-1047.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis (...)
    7 citations
  34. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
    4 citations
  35. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
    2 citations
  36. Hijacking Epistemic Agency - How Emerging Technologies Threaten our Wellbeing as Knowers. John Dorsch - 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society 1.
    The aim of this project is to expose the reasons behind the pandemic of misinformation (henceforth, PofM) by examining the enabling conditions of epistemic agency and the emerging technologies that threaten it. I plan to research the emotional origin of epistemic agency, i.e. on the origin of our capacity to acquire justification for belief, as well as on the significance this emotional origin has for our lives as epistemic agents in our so-called Misinformation Age. This project has three objectives. (...)
  37. Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and cognitive analysis is employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension employed in the language (...)
    1 citation
  38. Moral Responsibility for AI Systems. Sander Beckers - forthcoming - Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
    As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware -- in some form or other -- of the (...)
  39. When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. (...)
    3 citations
  40. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be (...)
    7 citations
  41. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2021 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    7 citations
  42. Decolonization of AI: a Crucial Blind Spot. Carlos Largacha-Martínez & John W. Murphy - 2022 - Philosophy and Technology 35 (4):1-13.
    Critics are calling for the decolonization of AI (artificial intelligence). The problem is that this technology is marginalizing other modes of knowledge with dehumanizing applications. What is needed to remedy this situation is the development of human-centric AI. However, there is a serious blind spot in this strategy that is addressed in this paper. The corrective that is usually proposed—participatory design—lacks the philosophical rigor to undercut the autonomy of AI, and thus the colonization spawned by this technology. A more radical (...)
    1 citation
  43. Creative Agents: Rethinking Agency and Creativity in Human and Artificial Systems. Caterina Moruzzi - 2023 - Journal of Aesthetics and Phenomenology 9 (2):245-268.
    1. In the last decade, technological systems based on Artificial Intelligence (AI) architectures entered our lives at an increasingly fast pace. Virtual assistants facilitate our daily tasks, recom...
    2 citations
  44. The Problem Of Moral Agency In Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW).
    Humans have invented intelligent machinery to enhance their rational decision-making procedure, which is why it has been named ‘augmented intelligence’. The usage of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We are using this technology not only as a tool to enhance our rationality but also heightening them as the autonomous ethical agent for our future society. Norbert Wiener envisaged ‘Cybernetics’ with a view of a brain-machine (...)
  45. Moral control and ownership in AI systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
    2 citations
  46. AI-Related Misdirection Awareness in AIVR. Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  47. Dynamics of Perceptible Agency: The Case of Social Robots. Maria Brincker - 2016 - Minds and Machines 26 (4):441-466.
    How do we perceive the agency of others? Do the same rules apply when interacting with others who are radically different from ourselves, like other species or robots? We typically perceive other people and animals through their embodied behavior, as they dynamically engage various aspects of their affordance field. In second personal perception we also perceive social or interactional affordances of others. I discuss various aspects of perceptible agency, which might begin to give us some tools to understand (...)
    2 citations
  48. Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”. Esther Keymolen & Fleur Jongepier - 2022 - Ethics and Information Technology 24 (4):1-11.
    A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and individuals having a right (...)
    2 citations
  49. Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    22 citations
  50. Can AI Weapons Make Ethical Decisions? Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
1 — 50 / 978