Computer Ethics

Edited by Lavinia Marin (Delft University of Technology)
About this topic
Summary Computer ethics is a relatively new field of ethical inquiry, although some of its foundational texts date from the 1960s. It can be seen either as a field of applied ethics (ethics applied to computers) or as a form of professional ethics, but also, more broadly, as an attempt to rethink the human condition in light of developments in digital technology. Fundamental ethical topics in this area include responsibility, privacy, surveillance, automation and autonomy, the good life online, and evil online.
Key works Weckert, John (ed.), Computer Ethics (Routledge, 2017) is an edited collection containing a selection of fundamental texts in computer ethics ranging from the 1960s to 2004. Another comprehensive book is van den Hoven & Weckert (eds.), Information Technology and Moral Philosophy (2008).
Introductions Moor 1985; Floridi 2010; Müller 2020

Contents
1181 found
Material to categorize
  1. Cut the crap: a critical response to “ChatGPT is bullshit”.David Gunkel & Simon Coghlan - 2025 - Ethics and Information Technology 27 (2):1-11.
    In a recent thought-provoking essay called “ChatGPT is Bullshit,” Hicks, Humphries and Slater call such large language models (LLMs) “bullshitters” and “bullshit machines.” Unlike the term “bullshit,” they argue, commonly used anthropomorphic terms such as “hallucination” and “confabulation” misrepresent LLMs and sow confusion that could be socially harmful. This paper criticizes their essay in two steps. First, its reliance on Harry Frankfurt’s classic characterization of bullshit as indifference to truth, though understandable and compelling in one sense, risks misrepresenting LLMs. Second, (...)
  2. Voter deterrence campaigns and the moral-epistemic landscape of political microtargeting.Samantha L. Seybold - 2025 - Ethics and Information Technology 27 (2):1-12.
    A multimillion-dollar digital voter deterrence ad industry has emerged in the United States. Beginning with the 2016 US Presidential Election, interested parties ranging from international intelligence agencies to campaigning politicians have enlisted social media platforms’ microtargeted advertising infrastructure to inundate certain voter demographics with anti-voting content. These tactics are used disproportionately against people of color, especially Black voters, and remain virtually unregulated and unaddressed. This paper responds to the issue in two ways. First, I strive to catalyze a broader awareness (...)
  3. Navigating the Nexus of ethical standards and moral values.Teresa Hammerschmidt - 2025 - Ethics and Information Technology 27 (2):1-18.
    This study examines how ethical standards established by stakeholders such as developers and policymakers provide top-down guidance aligned with deontological ethics or utilitarian goals. It also highlights a complementary bottom-up approach, rooted in virtue ethics, in which individuals engage in ethical deliberations shaped by their moral values. Both approaches have limitations, and, at times, ethical standards can clash with moral values, thus blurring lines of responsibilities. Deontological principles may offer a structured framework, but often lack adaptability to diverse cultural contexts; (...)
  4. Taking responsibility for the outcomes of autonomous technologies.Niël H. Conradie & Saskia K. Nagel - 2025 - Ethics and Information Technology 27 (2):1-15.
    It has been extensively argued that emerging autonomous technologies can represent a challenge for our traditional responsibility practices. Though these challenges differ in a variety of ways, at the center of these challenges is the worrying possibility that there may be outcomes of autonomous technologies for which there are legitimate demands for responsibility but no legitimate target to bear this responsibility. This is well exemplified by the possibility of techno-responsibility gaps. These challenges have elicited a number of responses, including dismissals (...)
  5. A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity.Joshua Hatherley - 2025 - Ethics and Information Technology 27 (2):20.
    Machine learning (ML) systems are vulnerable to performance decline over time due to dataset shift. To address this problem, experts often suggest that ML systems should be regularly updated to ensure ongoing performance stability. Some scholarly literature has begun to address the epistemic and ethical challenges associated with different updating methodologies. Thus far, however, little attention has been paid to the impact of model updating on the ML-assisted decision-making process itself. This article aims to address this gap. It argues that (...)
  6. Act consequentialism and the gamer’s dilemma.Michael Hemmingsen - 2025 - Ethics and Information Technology 27 (2):1-10.
    This paper considers how act consequentialism can engage with the Gamer’s Dilemma (the seeming moral distinction between virtual murder and virtual child molestation in video games). I argue that often the dilemma is implicitly or explicitly framed in a way that presumes that an answer can only be in terms of action types rather than tokens. This framing unfairly stacks the deck against act consequentialist approaches. An act consequentialist approach to the Gamer’s Dilemma speaks to its own set of concerns, (...)
  7. Trust and Power in Airbnb’s Digital Rating and Reputation System.Tim Christiaens - 2025 - Ethics and Information Technology 27 (2):1-13.
    Customer ratings and reviews are playing a key role in the contemporary platform economy. To establish trust among strangers without having to directly monitor platform users themselves, companies ask people to evaluate each other. Firms like Uber, Deliveroo, or Airbnb construct digital reputation scores by combining these consumer data with their own information from the algorithmic surveillance of workers. Trustworthy behavior is subsequently rewarded with a good reputation score and higher potential earnings, while untrustworthy behavior can be algorithmically penalized. (...)
  8. Responsible guidelines for authorship attribution tasks in NLP.Vageesh Saxena, Aurelia Tamò-Larrieux, Gijs Van Dijck & Gerasimos Spanakis - 2025 - Ethics and Information Technology 27 (2).
    Authorship Attribution (AA) approaches in Natural Language Processing (NLP) are important in various domains, including forensic analysis and cybercrime. However, they pose Ethical, Legal, and Societal Implications/Aspects (ELSI/ELSA) challenges that remain underexplored. Inspired by foundational AI ethics guidelines and frameworks, this research introduces a comprehensive framework of responsible guidelines that focuses on AA tasks in NLP, which are tailored to different stakeholders and development phases. These guidelines are structured around four core principles: privacy and data protection, fairness and non-discrimination, transparency (...)
  9. Correction: Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use.Kristian González Barman, Nathan Wood & Pawel Pawlowski - 2025 - Ethics and Information Technology 27 (1):1-1.
  10. The Attention Economy and The Right to Attention: Some Lessons from Theravāda and Mahāyāna Thought.Mark Fortney - 2025 - Journal of Buddhist Ethics 32.
    Much of the work in the rapidly growing field of computer ethics relies on the concepts and theories of Western philosophy. With this article my aim is to help stimulate conversations that draw on a wider range of ethical perspectives. I build on recent work on the sense in which the regular operations of the attention economy might violate our right to attention, and I do so through looking to a range of Theravāda and Mahāyāna Buddhist texts. As I argue, (...)
  11. Cultural analytics to discover regularities in cultural movements: A book review.Manh-Tung Ho - manuscript
    In Cultural Analytics, Lev Manovich (2020) outlines the recent developments and the historical roots of a new, exciting research field called cultural analytics. Cultural analytics emerges as a discipline that utilizes methods from computer science, data visualization, and media arts for the exploration and analysis of cultural objects and their user interactions. Manovich continuously admonishes future researchers to think hard about the challenges of how cultural phenomenon can be represented as data to avoid the reductivism trap, as he quotes Gitelman (...)
  12. The Testimony Gap: Machines and Reasons.Robert Sparrow & Gene Flenady - 2025 - Minds and Machines 35 (1):1-16.
    Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that (...)
  13. What responsibility gaps are and what they should be.Herman Veluwenkamp - 2025 - Ethics and Information Technology 27 (1):1-13.
    Responsibility gaps traditionally refer to scenarios in which no one is responsible for harm caused by artificial agents, such as autonomous machines or collective agents. By carefully examining the different ways this concept has been defined in the social ontology and ethics of technology literature, I argue that our current concept of responsibility gaps is defective. To address this conceptual flaw, I argue that the concept of responsibility gaps should be revised by distinguishing it into two more precise concepts: epistemic (...)
  14. Disembodied friendship: virtual friends and the tendencies of technologically mediated friendship.Daniel Grasso - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper engages the ongoing debate around the possibility of virtue friendships in the Aristotelian sense through online mediation. However, I argue that since the current literature has remained overly focused on the mere possibility of virtual friendship, it has obscured the more common phenomena of using digital communication to sustain previous in-person friendships which are now at a distance. While I agree with those who argue that entirely virtual friendship is possible, I argue that the current rebuttals to the (...)
  15. Diving into Fair Pools: Algorithmic Fairness, Ensemble Forecasting, and the Wisdom of Crowds.Rush T. Stewart & Lee Elkin - forthcoming - Analysis.
    Is the pool of fair predictive algorithms fair? It depends, naturally, on both the criteria of fairness and on how we pool. We catalog the relevant facts for some of the most prominent statistical criteria of algorithmic fairness and the dominant approaches to pooling forecasts: linear, geometric, and multiplicative. Only linear pooling, a format at the heart of ensemble methods, preserves any of the central criteria we consider. Drawing on work in the social sciences and social epistemology on the theoretical (...)
  16. AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect.Jan-Willem van der Rijt, Dimitri Coelho Mollo & Bram Vaassen - manuscript
    This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots—indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings’ behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue (...)
  17. Designing responsible agents.Zacharus Gudmunsen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns (...)
  18. Two Types of AI Existential Risk: Decisive and Accumulative.Atoosa Kasirzadeh - 2025 - Philosophical Studies 1:1-29.
    The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over time. This paper contrasts (...)
  19. Social Misattributions in Conversations with Large Language Models.Andrea Ferrario, Alberto Termine & Alessandro Facchini - manuscript
    We investigate a typology of socially and ethically risky phenomena emerging from the interaction between humans and large language model (LLM)-based conversational systems. As they relate to the way in which humans attribute social identity components, such as role and face, to LLM-based conversational systems, we term these phenomena 'social misattributions.' Drawing on classical theories of social identity and recent debates in the philosophy of technology, we argue that these social misattributions represent higher-order forms of anthropomorphisation of LLM-based conversational systems (...)
  20. Technology, Liberty, and Guardrails.Kevin Mills - 2025 - AI and Ethics 5:39-46.
    Technology companies are increasingly being asked to take responsibility for the technologies they create. Many of them are rising to the challenge. One way they do this is by implementing “guardrails”: restrictions on functionality that prevent people from misusing their technologies (per some standard of misuse). While there can be excellent reasons for implementing guardrails (and doing so is sometimes morally obligatory), I argue that the unrestricted authority to implement guardrails is incompatible with proper respect for user freedom, and is (...)
  21. Who Should obey Asimov’s Laws of Robotics? A Question of Responsibility.Maria Hedlund & Erik Persson - 2024 - In Spyridon Stelios & Kostas Theologou, The Ethics Gap in the Engineering of the Future. Emerald Publishing. pp. 9-25.
    The aim of this chapter is to explore the safety value of implementing Asimov’s Laws of Robotics as a future general framework that humans should obey. Asimov formulated laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict (...)
  22. Dating apps as tools for social engineering.Martin Beckstein & Bouke De Vries - 2025 - Ethics and Information Technology 27 (1):1-13.
    In a bid to boost their below-replacement fertility levels, some countries, such as China, India, Iran, and Japan, have launched state-sponsored dating apps, with more potentially following. However, the use of dating apps as tools for social engineering has been largely neglected by political theorists and public policy experts. This article fills this gap. While acknowledging the risks and historical baggage of social engineering, the article provides a qualified defense of using these apps for three purposes: raising below-replacement birth rates, (...)
  23. Robots, institutional roles and joint action: some key ethical issues.Seumas Miller - 2025 - Ethics and Information Technology 27 (1):1-11.
    In this article, firstly, cooperative interaction between robots and humans is discussed; specifically, the possibility of human/robot joint action and (relatedly) the possibility of robots occupying institutional roles alongside humans. The discussion makes use of concepts developed in social ontology. Secondly, certain key moral (or ethical—these terms are used interchangeably here) issues arising from this cooperative action are discussed, specifically issues that arise from robots performing (including qua role occupants) morally significant actions jointly with humans. Such morally significant human/robot joint (...)
  24. Possibilities and challenges in the moral growth of large language models: a philosophical perspective.Guoyu Wang, Wei Wang, Yiqin Cao, Yan Teng, Qianyu Guo, Haofen Wang, Junyu Lin, Jiajie Ma, Jin Liu & Yingchun Wang - 2025 - Ethics and Information Technology 27 (1):1-11.
    With the rapid expansion of parameters in large language models (LLMs) and the application of Reinforcement Learning with Human Feedback (RLHF), there has been a noticeable growth in the moral competence of LLMs. However, several questions warrant further exploration: Is it really possible for LLMs to fully align with human values through RLHF? How can the current moral growth be philosophically contextualized? We identify similarities between LLMs’ moral growth and Deweyan ethics in terms of the discourse of human moral development. (...)
  25. Leading good digital lives.Johannes Müller-Salo - 2025 - Ethics and Information Technology 27 (1):1-11.
    The paper develops a conception of the good life within a digitalized society. Martha Nussbaum’s capability theory offers an adequate normative framework for that purpose as it systematically integrates the analysis of flourishing human lives with a normative theory of justice. The paper argues that a theory of good digital lives should focus on everyday life, on the impact digitalization has on ordinary actions, routines and corresponding practical knowledge. Based on Nussbaum’s work, the paper develops a concept of digital capabilities. (...)
  26. AI responsibility gap: not new, inevitable, unproblematic.Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  27. LLMs beyond the lab: the ethics and epistemics of real-world AI research.Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large-language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To (...)
  28. Nullius in Explanans: an ethical risk assessment for explainable AI.Luca Nannini, Diletta Huyskes, Enrico Panai, Giada Pistilli & Alessio Tartaro - 2025 - Ethics and Information Technology 27 (1):1-28.
    Explanations are conceived to ensure the trustworthiness of AI systems. Yet, relying solemnly on algorithmic solutions, as provided by explainable artificial intelligence (XAI), might fall short to account for sociotechnical risks jeopardizing their factuality and informativeness. To mitigate these risks, we delve into the complex landscape of ethical risks surrounding XAI systems and their generated explanations. By employing a literature review combined with rigorous thematic analysis, we uncover a diverse array of technical risks tied to the robustness, fairness, and evaluation (...)
  29. “Emergent Abilities,” AI, and Biosecurity: Conceptual Ambiguity, Stability, and Policy.Alex John London - 2024 - Disincentivizing Bioweapons: Theory and Policy Approaches.
    Recent claims that artificial intelligence (AI) systems demonstrate “emergent abilities” have fueled excitement but also fear grounded in the prospect that such systems may enable a wider range of parties to make unprecedented advances in areas that include the development of chemical or biological weapons. Ambiguity surrounding the term “emergent abilities” has added avoidable uncertainty to a topic that has the potential to destabilize the strategic landscape, including the perception of key parties about the viability of nonproliferation efforts. To avert (...)
  30. Ethical principles shaping values-based cybersecurity decision-making.Joseph Fenech, Deborah Richards & Paul Formosa - 2024 - Computers and Security 140:103795.
    The human factor in information systems is a large vulnerability when implementing cybersecurity, and many approaches, including technical and policy driven solutions, seek to mitigate this vulnerability. Decisions to apply technical or policy solutions must consider how an individual’s values and moral stance influence their responses to these implementations. Our research aims to evaluate how individuals prioritise different ethical principles when making cybersecurity sensitive decisions and how much perceived choice they have when doing so. Further, we sought to use participants’ (...)
  31. Urban Digital Twins and metaverses towards city multiplicities: uniting or dividing urban experiences?Javier Argota Sánchez-Vaquerizo - 2025 - Ethics and Information Technology 27 (1):1-31.
    Urban Digital Twins (UDTs) have become the new buzzword for researchers, planners, policymakers, and industry experts when it comes to designing, planning, and managing sustainable and efficient cities. It encapsulates the last iteration of the technocratic and ultra-efficient, post-modernist vision of smart cities. However, while more applications branded as UDTs appear around the world, its conceptualization remains ambiguous. Beyond being technically prescriptive about what UDTs are, this article focuses on their aspects of interaction and operationalization in connection to people in (...)
  32. Posthumanist Phenomenology and Artificial Intelligence.Avery Rijos - 2024 - Philosophy Papers (Philpapers).
    This paper examines the ontological and epistemological implications of artificial intelligence (AI) through posthumanist philosophy, integrating the works of Deleuze, Foucault, and Haraway with contemporary computational methodologies. It introduces concepts such as negative augmentation, praxes of revealing, and desedimentation, while extending ideas like affirmative cartographies, ethics of alterity, and planes of immanence to critique anthropocentric assumptions about identity, cognition, and agency. By redefining AI systems as dynamic assemblages emerging through networks of interaction and co-creation, the paper challenges traditional dichotomies such (...)
  33. Privacy: scepticism, normative approaches and legal protection. A review of the theoretical debate and a discussion of recent developments in the EU.Elisa Orrù - 2022 - Dpce Online 52 (2):779–800.
    Digitalisation has lent the right to privacy increasing philosophical and legal relevance. However, privacy’s epistemic status and associated normative values are constantly subject to radical criticisms. This article investigates the validity, in theory and practice, of three radical critiques of privacy. A review of the philosophical and interdisciplinary discourse on privacy during the last half century is followed by analyses of recent legal developments within the EU. Privacy emerges as a highly differentiated and powerful tool to protect individuals and social (...)
  34. Correction: The repugnant resolution: has Coghlan & Cox resolved the Gamer’s Dilemma?Thomas Montefiore & Morgan Luck - 2025 - Ethics and Information Technology 27 (1):1-1.
  35. Mind the gap: bridging the divide between computer scientists and ethicists in shaping moral machines.Pablo Muruzábal Lamberti, Gunter Bombaerts & Wijnand IJsselsteijn - 2025 - Ethics and Information Technology 27 (1):1-11.
    This paper examines the ongoing challenges of interdisciplinary collaboration in Machine Ethics (ME), particularly the integration of ethical decision-making capacities into AI systems. Despite increasing demands for ethical AI, ethicists often remain on the sidelines, contributing primarily to metaethical discussions without directly influencing the development of moral machines. This paper revisits concerns highlighted by Tolmeijer et al. (2020), who identified the pitfall that computer scientists may misinterpret ethical theories without philosophical input. Using the MACHIAVELLI moral benchmark and the Delphi artificial (...)
  36. Procedural fairness in algorithmic decision-making: the role of public engagement.Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  37. Value-laden challenges for technical standards supporting regulation in the field of AI.Alessio Tartaro - 2024 - Ethics and Information Technology 26 (4):1-12.
    This perspective paper critically examines value-laden challenges that emerge when using standards to support regulation in the field of artificial intelligence, particularly within the context of the AI Act. It presents a dilemma arising from the inherent vagueness and contestable nature of the AI Act’s requirements. The effective implementation of these requirements necessitates addressing hard normative questions that involve complex value judgments. These questions, such as determining the acceptability of risks or the appropriateness of accuracy levels, need to be addressed (...)
  38. Correction: AI content detection in the emerging information ecosystem: new obligations for media and tech companies.Alistair Knott, Dino Pedreschi, Toshiya Jitsuzumi, Susan Leavy, David Eyers, Tapabrata Chakraborti, Andrew Trotman, Sundar Sundareswaran, Ricardo Baeza-Yates, Przemyslaw Biecek, Adrian Weller, Paul D. Teal, Subhadip Basu, Mehmet Haklidir, Virginia Morini, Stuart Russell & Yoshua Bengio - 2024 - Ethics and Information Technology 26 (4):1-2.
  39. Digital sovereignty and artificial intelligence: a normative approach.Huw Roberts - 2024 - Ethics and Information Technology 26 (4):1-10.
    Digital sovereignty is a term increasingly used by academics and policymakers to describe efforts by states, private companies, and citizen groups to assert control over digital technologies. This descriptive conception of digital sovereignty is normatively deficient as it centres discussion on how power is being asserted rather than evaluating whether actions are legitimate. In this article, I argue that digital sovereignty should be understood as a normative concept that centres on authority (i.e., legitimate control). A normative approach to digital sovereignty (...)
  40. Automated informed consent.Adam Andreotta & Bjorn Lundgren - 2024 - Big Data and Society 11 (4).
    Online privacy policies or terms and conditions ideally provide users with information about how their personal data are being used. The reality is that very few users read them: they are long, often hard to understand, and ubiquitous. The average internet user cannot realistically read and understand all aspects that apply to them and thus give informed consent to the companies who use their personal data. In this article, we provide a basic overview of a solution to the problem. We (...)
  41. Ethics of smart cities and smart societies.Andrej Zwitter & Dirk Helbing - 2024 - Ethics and Information Technology 26 (4):1-5.
  42. The repugnant resolution: has Coghlan & Cox resolved the Gamer’s Dilemma?Thomas Montefiore & Morgan Luck - 2024 - Ethics and Information Technology 26 (4):1-11.
    Coghlan and Cox (Between death and suffering: Resolving the gamer’s dilemma. Ethics and Information Technology) offer a new resolution to the Gamer’s Dilemma (Luck, The Gamer’s Dilemma. Ethics and Information Technology). They argue that, while it is fitting for a person committing virtual child molestation to feel self-repugnance, it is not fitting for a person committing virtual murder to feel the same, and the fittingness of this feeling indicates each act’s moral permissibility. The aim of this paper is to determine (...)
  43. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily lives through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  44. Deepfakes and Dishonesty.Tobias Flattery & Christian B. Miller - 2024 - Philosophy and Technology 37 (120):1-24.
    Deepfakes raise various concerns: risks of political destabilization, depictions of persons without consent and causing them harms, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. But under what conditions (...)
  45. Large language models and their big bullshit potential.Sarah A. Fisher - 2024 - Ethics and Information Technology 26 (4):1-8.
    Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by (...)
  46. Correction: Framing the Gamer’s Dilemma.Michael Hemmingsen - 2024 - Ethics and Information Technology 26 (4):1-1.
  47. (1 other version)Introduction to the topical collection on AI and responsibility.Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Ethics and Information Technology 24 (3).
  48. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3).
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability (...)
  49. Socially Disruptive Technologies and Conceptual Engineering.Herman Veluwenkamp, Jeroen Hopster, Sebastian Köhler & Guido Löhr - 2024 - Ethics and Information Technology 26 (4):1-6.
    In this special issue, we focus on the connection between conceptual engineering and the philosophy of technology. Conceptual engineering is the enterprise of introducing, eliminating, or revising words and concepts. The philosophy of technology examines the nature and significance of technology. We investigate how technologies such as AI and genetic engineering (so-called “socially disruptive technologies”) disrupt our practices and concepts, and how conceptual engineering can address these disruptions. We also consider how conceptual engineering can enhance the practice of ethical design. (...)
  50. A data-centric approach for ethical and trustworthy AI in journalism.Laurence Dierickx, Andreas Lothe Opdahl, Sohail Ahmed Khan, Carl-Gustav Lindén & Diana Carolina Guerrero Rojas - 2024 - Ethics and Information Technology 26 (4):1-13.
    AI-driven journalism refers to various methods and tools for gathering, verifying, producing, and distributing news information. Their potential is to extend human capabilities and create new forms of augmented journalism. Although scholars agree on the necessity of embedding journalistic values in these systems to make AI systems accountable, less attention has been paid to data quality, even though the results’ accuracy and efficiency depend on high-quality data in any machine learning task. Assessing data quality in the context of AI-driven journalism requires a (...)