About this topic
Summary To quote Nyholm 2023, technology ethics is "the sub-field of philosophy that focuses on the ethical implications of new technologies..." and on how "various technologies influence and shape individuals and society both positively and negatively". On PhilPapers, this category encompasses papers in technology ethics that do not fit neatly into any of the subcategories under "Technology Ethics". Sibling subcategories deal with specific areas such as "Ethics of Artificial Intelligence", "Nanotechnology", and "Biotechnology Ethics", to name a few.
Key works As this subcategory covers a diverse breadth of topics, suggested compendia that can serve as starting points for further exploration include Nyholm 2023, Robson & Tsou 2023, and Véliz 2023.
Related

Contents
424 found
1 — 50 / 424
  1. The Worst Way (Not) to Communicate.Joseph S. Fulda - manuscript
    Evaluates e-mail critically from four perspectives. Note: This is /not/ the full version. The full version is available upon written request only.
  2. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
  3. Virtual Reality Interview (Metaphysics and Epistemology): "Welcome Back!".Erick Jose Ramirez & Miles Elliott - manuscript
    This is a virtual reality simulation that imagines its subject as emerging from a long stint in Robert Nozick's "Experience Machine." The simulation is an interview (with many branching paths) meant to gauge the subject's views on the metaphysics of virtual objects and the ethics of virtual actions. It draws heavily from the published work of David Chalmers, Mark Silcox, Jon Cogburn, Morgan Luck, and Nick Bostrom. *Requires an Oculus Rift (or Rift-S) or HTC Vive and a VR capable computer. (...)
  4. Big Data Ethics.Nicolae Sfetcu - manuscript
    Big Data ethics involves adherence to the concepts of right and wrong behavior regarding data, especially personal data. Big Data ethics focuses on structured or unstructured data collectors and disseminators. Big Data ethics is supported, at EU level, by extensive documentation, which seeks to find concrete solutions to maximize the value of Big Data without sacrificing fundamental human rights. The European Data Protection Supervisor (EDPS) supports the right to privacy and the right to the protection of personal data in the (...)
  5. Procesarea Big Data.Nicolae Sfetcu - manuscript
    Data must be processed with advanced collection and analysis tools, based on predefined algorithms, in order to obtain relevant information. The algorithms must also take into account aspects that are invisible to direct perception. Big Data in government processes increases cost efficiency, productivity, and innovation. Civil registries are one source of Big Data. Processed data help in critical areas of development such as healthcare, employment, economic productivity, crime, security, and the management of natural disasters and resources. DOI: (...)
  6. Probleme etice în lucrul cu Big Data.Nicolae Sfetcu - manuscript
    Big Data ethics involves adherence to the concepts of right and wrong behavior regarding data, especially personal data. Big Data ethics focuses on collectors and disseminators of structured or unstructured data. Big Data ethics is supported, at the EU level, by extensive documentation, which seeks to find concrete solutions to maximize the value of Big Data without sacrificing fundamental human rights. The European Data Protection Supervisor (EDPS) supports the right to privacy (...)
  7. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow (...)
  8. (2 other versions)Trust and distrust in institutions and governance.Mark Alfano & Nicole Huijts - forthcoming - In Judith Simon (ed.), Handbook of Trust and Philosophy. Routledge.
    First, we explain the conception of trustworthiness that we employ. We model trustworthiness as a relation among a trustor, a trustee, and a field of trust defined and delimited by its scope. In addition, both potential trustors and potential trustees are modeled as being more or less reliable in signaling either their willingness to trust or their willingness to prove trustworthy in various fields in relation to various other agents. Second, following Alfano (forthcoming) we argue that the social scale of (...)
  9. (2 other versions)Trust and distrust in institutions and governance.Mark Alfano, Nicole Huijts & Sabine Roeser - forthcoming - In Judith Simon (ed.), Handbook of Trust and Philosophy. Routledge.
    First, we explain the conception of trustworthiness that we employ. We model trustworthiness as a relation among a trustor, a trustee, and a field of trust defined and delimited by its scope. In addition, both potential trustors and potential trustees are modeled as being more or less reliable in signaling either their willingness to trust or their willingness to prove trustworthy in various fields in relation to various other agents. Second, following Alfano (forthcoming) we argue that the social scale of (...)
  11. Building Epistemically Healthier Platforms.Dallas Amico-Korby, Maralee Harrell & David Danks - forthcoming - Episteme.
    When thinking about designing social media platforms, we often focus on factors such as usability, functionality, aesthetics, ethics, and so forth. Epistemic considerations have rarely been given the same level of attention in design discussions. This paper aims to rectify this neglect. We begin by arguing that there are epistemic norms that govern environments, including social media environments. Next, we provide a framework for applying these norms to the question of platform design. We then apply this framework to the real-world (...)
  12. The Ethics of Extended Cognition: Is Having your Computer Compromised a Personal Assault?J. Adam Carter & S. Orestis Palermos - forthcoming - Journal of the American Philosophical Association.
    Philosophy of mind and cognitive science (e.g., Clark and Chalmers 1998; Clark 2010; Palermos 2014) have recently become increasingly receptive to the hypothesis of extended cognition, according to which external artifacts such as our laptops and smartphones can—under appropriate circumstances—feature as material realisers of a person’s cognitive processes. We argue that, to the extent that the hypothesis of extended cognition is correct, our legal and ethical theorising and practice must be updated, by broadening our conception of personal assault so as to (...)
  13. Does Predictive Sentencing Make Sense?Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  14. Mental Integrity in the Attention Economy: in Search of the Right to Attention.Bartek Chomanski - forthcoming - Neuroethics.
    Is it wrong to distract? Is it wrong to direct others’ attention in ways they otherwise would not choose? If so, what are the grounds of this wrong – and, in expounding them, do we have to at once condemn large chunks of contemporary digital commerce (also known as the attention economy)? In what follows, I attempt to cast light on these questions. Specifically, I argue – following the pioneering work of Jasper Tran and Anuj Puri – that there is (...)
  15. Sims and Vulnerability: On the Ethics of Creating Emulated Minds.Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  16. The Missing Ingredient in the Case for Regulating Big Tech.Bartek Chomanski - forthcoming - Minds and Machines.
    Having been involved in a slew of recent scandals, many of the world’s largest technology companies (“Big Tech,” “Digital Titans”) embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy. This is because individual industry leaders and employees, flawed human beings that they are, (...)
  17. Online consent: how much do we need to know?Bartek Chomanski & Lode Lauwaert - forthcoming - AI and Society.
    This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online (...)
  18. The Law and Ethics of Virtual Sexual Assault.John Danaher - forthcoming - In Woodrow Barfield & Marc Blitz (eds.), The Law of Virtual and Augmented Reality. Edward Elgar Press.
    This chapter provides a general overview and introduction to the law and ethics of virtual sexual assault. It offers a definition of the phenomenon and argues that there are six interesting types. It then asks and answers three questions: (i) should we criminalise virtual sexual assault? (ii) can you be held responsible for virtual sexual assault? and (iii) are there issues with 'consent' to virtual sexual activity that might make it difficult to prosecute or punish virtual sexual assault?
  19. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  20. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  21. Book review: This is Technology Ethics: An Introduction, by Sven Nyholm. [REVIEW]Tobias Flattery - forthcoming - Journal of Moral Philosophy.
  22. The Self and the Ontic Trust: Toward Technologies of Care and Meaning.Tim Gorichanaz - forthcoming - Journal of Information, Communication and Ethics in Society 17 (3).
    Purpose – Contemporary technology has been implicated in the rise of perfectionism, a personality trait that is associated with depression, suicide and other ills. This paper explores how technology can be developed to promote an alternative to perfectionism, which is a self-constructionist ethic. Design/methodology/approach – This paper takes the form of a philosophical discussion. A conceptual framework is developed by connecting the literature on perfectionism and personal meaning with discussions in information ethics on the self, the ontic trust and (...)
  23. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  24. A matter of trust: Higher education institutions as information fiduciaries in an age of educational data mining and learning analytics.Kyle M. L. Jones, Alan Rubel & Ellen LeClere - forthcoming - JASIST: Journal of the Association for Information Science and Technology.
    Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of “learning analytics,” this work can—and often does—surface sensitive data and information about, inter alia, a student’s demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning (...)
  25. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement.Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  26. (1 other version)Disruptive Innovation and Moral Uncertainty.Philip J. Nickel - forthcoming - NanoEthics: Studies in New and Emerging Technologies.
    This paper develops a philosophical account of moral disruption. According to Robert Baker (2013), moral disruption is a process in which technological innovations undermine established moral norms without clearly leading to a new set of norms. Here I analyze this process in terms of moral uncertainty, formulating a philosophical account with two variants. On the Harm Account, such uncertainty is always harmful because it blocks our knowledge of our own and others’ moral obligations. On the Qualified Harm Account, there is (...)
  27. Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  28. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  29. Sphere transgressions: reflecting on the risks of big tech expansionism.Marthe Stevens, Steven R. Kraaijeveld & Tamar Sharon - forthcoming - Information, Communication and Society.
    The rapid expansion of Big Tech companies into various societal domains (e.g., health, education, and agriculture) over the past decade has led to increasing concerns among governments, regulators, scholars, and civil society. While existing theoretical frameworks—often revolving around privacy and data protection, or market and platform power—have shed light on important aspects of Big Tech expansionism, there are other risks that these frameworks cannot fully capture. In response, this editorial proposes an alternative theoretical framework based on the notion of sphere (...)
  30. Digital Slot Machines: Social Media Platforms as Attentional Scaffolds.Cristina Voinea, Lavinia Marin & Constantin Vică - forthcoming - Topoi:1-11.
    In this paper we introduce the concept of attentional scaffolds and show the resemblance between social media platforms and slot machines, both functioning as hostile attentional scaffolds. The first section establishes the groundwork for the concept of attentional scaffolds and draws parallels to the mechanics of slot machines, to argue that social media platforms aim to capture users’ attention to maximize engagement through a system of intermittent rewards. The second section shifts focus to the interplay between emotions and attention, revealing (...)
  31. (1 other version)Authenticity and the 'Authentic City'.Ryan Wittingslow - forthcoming - In Michael Nagenborg, Margoth González Woge, Taylor Stone & Pieter Vermaas (eds.), Technologies and Urban Life: Towards a Philosophy of Urban Technologies.
    On paper, ‘smart cities’ are an easy sell. Thanks to the transformative power of information and communication technologies (the much-vaunted ‘internet of things’), smart cities purport to offer managers and bureaucrats a more harmonious and efficient means of reducing traffic, managing assets, and increasing public safety. However, I am dubious of these utopian sentiments. Indeed, I argue that the benefits that smart cities purport to provide cohere poorly with a number of our shared phenomenological intuitions about the relationship(s) between authentic (...)
  32. AI responsibility gap: not new, inevitable, unproblematic.Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  33. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  34. The Duty to Promote Digital Minimalism in Group Agents.Timothy Aylsworth & Clinton Castro - 2024 - In Timothy Aylsworth & Clinton Castro (eds.), Kantian Ethics and the Attention Economy. Palgrave Macmillan.
    In this chapter, we turn our attention to the effects of the attention economy on our ability to act autonomously as a group. We begin by clarifying which sorts of groups we are concerned with, which are structured groups (groups sufficiently organized that it makes sense to attribute agency to the group itself). Drawing on recent work by Purves and Davis (2022), we describe the essential roles of trust (i.e., depending on groups to fulfill their commitments) and trustworthiness (i.e., the (...)
  35. The Harm of Social Media to Public Reason.Paige Benton & Michael W. Schmidt - 2024 - Topoi 43 (5): 1433–1449.
    It is commonly agreed that so-called echo chambers and epistemic bubbles, associated with social media, are detrimental to liberal democracies. Drawing on John Rawls’s political liberalism, we offer a novel explanation of why social media platforms amplifying echo chambers and epistemic bubbles are likely contributing to the violation of the democratic norms connected to the ideal of public reason. These norms are clarified with reference to the method of (full) reflective equilibrium, which we argue should be cultivated as a civic (...)
  36. Dual-Use in Cybersecurity Research. Towards a New Culture of Research Ethics.Kaya Cassing & Sebastian Weydner-Volkmann - 2024 - In Elisabeth Ehrensperger, Jeannette Behringer, Michael Decker, Bert Droste-Franke, Nils B. Heyen, Mashid Sotoudeh & Birgit Weimert (eds.), Gestreamt, gelikt, flüchtig – schöne neue Kulturwelt? Digitalisierung und Kultur im Licht der Technikfolgenabschätzung. Baden-Baden: Nomos. pp. 349-359.
    The fact that information and communication technologies (ICTs) increasingly shape our online and offline lifeworlds has led to the emergence of a new societal threat in the form of vulnerabilities in critical ICT systems that may be exploited by malicious actors. Cybersecurity researchers work on finding such vulnerabilities and on identifying new attack vectors, i.e. they systematically step into the role of attackers. Normatively, however, the goal of this research is to strengthen ICTs against cyberattacks and, thus, to reduce the (...)
  37. Deepfakes: a survey and introduction to the topical collection.Dan Cavedon-Taylor - 2024 - Synthese 204 (1):1-19.
    Deepfakes are extremely realistic audio/video media. They are produced via a complex machine-learning process, one that centrally involves training an algorithm on hundreds or thousands of audio/video recordings of an object or person, S, with the aim of either creating entirely new audio/video media of S or else altering existing audio/video media of S. Deepfakes are widely predicted to have deleterious consequences (principally, moral and epistemic ones) for both individuals and various of our social practices and institutions. In this introduction (...)
  38. Regulating Misinformation: Political Irrationality as a Feasibility Constraint.Bartlomiej Chomanski - 2024 - Topoi 43 (5):1389-1404.
    This paper argues that the well-established fact of political irrationality imposes substantial constraints on how governments may combat the threat of political misinformation. Though attempts at regulating misinformation are becoming increasingly popular, both among policymakers and theorists, I intend to show that, for a wide range of anti-misinformation interventions (collectively termed “debunking” and “source labeling”), these attempts ought to be abandoned. My argument relies primarily on the fact that most people process politically-relevant information in biased and motivated ways. Since debunking (...)
  39. The Summit of Safe Horror: Defending Most Horror Films.Cara Rei Cummings-Coughlin - 2024 - European Journal of Analytic Philosophy 20 (2):323-343.
    Many people regularly watch horror films. While it seems clear that sporadically watching horror films will not make us bad people, if it is the main type of media that we consume, then are we still safe? I will defend most horror films from Di Muzio (2006), who worries that we are harming our moral character by watching them. Most horror films (e.g., Candyman, Get Out, and Scream) fall into what I call the summit of safe horror (SoSH), the inverse (...)
  40. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  41. Deepfakes and Dishonesty.Tobias Flattery & Christian B. Miller - 2024 - Philosophy and Technology 37 (120):1-24.
    Deepfakes raise various concerns: risks of political destabilization, depictions of persons without consent and causing them harms, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. But under what conditions (...)
  42. Adaptive Interventions Reducing Social Identity Threat to Increase Equity in Higher Distance Education: A Use Case and Ethical Considerations on Algorithmic Fairness.Laura Froehlich & Sebastian Weydner-Volkmann - 2024 - Journal of Learning Analytics 11 (2):112-122.
    Educational disparities between traditional and non-traditional student groups in higher distance education can potentially be reduced by alleviating social identity threat and strengthening students’ sense of belonging in the academic context. We present a use case of how Learning Analytics and Machine Learning can be applied to develop and implement an algorithm to classify students as at-risk of experiencing social identity threat. These students would be presented with an intervention fostering a sense of belonging. We systematically analyze the intervention’s intended (...)
  43. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  44. Patient Preferences Concerning Humanoid Features in Healthcare Robots.Dane Leigh Gogoshin - 2024 - Science and Engineering Ethics 30 (6):1-16.
    In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard (...)
  45. Data over dialogue: Why artificial intelligence is unlikely to humanise medicine.Joshua Hatherley - 2024 - Dissertation, Monash University
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to comprise the (...)
  46. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  47. (1 other version)Technology, Personal Information, and Identity.Muriel Leuenberger - 2024 - Techné: Research in Philosophy and Technology 28 (1):22-48.
    Novel and emerging technologies can provide users with new kinds and unprecedented amounts of information about themselves, such as autobiographical information, neurodata, health information, or characteristics inferred from online behavior. Technology providing extensive personal information (PI technology) can impact who we take ourselves to be, how we constitute ourselves, and indeed who we are. This paper analyzes how the external, quantified perspective on us offered by PI technology affects identity based on a narrative identity theory. Disclosing the intimate relationship between (...)
  48. The Engineer as an Educator: Goods, Virtues, and Secondary Practices.Piotr Machura - 2024 - Studies in Logic, Grammar and Rhetoric 69 (82):203-220.
    How should ethical standards be maintained within engineering and engineering education? The present paper addresses this question with relation to the dominant models of engineering ethics (EE) to show that their limits might be overcome by incorporating the vocabulary of neo-Aristotelian virtue ethics. On the basis of the MacIntyrean concept of practice, the secondary role of engineering is highlighted which echoes similar debates concerning education. This similarity is picked up to argue that the role of the engineer in relation to (...)
  49. Framing the Virtue-Ethical Account in the Ethics of Technology.Piotr Machura - 2024 - Forum Philosophicum: International Journal for Philosophy 29 (1):111-137.
    In recent years there has been growing interest in adapting virtue ethics to the ethics of technology. However, it has most typically been invoked to address some particular issue of moral importance, and there is only a limited range of works dealing with the methodological question of how virtue ethics may contribute to this field. My approach in this paper is threefold. I start with a brief discussion of Aristotelian virtue ethics, with a view to constructing a framework in which (...)
  50. Defining Digital Authoritarianism.James S. Pearson - 2024 - Philosophy and Technology 37 (2):1-19.
    It is becoming increasingly common for authoritarian regimes to leverage digital technologies to surveil, repress and manipulate their citizens. Experts typically refer to this practice as digital authoritarianism (DA). Existing definitions of DA consistently presuppose a politically repressive agent intentionally exploiting digital technologies to pursue authoritarian ends. I refer to this as the intention-based definition. This paper argues that this definition is untenable as a general description of DA. I begin by illustrating the current predominance of the intention-based definition (Section (...)