Results for 'chatbots, chatGPT, ethics of AI, AI, emojis, manipulation, deception'

954 found
  1. Chatbots shouldn’t use emojis. Carissa Véliz - 2023 - Nature 615:375.
    Limits need to be set on AI’s ability to simulate human feelings. Ensuring that chatbots don’t use emotive language, including emojis, would be a good start. Emojis are particularly manipulative. Humans instinctively respond to shapes that look like faces — even cartoonish or schematic ones — and emojis can induce these reactions.
    5 citations
  2. Chatbots, Robots, and the Ethics of Automating Psychotherapy. Eric B. Litwack - 2024 - Athens Journal of Philosophy 3 (3):111-122.
    Recent developments in artificial intelligence (AI) have caused considerable discussion among both philosophers of technology and psychotherapists. In particular, the question of whether or not new forms of AI will complement or even replace traditional psychotherapists has emerged as a major contemporary debate. This debate is not entirely new, as it has its origins in the Turing Test of 1950, and an early psychotherapy chatbot named Eliza, developed in 1966 at MIT. However, recent developments in AI technology, coupled with long waiting lists (...)
  3. AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors. Delaram Rezaeikhonakdar - 2023 - Journal of Law, Medicine and Ethics 51 (4):988-995.
    Developers and vendors of large language models (“LLMs”), such as ChatGPT, Google Bard, and Microsoft’s Bing at the forefront, can be subject to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) when they process protected health information (“PHI”) on behalf of HIPAA covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.
  4. Ethics of generative AI and manipulation: a design-oriented research agenda. Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
    4 citations
  5. Deception and manipulation in generative AI. Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations (...)
  6. Can AI Lie? Chatbot Technologies, the Subject, and the Importance of Lying. Jack Black - 2024 - Social Science Computer Review (xx):xx.
    This article poses a simple question: can AI lie? In response to this question, the article examines, as its point of inquiry, popular AI chatbots, such as, ChatGPT. In doing so, an examination of the psychoanalytic, philosophical, and technological significance of AI and its complexities are located in relation to the dynamics of truth, falsity, and deception. That is, by critically exploring the chatbot’s capacity to engage in natural language conversations and deliver contextually relevant responses, it is argued that (...)
  7. Plagiarism and Wrong Content as Potential Challenges of Using Chatbots Like ChatGPT in Medical Research. Sam Sedaghat - forthcoming - Journal of Academic Ethics:1-4.
    Chatbots such as ChatGPT have the potential to change researchers’ lives in many ways. Despite all the advantages of chatbots, many challenges to using chatbots in medical research remain. Wrong and incorrect content presented by chatbots is a major possible disadvantage. The authors’ credibility could be tarnished if wrong content is presented in medical research. Additionally, ChatGPT, as the currently most popular generative AI, does not routinely present references for its answers. Double-checking references and resources used by chatbots might be (...)
    1 citation
  8. Why ChatGPT Means Communication Ethics Problems for Bioethics. Andrew J. Barnhart, Jo Ellen M. Barnhart & Kris Dierickx - 2023 - American Journal of Bioethics 23 (10):80-82.
    In his article, “What should ChatGPT mean for bioethics?” I. Glenn Cohen explores the bioethical implications of OpenAI’s chatbot ChatGPT and the use of similar Large Language Models (LLMs) (Cohen...
    1 citation
  9. Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME. 1–7 The development of AI technologies that can produce text that will pass plagiarism detectors 8 and are capable of appearing to be written by a human author 9 present new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced (...)
    9 citations
  10. Escape climate apathy by harnessing the power of generative AI. Quan-Hoang Vuong & Manh-Tung Ho - 2024 - AI and Society 39 (6):1-2.
    “Throw away anything that sounds too complicated. Only keep what is simple to grasp...If the information appears fuzzy and causes the brain to implode after two sentences, toss it away and stop listening. Doing so will make the news as orderly and simple to understand as the truth.” - In “GHG emissions,” The Kingfisher Story Collection, (Vuong 2022a).
    4 citations
  11. Africa, ChatGPT, and Generative AI Systems: Ethical Benefits, Concerns, and the Need for Governance. Kutoma Wakunuma & Damian Eke - 2024 - Philosophies 9 (3):80.
    This paper examines the impact and implications of ChatGPT and other generative AI technologies within the African context while looking at the ethical benefits and concerns that are particularly pertinent to the continent. Through a robust analysis of ChatGPT and other generative AI systems using established approaches for analysing the ethics of emerging technologies, this paper provides unique ethical benefits and concerns for these systems in the African context. This analysis combined approaches such as anticipatory technology ethics (ATE), (...)
  12. Liability to Deception and Manipulation: The Ethics of Undercover Policing. Christopher Nathan - 2016 - Journal of Applied Philosophy 34 (3):370-388.
    Does undercover police work inevitably wrong its targets? Or are undercover activities justified by a general security benefit? In this article I argue that people can make themselves liable to deception and manipulation. The debate on undercover policing will proceed more fruitfully if the tactic can be conceptualised along those lines, rather than as essentially ‘dirty hands’ activity, in which people are wronged in pursuit of a necessary good, or in instrumentalist terms, according to which the harms of undercover (...)
    3 citations
  13. Friend or foe? Exploring the implications of large language models on the science system. Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science (...)
    2 citations
  14. ChatGPT or Gemini: Who Makes the Better Scientific Writing Assistant? Hatoon S. AlSagri, Faiza Farhat, Shahab Saquib Sohail & Abdul Khader Jilani Saudagar - forthcoming - Journal of Academic Ethics:1-15.
    The rapid evolution of scientific research has created a pressing need for efficient and versatile tools to aid researchers. While using artificial intelligence (AI) to write scientific articles is unethical and unreliable due to the potential for inaccuracy, AI can be a valuable tool for assisting with other aspects of research, such as language editing, reference formatting, and journal finding. Two of the latest AI-driven assistants that have become indispensable assets to scientists are ChatGPT and Gemini (Bard). These assistants offer (...)
  15. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    34 citations
  16. Models of rational agency in human-centered AI: the realist and constructivist alternatives. Jacob Sparks & Ava Thomas Wright - 2025 - AI and Ethics 5.
    Recent proposals for human-centered AI (HCAI) help avoid the challenging task of specifying an objective for AI systems, since HCAI is designed to learn the objectives of the humans it is trying to assist. We think the move to HCAI is an important innovation but are concerned with how an instrumental, economic model of human rational agency has dominated research into HCAI. This paper brings the philosophical debate about human rational agency into the HCAI context, showing how more substantive ways (...)
  17. Ethical Problems of the Use of Deepfakes in the Arts and Culture. Rafael Cejudo - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 129-148.
    Deepfakes are highly realistic, albeit fake, audiovisual contents created with AI. This technology allows the use of deceptive audiovisual material that can impersonate someone’s identity to erode their reputation or manipulate the audience. Deepfakes are also one of the applications of AI that can be used in cultural industries and even to produce works of art. On the one hand, it is important to clarify whether deepfakes in arts and culture are free from the ethical dangers mentioned above. On the (...)
  18. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
    2 citations
  19. Ethical and legal challenges of AI in marketing: an exploration of solutions. Dinesh Kumar & Nidhi Suthar - 2024 - Journal of Information, Communication and Ethics in Society 22 (1):124-144.
    Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions. The paper synthesises information from academic articles, industry reports, (...)
    1 citation
  20. The dialectic of desire: AI chatbots and the desire not to know. Jack Black - 2023 - Psychoanalysis, Culture and Society 28 (4):607-618.
    Exploring the relationship between humans and AI chatbots, as well as the ethical concerns surrounding their use, this paper argues that our relations with chatbots are not solely based on their function as a source of knowledge, but, rather, on the desire for the subject not to know. It is argued that, outside of the very fears and anxieties that underscore our adoption of AI, the desire not to know reveals the potential to embrace the very loss AI avers. Consequently, (...)
  21. ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - 2024 - AI and Society 39 (5):2221-2231.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two phased approach of deconstruction (...)
    9 citations
  22. The Hazards of Putting Ethics on Autopilot. Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  23. Are All Deceptions Manipulative or All Manipulations Deceptive? Shlomo Cohen - 2023 - Journal of Ethics and Social Philosophy 25 (2).
    Moral reflection and deliberation on both deception and manipulation is hindered by lack of agreement on the precise meanings of these concepts. Specifically, there is disagreement on how to understand their relation vis-à-vis each other. Curiously, according to one prominent view, all deceptions are instances of manipulations, while according to another, all manipulations are instances of deceptions. This paper makes that implicit disagreement explicit, and argues that both views are untenable. It concludes that deception and manipulation partially overlap, (...)
    1 citation
  24. Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence. John Symons & Syed AbuMusab - 2024 - Digital Society 3:1-28.
    Ethically significant consequences of artificially intelligent artifacts will stem from their effects on existing social relations. Artifacts will serve in a variety of socially important roles—as personal companions, in the service of elderly and infirm people, in commercial, educational, and other socially sensitive contexts. The inevitable disruptions that these technologies will cause to social norms, institutions, and communities warrant careful consideration. As we begin to assess these effects, reflection on degrees and kinds of social agency will be required to make (...)
  25. ChatGPT: Temptations of Progress. Rushabh H. Doshi, Simar S. Bajaj & Harlan M. Krumholz - 2023 - American Journal of Bioethics 23 (4):6-8.
    ChatGPT is an artificial intelligence (AI) chatbot that processes and generates natural language text, offering human-like responses to a wide range of questions and prompts. Five days after its re...
    2 citations
  26. Plagiarism, Academic Ethics, and the Utilization of Generative AI in Academic Writing. Julian Koplin - 2023 - International Journal of Applied Philosophy 37 (2):17-40.
    In the wake of ChatGPT’s release, academics and journal editors have begun making important decisions about whether and how to integrate generative artificial intelligence (AI) into academic publishing. Some argue that AI outputs in scholarly works constitute plagiarism, and so should be disallowed by academic journals. Others suggest that it is acceptable to integrate AI output into academic papers, provided that its contributions are transparently disclosed. By drawing on Taylor’s work on academic norms, this paper argues against both views. Unlike (...)
    1 citation
  27. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Rowena Rodrigues & Anaïs Rességuier - 2020 - Big Data and Society 7 (2).
    Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. (...)
    33 citations
  28. How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Eva Weber-Guskar - 2021 - Ethics and Information Technology 23 (4):601-610.
    Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI-products are increasingly endowed with emotional characteristics. That is, they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions (EAI). The introduction of such systems in our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain (...)
    9 citations
  29. Ethical exploration of chatGPT in the modern K-14 economics classroom. Brad Scott & Sandy van der Poel - 2024 - International Journal of Ethics Education 9 (1):65-77.
    This paper addresses the challenge of ethically integrating ChatGPT, a sophisticated AI language model, into K-14 economics education. Amidst the growing presence of AI in classrooms, it proposes the “Evaluate, Reflect, Assurance” model, a novel decision-making framework grounded in normative and virtue ethics, to guide educators. This approach is detailed through a theoretical decision tree, offering educators a heuristic tool to weigh the educational advantages and ethical dimensions of using ChatGPT. An educator can use the decision tree to reach (...)
    1 citation
  30. Generative AI, Specific Moral Values: A Closer Look at ChatGPT’s New Ethical Implications for Medical AI. Gavin Victor, Jean-Christophe Bélisle-Pipon & Vardit Ravitsky - 2023 - American Journal of Bioethics 23 (10):65-68.
    Cohen’s (2023) mapping exercise of possible bioethical issues emerging from the use of ChatGPT in medicine provides an informative, useful, and thought-provoking trigger for discussions of AI ethic...
    2 citations
  31. All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    1 citation
  32. Facing Immersive “Post-Truth” in AIVR? Nadisha-Marie Aliman & Leon Kester - 2020 - Philosophies 5 (4):45.
    In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation were brought to the fore, leading to the construction of descriptive labels such as “post-truth” to refer to the supposedly new emerging era. Thereby, the (mis-)use of technologies such as AI and VR has been argued to potentially fuel this new loss of “ground-truth”, for instance, via the ethically relevant deepfakes phenomena and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, _unethical_ and (...)
  33. Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. Paul Formosa, Sarah Bankins, Rita Matulionyte & Omid Ghasemi - forthcoming - AI and Society.
    The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainties about how different degrees of Generative AI assistance in the production of text impacts assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel as part of a 3 (high, medium, (...)
    1 citation
  34. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Mohammad Hosseini, David B. Resnik & Kristi Holmes - 2023 - Research Ethics 19 (4):449-465.
    In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage (...)
    10 citations
  35. Death of a reviewer or death of peer review integrity? The challenges of using AI tools in peer reviewing and the need to go beyond publishing policies. Vasiliki Mollaki - 2024 - Research Ethics 20 (2):239-250.
    Peer review facilitates quality control and integrity of scientific research. Although publishing policies have adapted to include the use of Artificial Intelligence (AI) tools, such as Chat Generative Pre-trained Transformer (ChatGPT), in the preparation of manuscripts by authors, there is a lack of guidelines or policies on whether peer reviewers can use such tools. The present article highlights the lack of policies on the use of AI tools in the peer review process (PRP) and argues that we need to go (...)
    1 citation
  36. The Whiteness of AI. Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    32 citations
  37. The Ethics of ‘Deathbots’. Nora Freya Lindemann - 2022 - Science and Engineering Ethics 28 (6):1-15.
    Recent developments in AI programming allow for new applications: individualized chatbots which mimic the speaking and writing behaviour of one specific living or dead person. ‘Deathbots’, chatbots of the dead, have already been implemented and are currently under development by the first start-up companies. Thus, it is an urgent issue to consider the ethical implications of deathbots. While previous ethical theories of deathbots have always been based on considerations of the dignity of the deceased, I propose to shift the focus (...)
    8 citations
  38. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    2 citations
  39. The Ethics of Military Influence Operations. Michael Skerker - 2023 - Conatus 8 (2):589-612.
    This article articulates a framework for normatively assessing influence operations undertaken by national security institutions. Section I categorizes the vast field of possible types of influence operations according to the communication’s content, its attribution, the rights of the target audience, the communication’s purpose, and its secondary effects. Section II populates these categories with historical examples and section III evaluates these cases with a moral framework. I argue that deceptive or manipulative communications directed at non-liable audiences are presumptively immoral and illegitimate (...)
    1 citation
  40. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
    1 citation
  41. Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - 2024 - Episteme 21 (3):1031-1047.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
    7 citations
  42. AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - 2024 - Educational Philosophy and Theory 56 (9):828-862.
    1. Michael A. Peters, Beijing Normal University. ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (genera...
    4 citations
  43. The role of generative AI in academic and scientific authorship: an autopoietic perspective. Steven Watson, Erik Brezovec & Jonathan Romic - forthcoming - AI and Society:1-11.
    The integration of generative artificial intelligence (AI), particularly large language models like ChatGPT, presents new challenges as well as possibilities for scientific authorship. This paper draws on social systems theory to offer a nuanced understanding of the interplay between technology, individuals, society and scholarly authorial practices. This contrasts with orthodoxy, where individuals and technology are treated as essentialized entities. This approach offers a critique of the binary positions of sociotechnological determinism and accelerationist instrumentality while still acknowledging that generative AI presents (...)
  44. The Ethics of Democratic Deceit.Derek Edyvane - 2014 - Journal of Applied Philosophy 32 (3):310-325.
    Deception presents a distinctive ethical problem for democratic politicians. This is because there seem in certain situations to be compelling democratic reasons for politicians both to deceive and not to deceive the public. Some philosophers have sought to negotiate this tension by appeal to moral principle, but such efforts may misrepresent the felt ambivalence surrounding dilemmas of public office. A different approach appeals to the moral character of politicians, and to the variety of forms of manipulative communication at their (...)
  45. The ethics of ex-bots.Paula Sweeney - 2024 - AI and Society 39 (6):3055-3056.
    Imagine if, when heartbroken after a romantic partner leaves them, a person could continue the relationship with a chatbot or avatar version of that partner. This might seem a far-fetched scenario, but a little thought reveals, first, that this is a product that could plausibly make its way to the market and, second, that it would be harmful for both parties to the former relationship and plausibly abusive for the person who has been ‘bot-ed’ without their consent.
  46. Stoking fears of AI X-Risk (while forgetting justice here and now).Nancy S. Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky & Anita Ho - 2024 - Journal of Medical Ethics 50 (12):827-828.
    We appreciate the helpful commentaries on our paper, ‘AI and the falling sky: interrogating X-Risk’.1 We agree with many points commentators raise, which opened our eyes to concerns we had not previously considered. This reply focuses on the tension many commentators noted between AI’s existential risks (X-Risks) and justice here and now. In ‘Existential risk and the justice turn in bioethics’, Corsico frames the tension between AI X-Risk and justice here and now as part of a larger shift within bioethics.2 (...)
  47. Hybrid Ethics for Generative AI: Some Philosophical Inquiries on GANs.Antonio Carnevale, Claudia Falchi Delgado & Piercosma Bisconti - 2023 - Humana Mente 16 (44).
    Until now, the mass spread of fake news and its negative consequences, chiefly a loss of citizens' trust in institutions, have mainly involved textual content. Recently, a new type of machine learning framework has arisen: Generative Adversarial Networks (GANs), a class of deep neural network models capable of creating multimedia content (photos, videos, audio) that simulates real content with extreme precision. While there are several areas of worthwhile application of GANs – e.g., in the field of audio-visual production, human-computer (...)
  48. What is a subliminal technique? An ethical perspective on AI-driven influence.Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - Ieee Ethics-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target (...)
  49. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  50. Ethics of placebo use in clinical practice: why we need to look beyond deontology.Rosanna Plowman & Sally Spurr - 2021 - Journal of Medical Ethics 47 (4):271-273.
    Beneficent clinical usage of placebos has been a problem for the application of Kant’s deontology in medical ethics, which, in its strictest form, rejects deception universally. Some defenders of deontology have countered this by arguing that placebos can be used by a physician without necessarily being deceptive. In this paper we argue that such a manipulation of Kant’s absolutism is not credible, and therefore, that we should look beyond deontology in our consideration of placebo usage in clinical practice. We (...)