Results for 'AI responsibility & authorship'

10 found
  1. Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. Paul Formosa, Sarah Bankins, Rita Matulionyte & Omid Ghasemi - forthcoming - AI and Society.
    The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainties about how different degrees of Generative AI assistance in the production of text impacts assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel as part of a 3 (high, (...)
    2 citations
  2. Responsibility is not required for authorship. Neil Levy - 2025 - Journal of Medical Ethics 51 (4):230-232.
    The Committee on Publication Ethics (COPE) maintains that AIs (artificial intelligences) cannot be authors of academic papers, because they are unable to take responsibility for them. COPE appears to have the _answerability_ sense of responsibility in mind. It is true that AIs cannot be answerable for papers, but responsibility in this sense is not required for authorship in the sciences. I suggest that ethics will be forced to follow suit in dropping responsibility as a criterion (...)
  3. Responsible guidelines for authorship attribution tasks in NLP. Vageesh Saxena, Aurelia Tamò-Larrieux, Gijs Van Dijck & Gerasimos Spanakis - 2025 - Ethics and Information Technology 27 (2).
    Authorship Attribution (AA) approaches in Natural Language Processing (NLP) are important in various domains, including forensic analysis and cybercrime. However, they pose Ethical, Legal, and Societal Implications/Aspects (ELSI/ELSA) challenges that remain underexplored. Inspired by foundational AI ethics guidelines and frameworks, this research introduces a comprehensive framework of responsible guidelines that focuses on AA tasks in NLP, which are tailored to different stakeholders and development phases. These guidelines are structured around four core principles: privacy and data protection, fairness and non-discrimination, (...)
  4. Mind Design, AI Epistemology, and Outsourcing. Steven Gubka, Garrett Mindt & Susan Schneider - 2025 - Social Epistemology.
    From brain machine interfaces to neural implants, present and future technological developments are not merely tools, but will change human beings themselves. Of particular interest is human integration with AI. In this paper, we focus on enhancements that enable us to outsource epistemic work to AI. How does outsourcing epistemic work to enhancements affect the authorship of and responsibility for the final product of that work? We argue that in the context of performing and reporting research, outsourcing does (...)
  5. Authorship and ChatGPT: a Conservative View. René van Woudenberg, Chris Ranalli & Daniel Bracker - 2024 - Philosophy and Technology 37 (1):1-26.
    Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it (...)
    8 citations
  6. Generative AI entails a credit–blame asymmetry. Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan & Julian Savulescu - 2023 - Nature Machine Intelligence 5 (5):472-475.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
    4 citations
  7. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models. Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries into (...)
  8. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Mohammad Hosseini, David B. Resnik & Kristi Holmes - 2023 - Research Ethics 19 (4):449-465.
    In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would (...)
    11 citations
  9. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  10. Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy. Niina Zuber & Jan Gogoll - 2024 - Philosophies 9 (1):13.
    In the era of generative AI and specifically large language models (LLMs), exemplified by ChatGPT, the intersection of artificial intelligence and human reasoning has become a focal point of global attention. Unlike conventional search engines, LLMs go beyond mere information retrieval, entering into the realm of discourse culture. Their outputs mimic well-considered, independent opinions or statements of facts, presenting a pretense of wisdom. This paper explores the potential transformative impact of LLMs on democratic societies. It delves into the concerns regarding (...)
    1 citation