Results for 'Language model'

963 found
  1. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and (...)
    6 citations
  2. Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has (...)
    4 citations
  3. Large Language Models Demonstrate the Potential of Statistical Learning in Language.Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the (...)
    6 citations
  4. Large language models and linguistic intentionality.Jumbly Grindrod - 2024 - Synthese 204 (2):1-24.
    Do large language models like Chat-GPT or Claude meaningfully use the words they produce? Or are they merely clever prediction machines, simulating language use by producing statistically plausible text? There have already been some initial attempts to answer this question by showing that these models meet the criteria for entering meaningful states according to metasemantic theories of mental content. In this paper, I will argue for a different approach—that we should instead consider whether language models meet the (...)
    6 citations
  5. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
    7 citations
  6. Language Models as Critical Thinking Tools: A Case Study of Philosophers.Andre Ye, Jared Moore, Rose Novick & Amy Zhang - manuscript
    Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify ideas, and engineer new concepts? We treat philosophy as a case study in critical thinking, and interview 21 professional philosophers about how they engage in critical thinking and on their experiences with LMs. We find that philosophers do not find LMs (...)
  7. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    30 citations
  8. (1 other version)Creating a large language model of a philosopher.Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to (...)
    20 citations
  9. Large language models in medical ethics: useful but not expert.Andrea Ferrario & Nikola Biller-Andorno - 2024 - Journal of Medical Ethics 50 (9):653-654.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al. examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to (...)
    2 citations
  10. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on (...)
    2 citations
  11. Large language models and their big bullshit potential.Sarah A. Fisher - 2024 - Ethics and Information Technology 26 (4):1-8.
    Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of (...)
    2 citations
  12. AI language models cannot replace human research participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - 2024 - AI and Society 39 (5):2603-2605.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    2 citations
  13. Large Language Models and the Enhancement of Human Cognition: Some Theoretical Insights.Aistė Diržytė - 2025 - Filosofija. Sociologija 36 (1).
    This essay explores the possible contribution of Large Language Models (LLMs) to human cognition. It investigates whether human cognition can be enhanced by advanced AI systems such as LLMs. Can LLMs make people as learners smarter, or, on the contrary, make them reason/think less? The author discusses the concepts of human and artificial intelligence and examines LLMs as advanced AI systems, which use deep learning techniques and can be considered as excelling in neural network architectures, data volume, generalisation and (...)
  14. Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case.Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need (...)
    3 citations
  15. (1 other version)Large language models and their role in modern scientific discoveries.В. Ю Филимонов - 2024 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 1:42-57.
    Today, large language models are very powerful informational and analytical tools that significantly accelerate most existing methods and methodologies for processing information. Scientific information is of particular importance in this capacity, and it gradually draws on the power of large language models. This interaction between science and qualitatively new opportunities for working with information leads to new, unique scientific discoveries in great quantitative diversity. Scientific research is accelerating, with a reduction in the time spent (...)
  16. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, (...)
    44 citations
  17. Large Language Models: A Historical and Sociocultural Perspective.Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The subsequent (...)
  18. Generalization Bias in Large Language Model Summarization of Scientific Research.Uwe Peters & Benjamin Chin-Yee - forthcoming - Royal Society Open Science.
    Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study. We tested 10 prominent LLMs, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, LLaMA 3.3 70B, and Claude 3.7 Sonnet, comparing 4900 LLM-generated summaries (...)
  19. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - forthcoming - Social Epistemology.
    Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs’ benefits and drawbacks against human advisors, (...)
    1 citation
  20. Ontologies, arguments, and Large-Language Models.John Beverley, Francesco Franda, Hedi Karray, Dan Maxwell, Carter Benson & Barry Smith - 2024 - In Ítalo Oliveira, Joint Ontologies Workshops (JOWO). Twente, Netherlands: CEUR. pp. 1-9.
    The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and so forth. Accordingly, there is a growing interest in the use of ontologies and knowledge graphs to make LLMs more trustworthy. This rests on the long history of ontologies and knowledge graphs in constructing human-comprehensible justification for model outputs as well as traceability concerning the impact (...)
  21. Large Language Models and Inclusivity in Bioethics Scholarship.Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
    1 citation
  22. Do Large Language Models Know What Humans Know?Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version of the (...)
    4 citations
  23. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller, Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  24. Do Large Language Models Hallucinate Electric Fata Morganas?Kristina Šekrst - forthcoming - Journal of Consciousness Studies.
    This paper explores the intersection of AI hallucinations and the question of AI consciousness, examining whether the erroneous outputs generated by large language models (LLMs) could be mistaken for signs of emergent intelligence. AI hallucinations, which are false or unverifiable statements produced by LLMs, raise significant philosophical and ethical concerns. While these hallucinations may appear as data anomalies, they challenge our ability to discern whether LLMs are merely sophisticated simulators of intelligence or could develop genuine cognitive processes. By analyzing (...)
  25. Language, Models, and Reality: Weak existence and a threefold correspondence.Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to be understood as existing (...)
  26. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely.Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko & Alessandro Lenci - 2023 - Cognitive Science 47 (11):e13386.
    Word co‐occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs’ semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient interactions than (...)
    1 citation
  27. The rise of large language models: challenges for Critical Discourse Studies.Mathew Gillings, Tobias Kohn & Gerlinde Mautner - forthcoming - Critical Discourse Studies.
    Large language models (LLMs) such as ChatGPT are opening up new areas of research and teaching potential across a variety of domains. The purpose of the present conceptual paper is to map this new terrain from the point of view of Critical Discourse Studies (CDS). We demonstrate that the usage of LLMs raises concerns that definitely fall within the remit of CDS; among them, power and inequality. After an initial explanation of LLMs, we focus on three key areas of (...)
  28. Can Large Language Models Counter the Recent Decline in Literacy Levels? An Important Role for Cognitive Science.Falk Huettig & Morten H. Christiansen - 2024 - Cognitive Science 48 (8):e13487.
    Literacy is in decline in many parts of the world, accompanied by drops in associated cognitive skills (including IQ) and an increasing susceptibility to fake news. It is possible that the recent explosive growth and widespread deployment of Large Language Models (LLMs) might exacerbate this trend, but there is also a chance that LLMs can help turn things around. We argue that cognitive science is ideally suited to help steer future literacy development in the right direction by challenging and (...)
    1 citation
  29. Large language models for surgical informed consent: an ethical perspective on simulated empathy.Pranab Rudra, Wolf-Tilo Balke, Tim Kacprowski, Frank Ursin & Sabine Salloch - forthcoming - Journal of Medical Ethics.
    Informed consent in surgical settings requires not only the accurate communication of medical information but also the establishment of trust through empathic engagement. The use of large language models (LLMs) offers a novel opportunity to enhance the informed consent process by combining advanced information retrieval capabilities with simulated emotional responsiveness. However, the ethical implications of simulated empathy raise concerns about patient autonomy, trust and transparency. This paper examines the challenges of surgical informed consent, the potential benefits and limitations of (...)
  31. Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors.Nicholas Ichien, Dušan Stamenković & Keith J. Holyoak - 2024 - Metaphor and Symbol 39 (4):296-309.
    Despite the exceptional performance of large language models (LLMs) on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the interpretation of novel metaphors. Here we assessed the ability of GPT-4, a state-of-the-art large language model, to provide natural-language interpretations of a recent AI benchmark (Fig-QA dataset), novel literary metaphors drawn from Serbian poetry (...)
  32. Large Language models are stochastic measuring devices.Fintan Mallory - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
  33. Improving Language Models for Emotion Analysis: Insights from Cognitive Science.Constant Bonard & Gustave Cortal - 2024 - In Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke & Yohei Oseki, Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics. Bangkok: Association for Computational Linguistics. pp. 264–77.
    We propose leveraging cognitive science research on emotions and communication to improve language models for emotion analysis. First, we present the main emotion theories in psychology and cognitive science. Then, we introduce the main methods of emotion annotation in natural language processing and their connections to psychological theories. We also present the two main types of analyses of emotional communication in cognitive pragmatics. Finally, based on the cognitive science research presented, we propose directions for improving language models (...)
  34. Can large language models help solve the cost problem for the right to explanation?Lauritz Munch & Jens Christian Bjerring - forthcoming - Journal of Medical Ethics.
    By now a consensus has emerged that people, when subjected to high-stakes decisions through automated decision systems, have a moral right to have these decisions explained to them. However, furnishing such explanations can be costly. So the right to an explanation creates what we call the cost problem: providing subjects of automated decisions with appropriate explanations of the grounds of these decisions can be costly for the companies and organisations that use these automated decision systems. In this paper, we explore (...)
    1 citation
  35. Babbling stochastic parrots? A Kripkean argument for reference in large language models.Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a (...)
    1 citation
  36. Large Language Models to make museum archive collections more accessible.Manon Reusens, Amy Adams & Bart Baesens - forthcoming - AI and Society:1-13.
    Keywords are essential to the searchability and therefore discoverability of museum and archival collections in the modern world. Without them, the collection management systems (CMS) and online collections these cultural organisations rely on to record, organise, and make their collections accessible, do not operate efficiently. However, generating these keywords manually is time consuming for these already resource strapped organisations. Artificial intelligence (AI), particularly generative AI and Large Language Models (LLMs), could hold the key to generating, even automating, this key (...)
  37. Language Models and the Private Language Argument: a Wittgensteinian Guide to Machine Learning.Giovanni Galli - 2024 - Anthem Press:145-164.
    Wittgenstein’s ideas are a common ground for developers of Natural Language Processing (NLP) systems and linguists working on Language Acquisition and Mastery (LAM) models (Mills 1993; Lowney, Levy, Meroney and Gayler 2020; Skelac and Jandrić 2020). In recent years, we have witnessed the fast development of NLP systems capable of performing tasks as never before. NLP and LAM have been implemented based on deep learning neural networks, which learn concept representations from raw data but are nonetheless very effective (...)
     
  38. The use of large language models as scaffolds for proleptic reasoning.Olya Kudina, Brian Ballsun-Stanton & Mark Alfano - 2025 - Asian Journal of Philosophy 4 (1):1-18.
    This paper examines the potential educational uses of chat-based large language models (LLMs), moving past initial hype and skepticism. Although LLM outputs often evoke fascination and resemble human writing, they are unpredictable and must be used with discernment. Several metaphors—like calculators, cars, and drunk tutors—highlight distinct models for student interactions with LLMs, which we explore in the paper. We suggest that LLMs hold a potential in students’ learning by fostering proleptic reasoning through scaffolding, i.e., presenting a technological accompaniment in (...)
  39. Evaluating large language models’ ability to generate interpretive arguments.Zaid Marji & John Licato - forthcoming - Argument and Computation.
    In natural language understanding, a crucial goal is correctly interpreting open-textured phrases. In practice, disagreements over the meanings of open-textured phrases are often resolved through the generation and evaluation of interpretive arguments, arguments designed to support or attack a specific interpretation of an expression within a document. In this paper, we discuss some of our work towards the goal of automatically generating and evaluating interpretive arguments. We have curated a set of rules from the code of ethics of various (...)
  40. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI.Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  41. Conceptual Combination in Large Language Models: Uncovering Implicit Relational Interpretations in Compound Words With Contextualized Word Embeddings.Marco Ciapparelli, Calogero Zarbo & Marco Marelli - 2025 - Cognitive Science 49 (3):e70048.
    Large language models (LLMs) have been proposed as candidate models of human semantics, and as such, they must be able to account for conceptual combination. This work explores the ability of two LLMs, namely, BERT-base and Llama-2-13b, to reveal the implicit meaning of existing and novel compound words. According to psycholinguistic theories, understanding the meaning of a compound (e.g., “snowman”) involves its automatic decomposition into constituent meanings (“snow,” “man”), which are then connected by an implicit semantic relation selected from (...)
  42. Introspective Capabilities in Large Language Models.Robert Long - 2023 - Journal of Consciousness Studies 30 (9):143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' framework (...)
  43. Would You Pass the Turing Test? Mirroring Human Intelligence with Large Language Models. Renne Pesonen & Samuli Reijula - manuscript
    Can large language models be considered intelligent? Arguments against this proposition often assume that genuine intelligence cannot exist without consciousness, understanding, or creative thinking. We discuss each of these roadblocks to machine intelligence and conclude that, in light of findings and conceptualizations in scientific research on these topics, none of them rule out the possibility of viewing current AI systems based on large language models as intelligent. We argue that consciousness is not relevant for AI, while creativity and (...)
  44. Embodied Human Language Models vs. Large Language Models, or Why Artificial Intelligence Cannot Explain the Modal Be Able To. Sergio Torres-Martínez - 2024 - Biosemiotics 17 (1):185-209.
    This paper explores the challenges posed by the rapid advancement of artificial intelligence specifically Large Language Models (LLMs). I show that traditional linguistic theories and corpus studies are being outpaced by LLMs’ computational sophistication and low perplexity levels. In order to address these challenges, I suggest a focus on language as a cognitive tool shaped by embodied-environmental imperatives in the context of Agentive Cognitive Construction Grammar. To that end, I introduce an Embodied Human Language Model (EHLM), (...)
  45. Learning Alone: Language Models, Overreliance, and the Goals of Education. Leonard Dung & Dominik Balg - manuscript
    The development and ubiquitous availability of large language model based systems (LLMs) poses a plurality of potentials and risks for education in schools and universities. In this paper, we provide an analysis and discussion of the overreliance concern as one specific risk: that students might fail to acquire important capacities, or be inhibited in the acquisition of these capacities, because they overly rely on LLMs. We use the distinction between global and local goals of education to guide our (...)
  46. LLMs, Turing Tests and Chinese Rooms: The Prospects for Meaning in Large Language Models. Emma Borg - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Discussions of artificial intelligence have been shaped by two brilliant thought-experiments: Turing’s Imitation Test for thinking systems and Searle’s Chinese Room Argument. In many ways, debates about large language models (LLMs) struggle to move beyond these original, opposing thought-experiments. So, in this paper, I ask whether we can move debate forward by exploring the features Sceptics about LLM abilities take to ground meaning. Section 1 sketches the options, while Sections 2 and 3 explore the common requirement for a robust (...)
  47. Combining Prompt-Based Language Models and Weak Supervision for Labeling Named Entity Recognition on Legal Documents. Vitor Oliveira, Gabriel Nogueira, Thiago Faleiros & Ricardo Marcacini - forthcoming - Artificial Intelligence and Law:1-21.
    Named entity recognition (NER) is a very relevant task for text information retrieval in natural language processing (NLP) problems. Most recent state-of-the-art NER methods require humans to annotate and provide useful data for model training. However, using human power to identify, circumscribe and label entities manually can be very expensive in terms of time, money, and effort. This paper investigates the use of prompt-based language models (OpenAI’s GPT-3) and weak supervision in the legal domain. We apply both (...)
  48. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper (...)
  49. AI Enters Public Discourse: A Habermasian Assessment of the Moral Status of Large Language Models. Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  50. On Pitfalls (and Advantages) of Sophisticated Large Language Models. Anna Strasser - 2024 - In Joan Casas-Roma, Santi Caballe & Jordi Conesa (eds.), Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends. Academic Press.
    Natural language processing based on large language models (LLMs) is a booming field of AI research. After neural networks have proven to outperform humans in games and practical domains based on pattern recognition, we might stand now at a road junction where artificial entities might eventually enter the realm of human communication. However, this comes with serious risks. Due to the inherent limitations regarding the reliability of neural networks, overreliance on LLMs can have disruptive consequences. Since it will (...)