Contents
62 found
1 — 49 / 62
  1. AI-Based Solutions for Environmental Monitoring in Urban Spaces.Hilda Andrea - manuscript
    The rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the current (...)
  2. Do Large Language Models Defend Inferentialist Semantics?: On the Logical Expressivism and Anti-Representationalism of LLMs.Yuzuki Arai & Sho Tsugawa - manuscript
    The philosophy of language, which has historically been developed through an anthropocentric lens, is now being forced to move towards post-anthropocentrism due to the advent of large language models (LLMs) like ChatGPT (OpenAI) and Claude (Anthropic), which are considered to possess linguistic abilities comparable to those of humans. Traditionally, LLMs have been explained through distributional semantics as their foundational semantics. However, recent research is exploring alternative foundational semantics beyond distributional semantics. This paper proposes Robert Brandom's inferentialist semantics as a suitable foundational (...)
  3. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  4. What Good is Superintelligent AI?Tanya de Villiers-Botha - manuscript
    Extraordinary claims about both the imminence of superintelligent AI systems and their foreseen capabilities have gone mainstream. It is even argued that we should exacerbate known risks such as climate change in the short term in the attempt to develop superintelligence (SI), which will then purportedly solve those very problems. Here, I examine the plausibility of these claims. I first ask what SI is taken to be and then ask whether such SI could possibly hold the benefits often envisioned. I conclude (...)
  5. What is AI safety? What do we want it to be?Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  6. On DeLancey’s The Passionate Engines: Affective engineering and counterfactual thinking. [REVIEW]Manh-Tung Ho - manuscript
    Craig DeLancey's The Passionate Engines presents a comprehensive account of “what basic emotions reveal about central problems of the philosophy of mind” (2001, p. vii). The book discusses five major issues: the affect program theory, intentionality, phenomenal consciousness, and artificial intelligence (AI). In this essay, I would like to briefly review the major tenets in the book and then focus on its discussion of AI, which has not been reviewed in detail. I outline some of the recent developments in cognitive (...)
  7. A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure.Warmhold Jan Thomas Mollema - manuscript
    Whether related to machine learning models’ epistemic opacity, algorithmic classification systems’ discriminatory automation of testimonial prejudice, the distortion of human beliefs via the hallucinations of generative AI, the inclusion of the global South in global AI governance, the execution of bureaucratic violence via algorithmic systems, or located in the interaction with conversational artificial agents, epistemic injustice related to AI is a growing concern. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types (...)
  8. “i am a stochastic parrot, and so r u”: Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering?Warmhold Jan Thomas Mollema & Thomas Wachter - manuscript
    Understanding human behaviour, neuroscience and psychology using the concepts of ‘computer’, ‘software and hardware’ and ‘AI’ is becoming increasingly popular. In popular media and parlance, people speak of being ‘overloaded’ like a CPU, ‘computing an answer to a question’, of ‘being programmed’ to do something. Now, given the massive integration of AI technologies into our daily lives, AI-related concepts are being used to metaphorically compare AI systems with human behaviour and/or cognitive abilities like language acquisition. Rightfully, the epistemic success of (...)
  9. Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought.Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  10. Can Word Models be World Models? Language as a Window onto the Conditional Structure of the World.Matthieu Queloz - manuscript
    LLMs are, in the first instance, models of the statistical distribution of tokens in the vast linguistic corpus they have been trained on. But their often surprising emergent capabilities raise the question of how much understanding of the extralinguistic world LLMs can glean from this statistical distribution of words alone. Here, I explore and evaluate the idea that the probability distribution of words in the public corpus offers a window onto the conditional structure of the world. To become a good (...)
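    To make the core idea concrete, here is a toy illustration (my own example, not the paper's): conditional next-word probabilities estimated from bigram counts in a miniature corpus track the conditional regularities that the corpus describes.

```python
# Toy illustration: conditional word probabilities estimated from a corpus.
# The corpus and the claim it illustrates are simplified for exposition.
from collections import Counter, defaultdict

corpus = ("if it rains the street gets wet . "
          "if it rains the match is cancelled . "
          "the sun shines and the street stays dry .").split()

# Count bigrams: bigrams[w1][w2] = number of times w2 follows w1.
bigrams: dict[str, Counter] = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def p(next_word: str, given: str) -> float:
    """Estimated conditional probability P(next_word | given)."""
    counts = bigrams[given]
    return counts[next_word] / max(sum(counts.values()), 1)

# Word statistics mirror worldly conditionals: in this corpus, "rains"
# is followed only by rain-consequences, never by "dry".
print(p("the", "rains"))   # 1.0
print(p("dry", "stays"))   # 1.0
```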
  11. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  12. A Hybrid Approach for Intrusion Detection in IoT Using Machine Learning and Signature-Based Methods.Janet Yan - manuscript
    Internet of Things (IoT) devices have transformed various industries, enabling advanced functionalities across domains such as healthcare, smart cities, and industrial automation. However, the increasing number of connected devices has raised significant concerns regarding their security. IoT networks are highly vulnerable to a wide range of cyber threats, making Intrusion Detection Systems (IDS) critical for identifying and mitigating malicious activities. This paper proposes a hybrid approach for intrusion detection in IoT networks by combining Machine Learning (ML) techniques with Signature-Based Methods. (...)
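    As a rough illustration of the hybrid architecture the abstract describes (a generic sketch, not the paper's implementation; all signatures, features, and thresholds below are hypothetical): flag traffic that matches a known attack signature, and fall back on an ML anomaly score for novel attacks.

```python
# Hypothetical sketch of a hybrid IDS decision rule: signature match first,
# ML anomaly score as fallback. Not the paper's actual system.
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    features: list[float]  # e.g. packet size, inter-arrival time, port entropy

# Toy stand-ins for a real signature database.
KNOWN_SIGNATURES = [b"\x90\x90\x90\x90", b"' OR 1=1 --"]

def signature_match(pkt: Packet) -> bool:
    """Cheap check against known attack byte patterns."""
    return any(sig in pkt.payload for sig in KNOWN_SIGNATURES)

def anomaly_score(pkt: Packet) -> float:
    """Stand-in for a trained model (e.g. isolation forest / autoencoder).
    Here: distance of the features from a 'normal' profile, squashed to [0, 1]."""
    normal = [500.0, 0.05, 1.0]
    dist = sum((f - n) ** 2 for f, n in zip(pkt.features, normal)) ** 0.5
    return min(dist / 1000.0, 1.0)

def is_intrusion(pkt: Packet, threshold: float = 0.8) -> bool:
    # Signatures catch known attacks cheaply; the ML score covers novel ones.
    return signature_match(pkt) or anomaly_score(pkt) >= threshold

pkt = Packet(payload=b"GET /index.html", features=[480.0, 0.04, 1.2])
print(is_intrusion(pkt))  # False: no signature hit, low anomaly score
```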
  13. ‘Interpretability’ and ‘Alignment’ are Fool’s Errands: A Proof that Controlling Misaligned Large Language Models is the Best Anyone Can Hope For.Marcus Arvan - forthcoming - AI and Society.
    This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in (...)
  14. Will Large Language Models Overwrite Us?Walter Barta - forthcoming - Double Helix.
  15. The Curious Case of Uncurious Creation.Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
    6 citations
  16. The Foundations of the Mentalist Theory and the Statistical Machine Learning Challenge: Comments on Matthias Mahlmann’s Mind and Rights.Vincent Carchidi - forthcoming - Symposium on Matthias Mahlmann's Mind and Rights.
    Matthias Mahlmann’s Mind and Rights (M&R) argues that the mentalist theory of moral cognition—premised on an approach to the mind most closely associated with generative linguistics—is the appropriate lens through which to understand moral judgment’s roots in the mind. Specifically, he argues that individuals possess an inborn moral faculty responsible for the principled generation of moral intuitions. These moral intuitions, once sufficiently abstracted, generalized, and universalized by individuals, gave rise to the idea of human rights embodied in such conventions as (...)
  17. Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback.Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - forthcoming - Proceedings of the Forty-First International Conference on Machine Learning.
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" (...)
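    One concrete shape the social-choice proposal could take (a sketch of a single aggregation rule, not the authors' method): combine annotators' conflicting rankings of model outputs with a Borda count before using them as alignment feedback.

```python
# Minimal Borda-count aggregation of annotators' rankings over model outputs.
# One classic social-choice rule among the many the paper considers.
from collections import defaultdict

def borda(rankings: list[list[str]]) -> list[str]:
    """Each ranking lists outputs from most to least preferred."""
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for place, output in enumerate(ranking):
            scores[output] += n - 1 - place  # top place earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Three annotators disagree about outputs A, B, C:
print(borda([["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]))
# -> ['B', 'A', 'C']: B wins overall despite not being unanimously preferred.
```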
  18. Conversations with Chatbots.P. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul, Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
    1 citation
  19. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  20. Distributional Semantics, Holism, and the Instability of Meaning.Jumbly Grindrod, J. D. Porter & Nat Hansen - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
    Large Language Models are built on the so-called distributional semantic approach to linguistic meaning that has the distributional hypothesis at its core. The distributional hypothesis involves a holistic conception of word meaning: the meaning of a word depends upon its relations to other words in the model. A standard objection to holism is the charge of instability: any change in the meaning properties of a linguistic system (a human speaker, for example) would lead to many changes or a complete change (...)
  21. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence.Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
    2 citations
  22. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    3 citations
  23. Large Language models are stochastic measuring devices.Fintan Mallory - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
  24. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini, Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
    2 citations
  25. Reflection, confabulation, and reasoning.Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo, Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  26. Reference without intentions in large language models.Jessica Pepp - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. During the 1960s and 1970s, Keith Donnellan (1966, 1970, 1974) and Saul Kripke ([1972] 1980) developed influential critiques of then-prevailing ‘description theories’ of reference. In place of s...
  27. Generalization Bias in Large Language Model Summarization of Scientific Research.Uwe Peters & Benjamin Chin-Yee - forthcoming - Royal Society Open Science.
    Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study. We tested 10 prominent LLMs, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, LLaMA 3.3 70B, and Claude 3.7 Sonnet, comparing 4900 LLM-generated summaries to (...)
  28. Language and thought: The view from LLMs.Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore, Oxford Studies in Philosophy of Language Volume 3.
  29. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  30. Do Large Language Models Hallucinate Electric Fata Morganas?Kristina Šekrst - forthcoming - Journal of Consciousness Studies.
    This paper explores the intersection of AI hallucinations and the question of AI consciousness, examining whether the erroneous outputs generated by large language models (LLMs) could be mistaken for signs of emergent intelligence. AI hallucinations, which are false or unverifiable statements produced by LLMs, raise significant philosophical and ethical concerns. While these hallucinations may appear as data anomalies, they challenge our ability to discern whether LLMs are merely sophisticated simulators of intelligence or could develop genuine cognitive processes. By analyzing the (...)
  31. Security practices in AI development.Petr Spelda & Vit Stritecky - forthcoming - AI and Society.
    What makes safety claims about general purpose AI systems such as large language models trustworthy? We show that rather than the capabilities of security tools such as alignment and red teaming procedures, it is security practices based on these tools that contributed to reconfiguring the image of AI safety and made the claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill (...)
  32. Deception and manipulation in generative AI.Christian Tarsney - forthcoming - Philosophical Studies.
    Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance spreading political misinformation on social media. In future, agentic AI systems might also deceive and manipulate humans for their own purposes. In this paper, first, I argue that AI-generated content should be subject to stricter standards against deception and manipulation than we ordinarily apply to humans. Second, I offer new characterizations of (...)
  33. Review of The Philosophy of Theoretical Linguistics: A Contemporary Outlook by Ryan M. Nefdt (CUP, 2024). [REVIEW]Keith Begley - 2025 - Linguist List 36 (243).
    The author, Nefdt (hereafter N), identifies as his target audience “advanced students of either philosophy or linguistics and experienced practitioners at the intersection between these fields” (p. x). N’s “goal is to provide not only a songbird’s-eye view of the interconnections between different subdisciplines and frameworks of linguistic theory but to showcase common problems and present novel analyses of the study of language that only a contemporary philosophical overview can offer” (p. ix). The book has xi + 231 pages, beginning (...)
  34. I Contain Multitudes: A Typology of Digital Doppelgängers.William D’Alessandro, Trenton W. Ford & Michael Yankoski - 2025 - American Journal of Bioethics 25 (2):132-134.
    Iglesias et al. (2025) argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”—that is, an AI system desig...
  35. Materiality and Machinic Embodiment: A Postphenomenological Inquiry into ChatGPT’s Active User Interface.Selin Gerlek & Sebastian Weydner-Volkmann - 2025 - Journal of Human-Technology Relations 3 (1):1-15.
    The rise of ChatGPT affords a fundamental transformation of the dynamics in human-technology interaction, as Large Language Model (LLM) applications increasingly emulate our social habits in digital communication. This poses a challenge to Don Ihde’s explicit focus on material technics and their affordances: ChatGPT did not introduce new material technics. Rather, it is a new digital app that runs on the same physical devices we have used for years. This paper undertakes a re-evaluation of some postphenomenological concepts, introducing the notion (...)
  36. AI wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2025 - Asian Journal of Philosophy 4 (1):1-22.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    13 citations
  37. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - 2025 - Philosophy and Technology 38 (34):1-27.
    A key assumption fuelling optimism about the progress of large language models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and (...)
    1 citation
  38. (1 other version)Artificial Intelligence (AI) and Global Justice.Siavosh Sahebi & Paul Formosa - 2025 - Minds and Machines 35 (4):1-29.
    This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on lowto middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the (...)
  39. The AI-mediated communication dilemma: epistemic trust, social media, and the challenge of generative artificial intelligence.Siavosh Sahebi & Paul Formosa - 2025 - Synthese 205 (3):1-24.
    The rapid adoption of commercial Generative Artificial Intelligence (Gen AI) products raises important questions around the impact this technology will have on our communicative interactions. This paper provides an analysis of some of the potential implications that Artificial Intelligence-Mediated Communication (AI-MC) may have on epistemic trust in online communications, specifically on social media. We argue that AI-MC poses a risk to epistemic trust being diminished in online communications on both normative and descriptive grounds. Descriptively, AI-MC seems to (roughly) lower levels (...)
  40. Attributions toward Artificial Agents in a modified Moral Turing Test.Eyal Aharoni, Sharlene Fernandes, Daniel Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias & Victor Crespo - 2024 - Scientific Reports 14 (8458):1-11.
    Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. (...)
    3 citations
  41. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use.Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
  42. Affective Artificial Agents as sui generis Affective Artifacts.Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    3 citations
  43. Large language models and linguistic intentionality.Jumbly Grindrod - 2024 - Synthese 204 (2):1-24.
    Do large language models like Chat-GPT or Claude meaningfully use the words they produce? Or are they merely clever prediction machines, simulating language use by producing statistically plausible text? There have already been some initial attempts to answer this question by showing that these models meet the criteria for entering meaningful states according to metasemantic theories of mental content. In this paper, I will argue for a different approach—that we should instead consider whether language models meet the criteria given by (...)
    6 citations
  44. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes?Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  45. (1 other version)Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs.Shun Iizuka - 2024 - Techné: Research in Philosophy and Technology 28 (2):219-235.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  46. Smart Route Optimization for Emergency Vehicles: Enhancing Ambulance Efficiency through Advanced Algorithms.R. Indoria - 2024 - Technosaga 1 (1):1-6.
    Emergency response times play a critical role in saving lives, especially in urban settings where traffic congestion and unpredictable events can delay ambulance arrivals. This paper explores a novel framework for smart route optimization for emergency vehicles, leveraging artificial intelligence (AI), Internet of Things (IoT) technologies, and dynamic traffic analytics. We propose a real-time adaptive routing system that integrates machine learning (ML) for predictive modeling and IoT-enabled communication with traffic infrastructure. The system is evaluated using simulated urban environments, achieving a (...)
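    A minimal sketch of the adaptive-routing idea (hypothetical, not the paper's system): run Dijkstra over a road graph whose edge weights are predicted travel times, which a live deployment would refresh from IoT traffic feeds.

```python
# Toy adaptive routing: Dijkstra over a road graph whose edge weights are
# (predicted) travel times in minutes. A live system would refresh these
# from traffic feeds; the graph below is made up for illustration.
import heapq

def shortest_route(graph: dict[str, dict[str, float]], src: str, dst: str):
    """graph[u][v] = predicted travel time from u to v."""
    dist = {src: 0.0}
    prev: dict[str, str] = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break  # a popped node's distance is final
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

roads = {
    "hospital": {"a": 4.0, "b": 2.5},
    "a": {"scene": 5.0},
    "b": {"a": 1.0, "scene": 9.0},  # congestion makes b->scene slow right now
    "scene": {},
}
print(shortest_route(roads, "hospital", "scene"))
# -> (['hospital', 'b', 'a', 'scene'], 8.5): the detour via b beats both
#    the direct b->scene leg and the hospital->a shortcut under current times.
```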
  47. Is Alignment Unsafe?Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to control. (...)
  48. Imagination, Creativity, and Artificial Intelligence.Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau, Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as consciousness, (...)
    1 citation
  49. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
    4 citations