Contents
94 found (showing 1–50)
  1. What Good is Superintelligent AI? Tanya de Villiers-Botha - manuscript
    Extraordinary claims about both the imminence of superintelligent AI systems and their foreseen capabilities have gone mainstream. It is even argued that we should exacerbate known risks such as climate change in the short term in the attempt to develop superintelligence (SI), which will then purportedly solve those very problems. Here, I examine the plausibility of these claims. I first ask what SI is taken to be and then ask whether such SI could possibly hold the benefits often envisioned. I conclude (...)
  2. (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  3. A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure. Warmhold Jan Thomas Mollema - manuscript
    Whether related to machine learning models’ epistemic opacity, algorithmic classification systems’ discriminatory automation of testimonial prejudice, the distortion of human beliefs via the hallucinations of generative AI, the inclusion of the global South in global AI governance, the execution of bureaucratic violence via algorithmic systems, or located in the interaction with conversational artificial agents, epistemic injustice related to AI is a growing concern. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types (...)
  4. “i am a stochastic parrot, and so r u”: Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering? Warmhold Jan Thomas Mollema & Thomas Wachter - manuscript
    Understanding human behaviour, neuroscience and psychology using the concepts of ‘computer’, ‘software and hardware’ and ‘AI’ is becoming increasingly popular. In popular media and parlance, people speak of being ‘overloaded’ like a CPU, of ‘computing an answer to a question’, or of ‘being programmed’ to do something. Now, given the massive integration of AI technologies into our daily lives, AI-related concepts are being used to metaphorically compare AI systems with human behaviour and/or cognitive abilities like language acquisition. Rightfully, the epistemic success of (...)
  5. Recommender Systems as Commercial Speech: A Framing for US Legislation. Andrew West, Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Recommender Systems (RS) on digital platforms increasingly influence user behavior, raising ethical concerns around privacy risks, harmful content promotion, and diminished user autonomy. This article examines RS within the framework of regulations and lawsuits in the United States and advocates for legislation that can withstand constitutional scrutiny under First Amendment protections. We propose (re)framing RS-curated content as commercial speech, which is subject to lessened free speech protections. This approach provides a practical path for future legislation that would allow for effective oversight (...)
  6. Human Perception and The Artificial Gaze. Emanuele Arielli & Lev Manovich - forthcoming - In Emanuele Arielli & Lev Manovich, Artificial Aesthetics.
  7. ‘Interpretability’ and ‘Alignment’ are Fool’s Errands: A Proof that Controlling Misaligned Large Language Models is the Best Anyone Can Hope For. Marcus Arvan - forthcoming - AI and Society.
    This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible. Sections 2 and 3 show that because of how complex LLMs are, researchers must interpret their learned functions largely in (...)
  8. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  9. Risks Deriving from the Agential Profiles of Modern AI Systems. Barnaby Crook - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: Springer Nature.
    Modern AI systems based on deep learning are neither traditional tools nor full-blown agents. Rather, they are characterised by idiosyncratic agential profiles, i.e., combinations of agency-relevant properties. Modern AI systems lack superficial features which enable people to recognise agents but possess sophisticated information processing capabilities which can undermine human goals. I argue that systems fitting this description, when they are adversarial with respect to human users, pose particular risks to those users. To explicate my argument, I provide conditions under which (...)
  10. Tool, Collaborator, or Participant: AI and Artistic Agency. Anthony Cross - forthcoming - British Journal of Aesthetics.
    Artificial intelligence is now capable of generating sophisticated and compelling images from simple text prompts. In this paper, I focus specifically on how artists might make use of AI to create art. Most existing discourse analogizes AI to a tool or collaborator; this focuses our attention on AI’s contribution to the production of an artistically significant output. I propose an alternative approach, the exploration paradigm, which suggests that artists instead relate to AI as a participant: artists create a space for (...)
  11. Deontology and Safe Artificial Intelligence. William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  12. Digital Necrolatry: Thanabots and the Prohibition of Post-Mortem AI Simulations. Demetrius Floudas - forthcoming - Submissions to EU AI Office's Plenary Drafting the Code of Practice for General-Purpose Artificial Intelligence.
    The emergence of Thanabots — artificial intelligence systems designed to simulate deceased individuals — presents unprecedented challenges at the intersection of artificial intelligence, legal rights, and societal configuration. This short policy recommendations report examines the legal, social and psychological implications of these posthumous simulations and argues for their prohibition on ethical, sociological, and legal grounds.
  13. Home as Mind: AI Extenders and Affective Ecologies in Dementia Care. Joel Krueger - forthcoming - Synthese.
    I consider applications of “AI extenders” (Vold & Hernández-Orallo 2021) to dementia care. AI extenders are AI-powered technologies that extend minds in ways interestingly different from old-school tech like notebooks, sketch pads, models, and microscopes. I focus on AI extenders as ambiance: so thoroughly embedded into things and spaces that they fade from view and become part of a subject’s taken-for-granted background. Using dementia care as a case study, I argue that ambient AI extenders are promising because they afford richer (...)
  14. Generalization Bias in Large Language Model Summarization of Scientific Research. Uwe Peters & Benjamin Chin-Yee - forthcoming - Royal Society Open Science.
    Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study. We tested 10 prominent LLMs, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, LLaMA 3.3 70B, and Claude 3.7 Sonnet, comparing 4900 LLM-generated summaries to (...)
  15. Using artificial intelligence in health research. Daniel Rodger - forthcoming - Evidence-Based Nursing.
    Artificial intelligence is now widely accessible and already being used by healthcare researchers throughout various stages in the research process, such as assisting with systematic reviews, supporting data collection, facilitating data analysis and drafting manuscripts for publication. The most common AI tools used are forms of generative AI such as ChatGPT, Claude and Gemini. Generative AI is a type of AI that can generate human-like text, audio, videos, code and images based on text-based prompts inputted by a human user. Generative (...)
  16. The Global Brain Argument: Nodes, Computroniums and the AI Megasystem (Target Paper for Special Issue). Susan Schneider - forthcoming - Disputatio.
    The Global Brain Argument contends that many of us are, or will be, part of a global brain network that includes both biological and artificial intelligences (AIs), such as generative AIs with increasing levels of sophistication. Today’s internet ecosystem is but a hodgepodge of fairly unintegrated programs, but it is evolving by the minute. Over time, technological improvements will facilitate smarter AIs and faster, higher-bandwidth information transfer and greater integration between devices in the internet-of-things. The Global Brain (GB) Argument says (...)
  17. The biological objection against strong AI. Sebastian Sunday Grève - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    According to the biological objection against strong artificial intelligence (AI), machines cannot have human mindedness – that is, they cannot be conscious, intelligent, sentient, etc. in the precise way that a human being typically is – because this requires being alive, and machines are not alive. Proponents of the objection include John Lucas, Hubert Dreyfus, and John Searle. The present paper explains the nature and significance of the biological objection, before arguing that it currently represents an essentially irrational position.
  18. Why AI May Undermine Phronesis and What to Do about It. Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. The critic questions: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge such an objection but do not consider it seriously. In this (...)
  19. Explainable AI and stakes in medicine: A user study. Sam Baron, Andrew James Latham & Somogy Varga - 2025 - Artificial Intelligence 340 (C):104282.
    The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods, by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems in (...)
  20. Toward a pluralism of perspectives on AI: a book review of Stephen Cave & Kanta Dihal (eds.), “Imagining AI: How the World Sees Intelligent Machines”. [REVIEW] Manh-Tung Ho & Tung-Duong Hoang - 2025 - AI and Society (14 March 2025).
    Imagining AI: How the World Sees Intelligent Machines edited by Stephen Cave and Kanta Dihal (2023) was published as a result of the Global AI Narratives (GAIN) project of the Leverhulme Center for the Future of Intelligence, University of Cambridge. The book has 25 chapters grouped into four parts, corresponding to different geographical regions: Europe; the Americas and Pacific; Africa, Middle East, and South Asia; East and South East Asia. All chapters contributed comprehensively to the mission of showing “the imaginary (...)
  21. Political Power - Rethinking Nations and Governance. Peter Newzella - 2025 - Medium.
    This analysis explores the nature, inertia, power dynamics, and mechanisms of systemic entities. Systems such as nations or institutions emerge as phenotypes of collective human behavior, shaped by shared narratives and psychological dynamics, relying on embodied human emotions and communal significance for cohesion. Institutions resist change due to vested interests and stakeholders equating system survival with personal survival; this inertia ensures stability but impedes rapid adaptation. Power operates not only hierarchically but through peer-driven social rewards and punishments, with ostracism serving (...)
  22. (1 other version) Artificial Intelligence (AI) and Global Justice. Siavosh Sahebi & Paul Formosa - 2025 - Minds and Machines 35 (4):1-29.
    This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on low- to middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the (...)
  23. The AI-mediated communication dilemma: epistemic trust, social media, and the challenge of generative artificial intelligence. Siavosh Sahebi & Paul Formosa - 2025 - Synthese 205 (3):1-24.
    The rapid adoption of commercial Generative Artificial Intelligence (Gen AI) products raises important questions around the impact this technology will have on our communicative interactions. This paper provides an analysis of some of the potential implications that Artificial Intelligence-Mediated Communication (AI-MC) may have on epistemic trust in online communications, specifically on social media. We argue that AI-MC poses a risk to epistemic trust being diminished in online communications on both normative and descriptive grounds. Descriptively, AI-MC seems to (roughly) lower levels (...)
  24. The Heart of an AI: Agency, Moral Sense, and Friendship. Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as moral agents in the (...)
  25. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  26. Synthetic Socio-Technical Systems: Poiêsis as Meaning Making. Piercosma Bisconti, Andrew McIntyre & Federica Russo - 2024 - Philosophy and Technology 37 (3):1-19.
    With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets have enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society. In this paper, we explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization of socio-technical (...)
  27. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact. Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
  28. Sora: The Dawn of Digital Dreams. David Côrtes Cavalcante - 2024 - Takk™ Innovate Studio. Edited by David Côrtes Cavalcante. Translated by David Côrtes Cavalcante.
    In "Sora: The Dawn of Digital Dreams", humanity stands on the brink of a new epoch, where the OpenAI Sora technology interweaves the fabric of reality and imagination into a tapestry of digital dreams. Set against the backdrop of a futuristic metropolis, this narrative explores the duality of technological advancement—its power to create and to corrupt. As society navigates the blurred lines between the authentic and the artificial, "Sora: The Dawn of Digital Dreams" invites readers to ponder the essence of (...)
  29. Artificial Intelligence 2024–2034: What to expect in the next ten years. Demetrius Floudas - 2024 - 'AGI Talks' series at DaniWeb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In retort, he proposes a provocative International Control Treaty. According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  30. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes? Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  31. Five foundational premises of human–machine interaction in the age of artificial intelligence. Manh-Tung Ho & T. Hong-Kong Nguyen - 2024 - Tạp Chí Thông Tin Và Truyền Thông 4 (4/2024):84-91.
    This article introduces five foundational premises with the aim of raising awareness of the relationship between humans and machines in a context where technology increasingly transforms everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; the (...)
  32. Real Feeling and Fictional Time in Human-AI Interactions. Joel Krueger & Tom Roberts - 2024 - Topoi 43 (3).
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an (...)
  33. Chess AI does not know chess - The death of Type B strategy and its philosophical implications. Spyridon Kakos - 2024 - Harmonia Philosophica Articles.
    Playing chess was one of the first domains of human thinking to be conquered by computers. From the historic win of Deep Blue against chess champion Garry Kasparov until today, computers have completely dominated the world of chess, leaving no room for question as to who is the king in this sport. However, the better computers become at chess, the more obvious their basic disadvantage becomes: even though they can defeat any human in chess and play phenomenally great and intuitive (...)
  34. Just probabilities. Chad Lee-Stronach - 2024 - Noûs 58 (4):948-972.
    I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical evidence. To the contrary, I argue – by combining Thomson's (1986) causal analysis of legal evidence with formal methods of causal inference – that legal standards of proof can be reduced to probabilities, but that deriving these probabilities involves more than just statistics.
  35. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  36. EI & AI in Leadership and How It Can Affect Future Leaders. Ramakrishnan Vivek & Oleksandr P. Krupskyi - 2024 - European Journal of Management Issues 32 (3):174-182.
    Purpose: The aim of this study is to examine how the integration of Emotional Intelligence (EI) and Artificial Intelligence (AI) in leadership can enhance leadership effectiveness and influence the development of future leaders. Design / Method / Approach: The research employs a mixed-methods approach, combining qualitative and quantitative analyses. The study utilizes secondary data sources, including scholarly articles, industry reports, and empirical studies, to analyze the interaction between EI and AI in leadership settings. Findings: The findings reveal that (...)
  37. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  38. Philosophers Ought to Develop, Theorize About, and Use Philosophically Relevant AI. Graham Clay & Caleb Ontiveros - 2023 - Metaphilosophy 54 (4):463-479.
    The transformative power of artificial intelligence (AI) is coming to philosophy—the only question is the degree to which philosophers will harness it. In this paper, we argue that the application of AI tools to philosophy could have an impact on the field comparable to the advent of writing, and that it is likely that philosophical progress will significantly increase as a consequence of AI. The role of philosophers in this story is not merely to use AI but also to help (...)
  39. Humans in the meta-human era (Meta-philosophical analysis). Spyridon Kakos - 2023 - Harmonia Philosophica Papers.
    Humans are obsolete. In the post-ChatGPT era, artificial intelligence systems have replaced us in the last sectors of life that we thought were our personal kingdom. Yet, humans still have a place in this life. But they can find it only if they forget all those things that we believe make us unique. Only if we go back to doing nothing, can we truly be alive and meet our Self. Only if we stop thinking can we accept the Cosmos as (...)
  40. GOLEM XIV's forecast for the development of human civilisation and a typology of technological singularities [GOLEMA XIV prognoza rozwoju ludzkiej cywilizacji a typologia osobliwości technologicznych]. Rachel Palm - 2023 - Argument: Biannual Philosophical Journal 13 (1):75–89.
    The GOLEM XIV’s forecast for the development of the human civilisation and a typology of technological singularities: In the paper, a conceptual analysis of technological singularity is conducted and results in the concept differentiated into convergent singularity, existential singularity, and forecasting singularity, based on selected works of Ray Kurzweil, Nick Bostrom, and Vernor Vinge respectively. A comparison is made between the variants and the forecast of GOLEM XIV (a quasi-alter ego and character by Stanisław Lem) for the possible development of (...)
  41. Algorithmic Nudging: The Need for an Interdisciplinary Oversight. Christian Schmauder, Jurgis Karpus, Maximilian Moll, Bahador Bahrami & Ophelia Deroy - 2023 - Topoi 42 (3):799-807.
    Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to (...)
  42. Bare statistical evidence and the legitimacy of software-based judicial decisions.Eva Schmidt, Maximilian Köhl & Andreas Sesing-Wagenpfeil - 2023 - Synthese 201 (4):1-27.
    Can the evidence provided by software systems meet the standard of proof for civil or criminal cases, and is it individualized evidence? Or, to the contrary, do software systems exclusively provide bare statistical evidence? In this paper, we argue that there are cases in which evidence in the form of probabilities computed by software systems is not bare statistical evidence, and is thus able to meet the standard of proof. First, based on the case of State v. Loomis, we investigate (...)
  43. The future won’t be pretty: The nature and value of ugly, AI-designed experiments.Michael T. Stuart - 2023 - In Milena Ivanova & Alice Murphy, The Aesthetics of Scientific Experiments. New York, NY: Routledge.
    Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude (...)
  44. Teogonia technologiczna. Nominalistyczna koncepcja bóstwa dla transhumanizmu i posthumanizmu [Technological theogony: A nominalist concept of deity for transhumanism and posthumanism].Rachel 'Preppikoma' Palm - 2022 - In Kamila Grabowska-Derlatka, Jakub Gomułka & Rachel 'Preppikoma' Palm, PhilosophyPulp: Vol. 2. Kraków, Poland: Wydawnictwo Libron. pp. 129–143.
  45. AI-aesthetics and the Anthropocentric Myth of Creativity.Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
  46. Posthuman to Inhuman: mHealth Technologies and the Digital Health Assemblage.Jack Black & Jim Cherrington - 2022 - Theory and Event 25 (4):726--750.
    In exploring the intra-active, relational and material connections between humans and non-humans, proponents of posthumanism advocate a questioning of the ‘human’ beyond its traditional anthropocentric conceptualization. By referring specifically to controversial developments in mHealth applications, this paper critically diverges from posthuman accounts of human/non-human assemblages. Indeed, we argue that, rather than ‘dissolving’ the human subject, the power of assemblages lies in their capacity to highlight the antagonisms and contradictions that inherently affirm the importance of the subject. In outlining this (...)
  47. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  48. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  49. A Fuzzy-Cognitive-Maps Approach to Decision-Making in Medical Ethics.Alice Hein, Lukas J. Meier, Alena Buyx & Klaus Diepold - 2022 - 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
    Although machine intelligence is increasingly employed in healthcare, the realm of decision-making in medical ethics remains largely unexplored from a technical perspective. We propose an approach based on fuzzy cognitive maps (FCMs), which builds on Beauchamp and Childress’ prima-facie principles. The FCM’s weights are optimized using a genetic algorithm to provide recommendations regarding the initiation, continuation, or withdrawal of medical treatment. The resulting model approximates the answers provided by our team of medical ethicists fairly well and offers a high degree (...)
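    The fuzzy-cognitive-map mechanism described in entries 49 and 50 can be sketched in a few lines. The following is a minimal illustration only, assuming a hypothetical three-concept map with made-up influence weights; it is not the authors' trained model (their edge weights were fitted with a genetic algorithm against ethicists' judgements).

    ```python
    import numpy as np

    def fcm_step(a, W, l=1.0):
        """One synchronous FCM update: each concept aggregates the weighted
        influences flowing into it (column-wise in W), squashed by a sigmoid."""
        return 1.0 / (1.0 + np.exp(-l * (a + W.T @ a)))

    def run_fcm(a0, W, tol=1e-5, max_iter=200):
        """Iterate the update rule until the activations stabilise."""
        a = np.asarray(a0, dtype=float)
        for _ in range(max_iter):
            nxt = fcm_step(a, W)
            if np.max(np.abs(nxt - a)) < tol:
                return nxt
            a = nxt
        return a

    # Hypothetical 3-concept map: beneficence, patient refusal (autonomy),
    # and a "continue treatment" recommendation concept. Weights are invented.
    W = np.array([
        [0.0, 0.0,  0.7],   # beneficence pushes toward treatment
        [0.0, 0.0, -0.4],   # refusal pushes against treatment
        [0.0, 0.0,  0.0],   # recommendation has no outgoing influence
    ])
    state = run_fcm([0.9, 0.3, 0.5], W)  # final activations in (0, 1)
    ```

    In the published system such a converged activation for the recommendation concept is read off against a threshold; the interesting work lies in choosing the concepts and learning the weights, not in the iteration itself.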
  50. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)